Ternary Mathematics Principles: Truth Tables and Logical Operators, 3-D Placement of Logical Elements, Extensions of Boolean Algebra

The approach presented in this article aims at a transition between two systems of counting, binary and ternary. I propose to use a ternary principle in coding the signal: instead of the pair of digits 0 and 1, I propose to use the triplet (1, 0, -1) and to make a transition from binary to ternary, so that binary code is converted to ternary and vice versa. The same principle can be used for building microcircuits in which logical elements are placed in 3-D space instead of a single layer.

INTRODUCTION

As we know from the course of discrete mathematics, and in mathematical logic in particular, we build statements using operators such as conjunction (and), disjunction (or), negation (not), and (existential or universal) quantifiers. We formalize statements using the laws of logic in order to simplify and manipulate them. Since the 19th century, especially with the work on the algorithms of discrete mathematics by contributors such as George Boole (1815-1864), G. Frege, Bertrand Russell (1872-1970), Peano, and Whitehead (1861-1947), this formal approach laid the foundation for the new science named discrete mathematics, and with the appearance of the first programming machines and the use of binary arithmetic in the second half of the 20th century the contribution of those mathematicians proved invaluable, Boole [1]. With the development of computers, however, there arose a need for faster data processing and more efficient use of hardware. We have all witnessed the development of computing devices from primitive electric circuits and microcircuits to integrated circuits and potentially powerful nano-computers, Mendelson [3]. With that idea in mind, there appears to be a need for a new approach to data processing, and we try to present such an approach here.

METHODOLOGY

Let us take a look at some of the propositional operators, starting for example with the truth table of conjunction, Hilbert [4]. The compound statement in the last row is A&B, which is true only if both literals A and B are true.

Table 1. List of compound statements

Let us also take a look at the disjunction table. The conjunction, disjunction, and negation statements are abstract representations of the algorithms used in building processing machines, that is, circuits within integrated circuits, Shannon [5]. That is why propositional logic is so important in engineering and computer mathematics. Modern data processing, however, appears to require algorithms in a form different from 1-D linear circuits; it may call for three-dimensional (3-D) placement of the logical elements, for which the truth tables will look different, Gould. We propose a new type of relation between the elements of a logical expression, built, for example, on the principles of association and distribution. In this paper the author only tries to justify the use of triples of literals for making logical statements; further research is necessary for a full transition from binary to ternary mathematics, Lukasiewicz [6]. Here we go.

Table 3. A distributive principle

One should mention here that instead of the two values True and False, in our ternary table we use three operands T, ¬T, F (Truth, Truth negation, False). This will become clear when we substitute for them the corresponding numbers (1, -1, 0). In other words, for the literals we use not binary but ternary values, which can easily be converted to Boolean math by way of subtraction.
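As an illustration of this substitution (a minimal sketch, not part of the original article; the names OPERANDS and ternary_multiplication_table are illustrative only), the following code maps T, ¬T, F to the numbers 1, -1, 0 and enumerates the nine ordered pairs of operands, assuming that the ternary multiplication discussed below is ordinary integer multiplication on these values.

```python
from itertools import product

# Assumed mapping of the three ternary operands to numbers,
# as described in the text: T -> 1, ¬T -> -1, F -> 0.
OPERANDS = {"T": 1, "notT": -1, "F": 0}

def ternary_multiplication_table():
    """Enumerate the 9 ordered pairs of ternary operands and the result of
    multiplying them as ordinary integers (the assumed meaning of the
    ternary multiplication operator *)."""
    rows = []
    for (name_a, a), (name_b, b) in product(OPERANDS.items(), repeat=2):
        rows.append((name_a, name_b, a * b))
    return rows

if __name__ == "__main__":
    for name_a, name_b, result in ternary_multiplication_table():
        print(f"{name_a:>4} * {name_b:>4} = {result:>2}")
```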
(1) Example: suppose we have a string of 0-1 digits given by the sequence 010010, plus our last element A(B+C). Let us write it as 010010 + (-1) = 010000. In the same way we can add, subtract, and multiply binary numbers, converting our ternary formulas into binary form. What for? So that we can place our logical elements on two planes: the plane of binary numbers (x, y) = (0, 1) and the plane of the -1's.

As previously mentioned, two principles are offered for logical operations in the ternary system: one is the distributive principle mentioned above, and the other is the associative principle. Before we start to talk about the second principle, let us remind ourselves of the main mathematical operations used in discrete mathematics when dealing with logical expressions. They are logical addition and logical multiplication, most easily recognizable when statements are joined by the symbols ∨ and &. We are trying to do something similar and use the two principles, Gallier [7].

Back to the associative principle: we have three operands A, B, C that can be arranged as the combinations ABC, ACB, CBA, CAB, BAC, BCA = 3! arrangements. Now when we substitute the numbers 1, -1, and 0 for them, we will obviously get one and the same result, provided of course that we use multiplication as the operation on our elements, for example 1 · (-1) · 0 = 0; but let this not mislead you, as we can also have a chain such as ABB = 1 · 1 · 1, which equals 1. And if we combine our first, distributive method with our second, associative approach, then we can write any number given in binary code as a combination of the two:

010000 = 010010 + A(B+C) + ABC = 010010 + AB + AC + ABC.

Let us have a look at the new truth tables with two operators, which we shall call ternary addition and ternary multiplication. Let us take ternary multiplication first. The number of entries is different: it will obviously be 9. Choosing an appropriate sign would be necessary; for the time being let us mark it as (*).

Ternary addition will be a challenge from the mathematical standpoint. Let us use our distributive law: -1 is the product of T and ¬T, therefore we would present the sum of T and ¬T as the product of T and ¬T. Let us see what we can accomplish by doing so. Note: we have to accept the mathematical signs of addition (+) and multiplication (·) as representatives of conjunction (&) and disjunction (∨) in our Tables 1 and 2. So what will the table of ternary addition be? It is a 9-entry table, the same as for ternary multiplication, with the results column different of course. Coming back to our formula: (¬T + T) = T; respectively: ¬T(¬T + T) → ¬T · (¬T & T) → ¬T · T = ¬T. Now it is time to present our ternary addition table using logical operators. The ternary addition sign is chosen arbitrarily, just to distinguish it from conventional addition; in our table it is presented by its own symbol.

Our next step is an attempt to derive the formula for the arrays of ordered pairs. As shown before, our ternary tables consist of 9 rows, corresponding to permutations with replacement, nPr = n^r. In our case the total number of permutations with replacement, n^r, equals a decimal number; let us call it d, where d = n^r with r = 2, 3, 4, ... depending on which system we use (binary, ternary, etc.). From here n = √d (since n^2 = d for r = 2), or in general n = d^(1/r). What does our number n stand for? It stands for the number of arrays we choose from. Respectively, we can present our decimal number as the number n^r; the number of arrays is n = √d or, which is the same, n = d^(1/r), which we can also present as a binomial distribution. To conclude what has just been said, we present in our research a transition from one system of counting to another, Whitehead [8].
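To illustrate the transition between counting systems described above, here is a minimal sketch (not taken from the article; the function names are illustrative, and the balanced-ternary digit set (1, 0, -1) and the relation d = n^r are taken from the text above). It converts a decimal integer to balanced-ternary digits and back, and recovers the number of symbols n from the size d of a table of r-tuples.

```python
def to_balanced_ternary(n: int) -> list[int]:
    """Return the balanced-ternary digits (most significant first) of an
    integer n, using the digit set {-1, 0, 1}."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # represent 2 as -1 with a carry of 1
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits[::-1]

def from_balanced_ternary(digits: list[int]) -> int:
    """Evaluate balanced-ternary digits back to a decimal integer."""
    value = 0
    for d in digits:
        value = value * 3 + d
    return value

def base_from_table_size(d: int, r: int) -> float:
    """Given d = n**r rows in a truth table built from r-tuples of symbols,
    recover the number of symbols n = d**(1/r)."""
    return d ** (1.0 / r)

if __name__ == "__main__":
    assert from_balanced_ternary(to_balanced_ternary(19)) == 19
    print(to_balanced_ternary(19))     # [1, -1, 0, 1], i.e. 27 - 9 + 0 + 1
    print(base_from_table_size(9, 2))  # a ternary pair table of 9 rows: n = 3
```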
When we operate on decimal numbers, we may switch to binary, ternary, or any other number system. Such an approach is designed for processors that implement new principles of transmission and processing of data. In their turn, these principles can facilitate the development of existing computers and their architecture. This example shows how logic can be adapted to the needs of a developing industry and advance existing technology.

CONCLUSION

The approach presented in this article aims at a transition between two systems of counting, binary and ternary. It is no secret that mathematical methods of calculation develop together with industry, and with their development there is a need for new ways of transmitting, storing, and processing data. I deliberately omit the physical implementation in this introductory research: it is well known that the development of new counting systems needs new technical means, and the gap between the two can sometimes take decades of research and calculation to close. However, I suggest that for virtual machines and modeling, perhaps for cryptography, and for adaptation to nano-computer systems, this approach can add some value.
MAIT Cells in Barrier Tissues: Lessons from Immediate Neighbors

Mucosal-associated invariant T (MAIT) cells are innate-like T cells present at considerable frequencies in human blood and barrier tissues, armed with an expanding array of effector functions in response to homeostatic perturbations. Analogous to other barrier immune cells, their phenotype and function are driven by crosstalk with host and dynamic environmental factors, most pertinently the microbiome. Given their distribution, they must function in diverse extracellular milieus. Tissue-specific and adapted functions of barrier immune cells are shaped by transcriptional programs and regulated through a blend of local cellular, inflammatory, physiological, and metabolic mediators unique to each microenvironment. This review compares the phenotype and function of MAIT cells with other barrier immune cells, highlighting potential areas for future exploration. Appreciation of MAIT cell biology within tissues is crucial to understanding their niche in health and disease.

Barrier surfaces are sites of cross-talk between the host and diverse external environments. MAIT cells are found in the intestine, skin, respiratory, oral and female genital mucosa, which all house microbial communities adapted to the local environment and in symbiosis with the host (12) (Table 1). MAIT cells are dependent on this microbiome and are part of a community of tissue immune cells anatomically close to epithelial surfaces (20,35), all poised for rapid effector functions to maintain tissue homeostasis (36,37). The range of MAIT cell effector functions is only just being explored. It is increasingly appreciated that resident macrophages and lymphocytes are in constant cross-talk with tissues, integrating cues from the local microbiome, cellular, environmental and metabolic milieus for their development and function (38)(39)(40)(41)(42)(43)(44). In this review, we show that MAIT cells occupy a similar niche, engage in similar cross-talk and could sense similar factors (Figure 1, Table 2). Furthermore, we draw parallels with other barrier lymphocytes to explore tissue-specific factors which could modulate their function at mucosal surfaces, with the hypothesis that there are unexplored functional and metabolic adaptations for their survival and execution of tissue-specific effector functions. Understanding these factors is important to expand our knowledge of their role in shaping tissue function in health and disease.

Table 2. Examples of tissue factors which could modify human MAIT cells in an analogous manner to other resident immune cells. Transcriptional expression of purported sensors*, in some cases including relevant non-sensor genes. All transcriptional datasets are from human blood MAIT cells, except (29), which is from matched BAL and blood MAIT cells.
Protein atlas, genes enriched in MAIT cells compared to other blood immune cells; Fergusson (45), genes enriched in CD161 + T cells; Park (46), genes enriched in CD161 + Va7.2 + compared to conventional T cells; Salou (47), genes enriched in MAIT cells compared to conventional CD8 + T cells; Hinks (48), genes enriched in MR1-tetramer + MAIT cells compared to CD8 + T cells; Hinks TCR (48), genes upregulated on 5-OP-RU activation of MAIT cells; Lamichhane E coli (49), genes upregulated on E coli activation of MAIT cells; Leng (20), genes enriched in MAIT cells stimulated with TCR-dynabeads; Sharma (50), genes upregulated with anti-CD3 or BCG stimulation; Lu (29), genes enriched in BAL MAIT cells from children with community acquired pneumonia (CAP). MAIT17 = type-17 MAIT cells; MAIT1 = type-1 MAIT cells. BARRIER TISSUES Mammalian barrier surfaces are diverse physiological, chemical and cellular niches. The skin is dry, lipid-rich and acidic, primarily due to epidermal fatty acids, with unique exposures to high-salt content, UV-radiation and large fluctuations in temperature (51,52). The female genital tract is even more stressed, driven by Lactobacillus sp.-derived lactic acid and hydrogen peroxide, with additional structural and immunological changes with the menstrual cycle, pregnancy and age (12,53). The digestive system varies along its length, both in terms of physical factors and the unique composition of the microbiome: the oral mucosa is subject to occasional physical trauma from mastication, while the gut has the largest microbial biomass in the body in addition to a nutrient rich environment consisting of bile acids and diet-derived metabolites. Finally, the lungs are subject to fluctuating mechanical shear stress from ventilation and gravitational posture dependent changes, and also differ in structure and microbial sterility from higher to lower order bronchi and alveoli (54). Most sites have underlying dense neural networks and varying concentrations of physiological parameters such as oxygen tension, lactate, glucose and amino acids depending on the specific location and inflammatory context; relative hypoxia is physiological in healthy skin hair follicles and the intestinal lumen, and induced in the face of increasing tissue demand during inflammation in all tissues. The inflammatory tissue microenvironment is further dysregulated by the catabolic processes necessary for generating an immune response, through excess nutrient consumption and generation of potentially toxic metabolic by-products. Moreover, these physical and biochemical changes interact with and shape the microbiome, therefore directly and indirectly interacting with resident immune cells, including MAIT cells. BARRIER MAIT CELLS Barrier tissue lymphocytes include innate, innate-like and adaptive immune cells (37). Activation is antigen-independent in natural killer (NK) and innate lymphoid cells (ILCs), and predominantly mediated through cytokines. Antigen-dependent mucosal protection is provided by CD4 + T helper 17 (T H 17), regulatory T (T reg ), and CD8 + tissue-resident memory (T RM ) and Tc17 cells. MAIT cells and other unconventional T cells, such as invariant natural killer T (iNKT) and gamma-delta T (gdT) cells, share features with both innate and adaptive cells. They can be activated in a TCR-dependent or independent manner (20,48,49,55), and are important for preserving tissue integrity and function in homeostasis. 
Ontogeny and Tissue Residency

Human MAIT cells in blood have a tissue-homing effector-memory phenotype (3), but the ontogeny of cells in barrier surfaces is less clear. Murine tissue MAIT cells and iNKT are resident populations in homeostasis and following infection, expressing the transcription factor retinoic acid-related orphan receptor (ROR) gt (RORgt). After parabiosis, almost all thymic, and the majority of splenic, hepatic, lymph node and RORgt + lung MAIT cells are host-derived, with some recirculation of RORgt − lung MAIT cells (47,56). Following Staphylococcus epidermidis (S. epidermidis) challenge, similar frequencies of murine skin MAIT, gdT, and iNKT cells are host-derived and thus tissue-resident (35). In humans, however, it is unclear to what extent tissue MAIT cells are permanently resident. Expression of aE-integrin (CD103), associated with CD8 + T RM cells, is rare in blood MAIT cells (<3%) and common, but not universal, among MAIT cells within the oral and gastric mucosa (13,15), skin (31,57), and lungs (27-29) (Table 1). Thus tissues in health may represent a mixture of resident and recirculating MAIT cells, which could vary micro-anatomically: although most epidermal MAIT cells express CD103 and CLA (cutaneous lymphocyte antigen), only half of dermal MAIT cells express CD103 (31). Resident populations may behave differentially in disease; for example, among bronchoalveolar CD8 + CD161 + Va7.2 + cells, which are mostly MAIT cells, only the CD103 + fraction is depleted in HIV infection (28). Studies on the formation and longevity of tissue MAIT cells in humans are challenging. Remarkably, some MAIT cells do migrate into fetal small intestine, liver and lung as early as the 2nd trimester, before exposure to the conventional commensal flora (25). Fetal tissue MAIT cells, unlike circulating MAIT cells in adults, are cycling in the steady state with appreciable Ki67 expression (25). It is unclear if these early tissue-resident MAIT cells could persist into adult life, similar to other fetal tissue-resident cells (58). One study using HLA allele mismatching to discriminate between donor- and recipient-derived T cells following small bowel transplantation showed that although the majority of tissue MAIT (CD161 + Va7.2 + ) cells are derived from the host long term, donor-derived cells can be found over a year after transplantation (59); this suggests that adult mucosal MAIT cells can persist for prolonged periods. The dynamics may of course vary in tissues with different cellular turnover, and analogous to the differential proliferative capacity and function of resident and monocyte-derived macrophages in tissues (60), the function of long-lived and newly formed MAIT cells may be distinct. It is also unclear if MAIT cells can leave tissues. Thoracic duct lymph-derived and blood MAIT cells are CCR7 − and present at comparable frequencies with overlapping TCR clonotypes (61), which suggests that MAIT cells in lymph directly derive from blood. This could imply either migration after transit through tissues, or direct CCR7-independent migration from blood through high endothelial venules. Further work is needed to understand the drivers for tissue migration, residency and persistence, including potential functional differences between long-lived and nascent tissue MAIT cells.
Tissue Phenotype and Cytokine Production MAIT cell expression of the NK-cell marker CD161 and the transcription factor RORgt (RORC) are features shared with other barrier tissue-resident cells, including T H 17, peripherally derived T reg (pT reg ), ILC3, iNKT and gdT cells (62)(63)(64)(65)(66)(67). CD161 + T cells share a transcriptional program for innate-like cytokineresponsiveness in the absence of TCR triggering, through high expression of cytokine receptors (e.g., IL-12R, IL-18R) driven by the transcription factor promyelocytic leukemia zinc finger protein (PLZF) (45,55). RORgt is crucial for barrier protective IL-17 production common to many mucosal lymphocytes (43,68). Unusually, in homeostasis many human MAIT cells coexpress the type-1 transcription factor T-bet (TBX21) and RORgt, whereas naïve SPF mice have two almost mutually exclusive populations (69,70): nearly all murine tissue resident MAIT cells are RORgt + T-bet − IL-17A producing MAIT17, with less numerous T-bet + RORgt − IFN-g producing MAIT1 found in the circulation (47). Following infection with intranasal Salmonella or Legionella, an increase in RORgt + T-bet + lung MAIT cells is observed, suggesting that the diverse stimuli that human MAIT cells receive may explain part of the differences between species (71). Functionally, human tissue MAIT cells are usually more activated than their circulating counterparts in homeostasis and capable of increased cytokine production (19, 23, 72) ( Table 1). Additionally, barrier MAIT cells in the female genital tract and oral mucosa are predominantly CD8 + cells biased towards type-17 function (13,32). Another study found that CD4 − CD8 − MAIT cells in healthy endometrium have a more mature phenotype with higher RORgt and lower T-bet expression (33). This skewed tissue biased phenotype is present from early development as fetal small intestine MAIT cells produce more cytokines (IFNg, IL-22) than their circulating counterparts in response to Escherichia coli (E. coli) (25). It is unclear however if this phenotype varies between barrier tissues in adults. Barrier specific heterogeneity is observed in mice: murine skin MAIT cells display a transcriptional profile distinct from lung or liver (35), and both colonic and lung MAIT cells produce higher IL-17 than non-barrier tissues, such as the liver and spleen (73). Ultimately, as IL-17A and IL-22 promote barrier integrity, it will be important to understand the relative contributions of pre-programmed transcriptional and environmental cues to MAIT cell heterogeneity observed in human tissues. Tissue Homeostasis and Tissue Repair Given their anatomical location and similarity to other tissueresident lymphocytes, it is perhaps unsurprising that MAIT cell effector function is important in tissue-homeostasis. This parallels tissue homeostatic roles for H2-M3 restricted CD8 + T cells (and T regs ) in mouse skin (74,75), and gdT cell populations in the lung and gut (76)(77)(78). In NOD mice, Mr1-deficient animals have impaired intestinal barrier integrity, implicating MAIT cells in maintaining surface homeostasis (79). This concept was further supported in a model of colonic graft versus host disease (GvHD); Mr1-deficient animals had increased proinflammatory donor-derived CD4 + T cell expansion and reduced tight junction expression (73). GvHD in human bone marrow transplant recipients is also associated with lower MAIT cell frequency (80,81), again potentially implicating a role in maintaining mucosal health. 
Further recent data from both mouse and human studies has identified an important role for MAIT cells in tissue repair. Constantinides et al. found that murine skin resident MAIT cells are enriched for a distinct tissue repair transcriptional signature, similar to that observed in the previously described H2-M3 restricted CD8 + T cells responsive to S. epidermidis-derived Nformylated peptides (35). To assess MAIT cell specific tissue repair in vivo without confounding skin gdT and H2-M3 restricted CD8 + cells which can perform analogous functions, Tcrd −/− mice were infected with a strain of S. epidermidis that fails to induce CD8 + H2-M3-recognizing T cells. Tissue repair in response to S. epidermidis, measured by epidermal tongue length growth after skin punch biopsy, was higher in Tcrd −/− compared to Mr1 −/− Tcrd −/− mice, implicating the additional MAIT cell deficiency. How do these data relate to human MAIT cells? Recently, murine and human MAIT cells were also found to have a shared tissue repair transcriptional profile (as seen with the H2-M3 restricted CD8 + T cells) on resolution of infection from Legionella longbeachae and with re-infection, suggesting significant functional parallels (48). Two additional studies in human MAIT cells showed activation of such tissue repair gene expression patterns predominantly following TCR-mediated triggering (20,49). In vitro assays of wound healing also revealed a functional repair role for MAIT cell derived soluble factors, which could be blocked using anti-MR1 antibodies (20). Taken together all these studies suggest that a local repair program analogous to other tissue resident cell types is active in MAIT cells and likely triggered in vitro through encounter with microbiota. This is supported by the remarkable observation in vivo that direct topical application of the MAIT cell TCR ligand 5-OP-RU alone prior to skin injury, in the absence of additional cytokines, is sufficient to selectively induce cutaneous MAIT cell expansion and expedite tissue repair (35). Much more work is needed to define the importance of this in human disease and also the exact mechanisms through which this large panel of soluble mediators exert their impact. In addition to tissue-repair functions in response to commensals in the absence of inflammation, there is evidence that innate-like T cells can regulate barrier surface homeostasis by shaping the microbial landscape. CD1d and intestinal iNKT cells influence murine intestinal homeostasis and microbial colonization, with reduced Bacteriodales colonization in iNKTdeficient mice (82). Mr1-deficient animals, which lack MAIT cells, have reduced intestinal microbial diversity, similar to that found in IL-17A deficient animals (73). Conversely, in obese mice MAIT cells seem to promote microbial dysbiosis and ileal barrier dysfunction. Fecal transplantation from obese Mr1deficient animals reduces barrier permeability in mice fed a high-fat diet, and the microbiome of obese Va19 +/− mice with a high MAIT cell frequency has lower Bifidobacteriaceae and Lactobacillaceae species (83). Given the importance of a diverse microbiome to human health, further work is needed on the interactions of MAIT cells and a healthy microbiome in maintaining tissue homeostasis. These expanding tissue-specific functions raise the tantalizing possibility that similar to other innate-like lymphocytes, MAIT cell barrier functions may be more diverse than initially appreciated (84,85). 
We know that gdT cells can remarkably promote stem-cell remodelling (86), adaptive thermoregulation in response to cold stress (87), and sympathetic nervous innervation (88). Furthermore, cytokine activated ILC3 can promote antigen specific CD4 + T cells responses directly in vitro through cell surface MHCII and co-stimulatory molecule expression (89). Indeed MAIT cells have the capacity to indirectly manipulate tissue adaptive responses, through dendritic cell maturation (90), and could potentially act as a sink for IL-7 similar to IL-7R + ILC to limit homeostatic proliferation and preserve TCR diversity in neighboring tissue T cells (91). In summary, MAIT cells in tissues are distinct from the population most frequently studied thus far in blood. The array of effector functions is expanding beyond the traditional cytotoxicity and cytokine production first described, and further understanding of the context in which these effector programs are engaged together with knowledge of how to modulate them are key to enable translation of MAIT cell biology to effective human therapeutics. Metabolism T cell development, differentiation, activation, function and survival are regulated by the cell intrinsic metabolism of glucose, amino acids and lipids (92)(93)(94). Nutrient availability differs between T cell compartments: circulating lymphocytes function in secondary lymphoid organs with high glucose and amino acid concentrations, whereas barrier tissues are more restrictive anatomically with variable nutrient composition. Permanent tissue-resident cells must therefore adapt to these niches for continued survival and proliferation which are also required for their tissue-specific effector functions. Activated peripheral blood MAIT cells upregulate glucose uptake as glycolysis is required for their cytotoxicity and IFN-g production (95,96), and human circulating MAIT cells have reduced mitochondrial activity compared to CD161 − CD8 + T cells (95). Amino acid metabolism is also crucial -peripheral MAIT cells express high levels of the L-type amino acid transporter, SLC7A5, and L-amino acid oxidase (96,97). The metabolism of tissue MAIT cells has however not been explored. Tissue MAIT cells may rely on OXPHOS, which correlates with TNF and IL-17 production in their circulating counterparts (96,98). Given low tissue glucose concentrations, many resident immune cells are adapted to oxidative phosphorylation of fatty acids abundant in the skin and intestine (99). Tissue-resident but not circulating memory CD8 + T cells require these exogenous fatty acids for their survival, and upregulate transporters and fatty acid binding proteins (FABP) necessary for long term maintenance (100,101). FABP isoform expression is tissuespecific, shared among resident T RM , IEL, ILC and gdT cells, with T RM able to modulate isotype specific expression on relocation to a new environment (102); Fabp4 and Fabp5 expression are enriched among skin resident cells, whereas Fabp1 is enriched among liver resident cells, including invariant NKT cells. Endogenous fatty acid metabolism is also important in IL-17 producing T H 17 (103), with murine Tc17 cells and human IL-17 producing bronchoalveolar MAIT cells also enriched for genes in fatty acid and lipid metabolism (29,104). It would therefore be logical to explore tissue MAIT cell mitochondrial and lipid metabolism in understanding their barrier specific effector functions. Tissue Stress Tissue stress includes homeostatic perturbations in metabolic or environmental factors. 
Examples include insufficient nutrients or accumulations of potentially toxic byproducts, including oxidative stress from excessive free radicals. Barrier tissues with an active immune response frequently have minor homeostatic perturbations tolerated by resident immune cells. Autophagy is a metabolic stress response that recycles intracellular proteins and provides an additional nutrient source advantageous in tissues and activating environments (93,105). MAIT cells in the liver have higher basal autophagy compared to their circulating counterparts, which may be required given the higher mitochondrial depolarization observed in stressed tissue-resident cells (106). In vitro, inhibition of autophagy reduces acquisition of a tissue-resident phenotype in circulating CD8 + T cells (106), thus enhanced autophagy may be a requirement for MAIT cell tissue survival. Tissue oxidative stress can also be mitigated in barrier tissues by xenobiotic transporters, such as multidrug resistance protein 1 (MDR1). MDR1 (ABCB1) is an ATP-binding cassette B1 drug resistance transporter expressed on IL-17 producing CD4 + T cells in the ileum, which protects against bile acid induced oxidative stress to maintain intestinal homeostasis (107); a subset of patients with ileal Crohn's have loss of function in MDR1, highlighting an important role in controlling tissue inflammation. MAIT cells and other CD161 + T cells also express high levels of MDR1 (3,45,108), and it would be interesting to evaluate in future studies whether this has similar implications for their survival and function in toxin rich niches. Tissue oxidative stress also produces free radicals and hydrogen peroxide (H 2 O 2 ). H 2 O 2 can be transported by the plasma membrane water channel, aquaporin 3 (AQP3), which is part of a core transcriptional signature shared among type-17 secreting NKT17, ILC3, gdT, and T H 17 cells in mice (109). Aqp3deficient T cells have impaired chemokine mediated trafficking to the skin (110), and AQP3 expression is higher in human CD161 + Va7.2 + MAIT cells compared to circulating CD161 − cells expressing the same TCRa (46). It is therefore possible that tissue stress regulates MAIT cell migration and survival in inflammatory tissues such as the skin. Finally, given the often overlapping functions, some adaptations may serve to prune the tissue response to only the most appropriate cells. Murine T H 17, iNKT and T RM are enriched for the purinergic receptor, P2RX7, which recognizes extracellular purines (ATP, NAD + ) released after cell lysis and has cell type specific effects (111)(112)(113). Purines released from high microbial turnover can regulate barrier specific immune cell function, promoting murine T H 17 differentiation (111) but inhibiting human ILC3 IL-22 production (114). P2RX7 expression on tissue iNKT and T RM however induces pyroptosis and limits immunopathology from bystander activation (113,115). As purinergic receptor expression is downregulated on TCR engagement, release of purines from tissue damage preferentially depletes bystander activated T RM (115). Over time this could serve to shape the barrier immune response by conserving predominantly T cells specific to regularly encountered antigens. Given their presence in tissues and potential for bystander activation in response to cytokines, MAIT cell function may also be shaped by variable adaptations to tissue damage. 
Indeed, one could imagine that as a mammal ages, MAIT cell TCR-dependent responses could be superseded by antigen-specific T RM populations.

Microbiome

Barrier surveillance of the commensal microbiome is essential for tissue immunity in homeostasis through shaping the composition and phenotype of innate and adaptive cells (116)(117)(118). Numerous mucosal cell types are perturbed in germ-free (GF) and antibiotic-treated mice, including RORgt-expressing tissue T reg , T H 17 and innate-like IL-17 producing gdT cells (119). Conversely, cohousing laboratory mice with wild mice to induce a more diverse microbiome promotes a human adult-like immune composition, with increased tissue innate and differentiated memory CD8 + T cell populations (120). We have known MAIT cells are also reliant on the microbiome since Treiner et al. discovered that they were absent in the lamina propria of GF mice (2). It was subsequently found that metabolites from riboflavin-synthesizing commensals, which engage the MAIT cell TCR, are necessary for most stages of MAIT cell intra-thymic development and subsequent peripheral expansion (56,121). Accordingly, the dominant murine population of RORgt + MAIT17, dependent on TCR-triggering for proliferation and function, is depleted in the thymus and tissues of GF mice, skewing the response to IFN-g production (56). Recolonization of GF mice with microbes can rapidly restore this RORgt + MAIT population. Remarkably, metabolites from skin riboflavin-synthesizing commensals, even in the absence of bacteria, can drive intra-thymic MAIT cell development remotely (56), in addition to sustaining the development and function of local skin-resident MAIT cells (35). There is, however, a narrow neonatal window until 3 weeks of age in which recolonization of GF mice can restore MAIT cell development; recolonization of adult mice with bacteria after 7 weeks does not increase MAIT cell frequencies in the skin. Although they have complementary functions, this may be due to a finite niche for competing innate-like T cells shaped by the early microbiome; Tcrd-deficient animals have increased tissue-resident iNKT and MAIT cell populations (35), Cd1d-deficient animals have increased splenic and thymic MAIT cells (121), while GF mice have increased iNKT cells (122). A competing or compensatory interaction is also supported by the massive expansion of Vd2 + T cells in a patient with a homozygous point mutation in MR1 and MAIT cell deficiency (123). Supporting complementary functions, human blood MAIT cell and iNKT cell frequencies positively correlate (124,125). Additional work is needed to clarify the relationship between innate-like T cells, and whether this is influenced by age, disease and ligand abundance. In humans, with a much lower frequency of tissue iNKT and gdT cells, it remains to be seen if this window period for reconstitution and niche exist in adulthood. BAL MAIT cells depleted in HIV are increased with ART; as ART partially restores a dysregulated lung microbiome (126), it is tempting to speculate this contributes to MAIT cell reconstitution. Peripheral MAIT cells, however, are not reconstituted by ART (21,24). Long-term reconstitution is also seen after allogeneic hematopoietic stem cell transplantation (81), dependent on the microbiome and potentially continued thymic output given the negative correlation with age.

Microbial Diversity and Pathogenicity

The mammalian microbiome is diverse and heterogeneous, varying between sites with different microenvironments.
Skin hair follicles, sweat glands and sebum promote distinct commensals and immune responses compared to the intestine (52, 127, 128); diet, age and antimicrobials all shape the gut microbial landscape (129). Furthermore, dysbiosis can result from changes in the tissue microenvironment which drives the expansion of more suitably adapted commensals. The appreciation of different microbial phyla to MAIT cell expansion and function is expanding. A screen of bacterial species in vitro found that the capacity to stimulate MAIT cells correlated with riboflavin secretion (130). Colonization of GF mice with Proteus mirabilis alone in the neonatal period is sufficient for MAIT cell expansion in the skin and lungs (35). In reality we have communities of microbes and there is evidence that increased microbial diversity is associated with improved MAIT cell reconstitution after allogeneic hematopoietic stem cell transplantation (81). This could partly be through a reduction in their activation induced cell death as in vitro microbial diversity has been shown to reduce MAIT cell activation (131). Testing common intestinal commensals in vitro has demonstrated that MAIT cell activation correlates with net riboflavin secretion, with higher diversity resulting in predominant riboflavin uptake and thus lower presentation to MAIT cells (131). This is supported by observations in apical periodontitis oral mucosa, where prominent riboflavinexpressing taxa correlate negatively with Va7.2-Ja33 and IL17A transcripts (132). Furthermore, Il17a-deficient mice, which have microbial dysbiosis and reduced barrier protection, actually have increased MAIT cell frequencies in the lung and colon. As IL17a-deficient mice have increased Candidatus Homeothermaceae and Bacteriodaceae, it is tempting to speculate that the composition of the microbiome is crucial for MAIT cell expansion (73). As microbial diversity varies between tissues in health and disease, normally high in healthy colon and reduced in dysbiosis and metabolic diseases, this could be a mechanism to manipulate MAIT cell function and through which they may contribute to pathology. In addition to diversity, healthy tissues are characterized by an intact barrier, disruption of which induces inflammation and could impact MAIT cell function. Most microbes are commensals living in symbiosis with the host; pathogens induce barrier disruption, inflammation and cytokine production. For example, colonization with commensal S. epidermidis does not induce inflammation and is important for tissue homeostasis and a MAIT cell tissue repair signature (35). However, stimulation of human MAIT cells in vitro with cytokines in addition to TCR engagement promotes an antimicrobial program, including the cytokine IL-26, capable of directly killing extracellular bacteria (133), and effector recruiting chemokines CXCL9 and CXCL10 (20,49). Thus, the pathogenicity of the human microbiome could tune the MAIT cell response, and further studies should assess the various pathogen-specific factors which induce antimicrobial rather than more tolerant repair effector programs. Microbial Metabolism and Environment Microbial metabolism is also shaped by the tissue microenvironment and could tune MAIT cell activation. One mechanism is through availability of TCR ligands derived from riboflavin synthesis: heat stress in Streptococcus pneumoniae induces expression of the riboflavin operon (134); and acid stress increases purine and folate metabolism, as well riboflavin uptake (19,131). 
Bacterial co-culture conditions influence their capacity to activate human MAIT cells in vitro (19); hypoxia, simulating the low oxygen tension in colonic crypts, stationary growth phase and hypercarbia increase MAIT cell activation, whereas hypoglycemia, acetate and pyruvate inhibit bacterial control (19). Even chemicals and pesticides, which cause gut dysbiosis on ingestion of food, have been found to increase E. coli-induced MAIT activation (135). MAIT cells could therefore survey the nature and state of the microbiome as a proxy measure of tissue health. Non-riboflavin microbial metabolites could also modulate tissue MAIT cells, contributing to tissue homeostasis or pathogenic inflammation. Lactobacilli, enriched in the female genital tract, produce high levels of L(+)-lactate; and both lactate and Lactobacilli-derived factors dampen Staphylococcus aureus (S. aureus)-induced MAIT cell activation in whole PBMC (136). Given that the antibacterial response of MAIT cells resident in the female genital tract is biased towards IL-17 and IL-22 production compared to their circulating counterparts (32), it is tempting to speculate that microbial derived factors at barrier surfaces might directly skew MAIT cell responses. There is evidence that other products of bacterial metabolism, short chain fatty acids (SCFA), can directly modulate barrier RORgt + immune cell responses (137). Acetate, propionate and butyrate are products of dietary fiber fermentation which signal via G-protein coupled receptors (GPCR) to inhibit histone deacetylases (HDACs) (137). These SCFA directly promote barrier preservation and can reverse some of the immune dysregulation in GF-mice; SCFA rescue the colonic T reg depletion seen in GF mice (138) and are capable of augmenting RORgt + T reg expansion (139)(140)(141) and ILC3 IL-22 production (142). Barrier cells such as ILC3 can directly sense acetate through Ffar2 (free fatty acid receptor 2) (142), which interestingly is also enriched transcriptionally in murine skin MAIT cells compared to CD4 + T cells (35). As SCFA reduce MAIT cell antimicrobial function in vitro (19), it would be of interest to determine if microbial metabolites could be sensed and tune MAIT cell responses in a TCR-independent manner. Dietary Factors In addition to the microbiome, mucosal immune cells are capable of directly sensing chemicals, including nutrients in the mammalian diet (39). Dietary-derived metabolites, particularly lipophilic compounds, rapidly diffuse and bind to intracellular ligand-dependent transcription factors and can regulate tissue resident cells (143); these include receptors for vitamin A (retinoic acid receptor, RAR), vitamin D (vitamin D receptor, VDR), and tryptophan metabolites (aryl-hydrocarbon receptor, AhR). Given their often-shared function and location, MAIT cells may also be regulated by these dietary factors. Vitamin A The fat soluble Vitamin A is enriched in human intestine and is important for mucosal health (39). Dietary vitamin A, as alltrans-retinol, retinyl esters, or b-carotene, is metabolized to bioactive all-trans-retinoic acid (ATRA) and 9-cis-retinoic acid (144), which bind to nuclear retinoic acid receptors RARa (RARA), RARb (RARB), and RARg (RARG) (39). In mice RA maintains barrier homeostasis by directly modulating T reg , T H 17 and ILC3 function. RORgt + pT reg development, homing and differentiation are RA dependent (145)(146)(147)(148), with dietary deficiency promoting T H 17 mediated tissue pathology (141). 
Similarly ILC3 gut homing (149), plasticity (150), and IL-22 mediated protection in DSS colitis are RA dependent (151). In humans, ATRA directly increases CD161 and gut homing CCR9 expression in a population of RORC + CD161 + colonic T reg associated with tissue repair (64). Vitamin A deficiency is also strongly associated epidemiologically with severe mucosal infections, which may partly be through direct effects on barrier protective immune cells. The role of RA in innate-like T cells is less clear. RA reduces invariant NKT induced-sterile tissue damage by inducing P2RX7 expression, rendering bystander but not TCR-activated cells more susceptible to extracellular ATP-induced pyroptosis (152). gdT cell function is also reduced: CD27 − gdT cell IL-17 production is inhibited by RA through reduction in IL-1R, IL-23R, and pSTAT3 expression (153). In IBD tissue however, RA levels correlate with increased gdT and MAIT cell function (IL-17, IFN-g) (154). As RARG is upregulated in blood MAIT cells relative to conventional T cells (46), vitamin A could conceivably modulate intestinal MAIT cell migration and function to ultimately maintain mucosal integrity in an analogous manner to neighboring CD161 + T cells. Vitamin D and Cholesterol Metabolites The lipophilic oxysterol derivate Vitamin D can be derived from the diet or photochemically synthesized in the skin (39), and binds to its heterodimeric receptor, composed of VDR and the retinoid X receptor (RXR). Immune cells, particularly those in the intestine and skin, are enriched for expression of the nuclear VDR which is reduced in inflammatory bowel disease and implicated in the moderation of mucosal inflammation (39,155). VDR expression is upregulated on TCR signaling and decreases type 17 associated immunopathology in humans and mice, increasing the ratio of T reg : T H 17 (156,157), inhibiting ILC3 IL-23R expression (158), and directly competing with NFATc1 for binding to the IL17A promoter (159). Vdr −/− mice also have impaired iNKT and CD8aa + IEL development, suggesting a broad role in tissue immunity (159)(160)(161). MAIT cell frequency and function may also be subject to regulation by vitamin D. MAIT cells triggered through their TCR upregulate VDR, either in vitro in humans or during acute Legionella longbeachae infection in mice (48). In asthmatic patients, seasonal fluctuations in peripheral MAIT cell frequency correlate with serum phytochemically derived vitamin D 3 levels and peak in August (162). In cystic fibrosis patients however, although baseline serum vitamin D3 correlates with peripheral MAIT cell CD38 expression, there was a trend for reduced MAIT cell frequency in those receiving oral vitamin D supplementation (163). Dietary and photochemically-derived vitamin D may differentially regulate MAIT cells in different compartments and activation stateswhether responding to commensals in homeostasis or during active inflammation. Other cholesterol derivatives of host or microbial metabolism, including oxysterols, are abundant in the intestine and can act as RORgt ligands to promote development and function of RORgt + intestinal cells (164). Stromal cells produce 7-a,25hydrocycholesterol, which binds to GPR183 expressing ILC3 to promote their migration in homeostasis (165). As tissue IFN-g producing MAIT cells transcriptionally express the oxysterol receptor GPR183 (29), oxysterol sensing may also functionally regulate MAIT cells. 
Bile acids are cholesterol-derived surfactants crucial for fat digestion that bathe the ileum as part of the enterohepatic circulation and regulate both the microbiota and mucosal immunity (166). Secondary bile acids derived from microbial metabolism, including deoxycholic acid (DCA) and lithocholic acid (LCA), can directly promote mucosal homeostasis by increasing colonic FOXP3 + RORgt + T reg (167). A screen of secondary bile acids also found that LCA derivatives can reduce the T H 17:T reg balance in the intestinal lamina propria, by directly blocking RORgt-induced T H 17 differentiation and promoting T reg Foxp3 expression and differentiation in a mitochondrial ROS-dependent manner (168). MAIT cell activation and PLZF expression negatively correlate with serum concentrations of conjugated bile acids in teenage children, and in vitro bile acids inhibit MAIT cell activation in response to E. coli (169), so it would be important to explore whether intestinal bile acids promote homeostatic MAIT cell responses against commensals within a healthy functioning symbiotic intestinal environment. Aryl Hydrocarbon Receptor Aryl hydrocarbon receptor (AhR) is a conserved ligand activated transcription factor highly expressed by cell types at barrier surfaces, in keeping with its role as an environmental sensor promoting mucosal integrity. Physiological AhR ligands include: indolederived ligands from dietary cruciferous vegetables; host and microbe-derived tryptophan-metabolites (e.g., kynurenine); and exogenous chemicals (40). Initially discovered and enriched in intestinal T H 17 and T reg (41,170), AhR signaling is also important for the function of mucosal IL-17 producing gdT, iNKT and ILC3 (66,171,172). Sensing of diverse environmental signals promotes T reg differentiation, IL-22 production, ILC3 survival and IEL homeostasis, thus promoting barrier integrity (173). Ahr-deficient mice have dysfunctional skin and intestinal gdT cells and absent IEL (172,174), with reduced capacity for T H 17 differentiation and IL-22 secretion. AhR expression in CD8 + T cells is crucial for T RM and IEL persistence in tissues (175,176), while cytokines upregulate AhR expression in NK cells and iNKT to promote cytotoxicity and IL-22 production respectively (66,177). The role of AhR in MAIT cells has only been tentatively explored. Ahr is dispensable for MAIT cell thymic development in mice (56). In HIV patients on ART, increased tryptophan catabolism and generation of the AhR ligand, kynurenine correlates with lower peripheral blood MAIT cell frequency and higher frequency of T reg (178). As AHR expression is higher in bronchoalveolar MAIT cells compared to matched circulating cells in children with pneumonia (29), further work should explore specifically whether tissue MAIT cells are selectively regulated by AhR in a similar manner to other IL-22 producing cells in particular. Lipids The predominant calorie source of diet can influence barrier immunity in mice. A high glucose diet exacerbates colitis by increasing mitochondrial metabolism to drive T H 17 differentiation (179). Mice fed a high fat diet also have increased T H 17 differentiation through induction of the lipid sensitive kinase, acetyl co-A carboxylase 1, crucial for de novo FA synthesis and oxidative phosphorylation (103,180). A ketogenic high fat diet however protects against influenza challenge and promotes improved lung barrier integrity associated with early lung gdT cell recruitment, expansion and barrier type-17 function (181). 
In addition to diet, tissue free fatty acids have also been shown to induce a regulatory phenotype in iNKT (182). Lung type-17 MAIT cells in the context of pneumonia are enriched in genes for OXPHOS, glycolysis, lipid efflux and translocation, while other MAIT cells are enriched in genes for steroid metabolism, fatty acid synthesis and lipid uptake (29). Further studies should investigate the regulation of MAIT cell function by lipids and metabolism.

Tissue Environment

Tissue immune cellular and soluble mediators, particularly cytokines, manipulate the function of MAIT cells and other resident populations (4). This is further nuanced by the confined shared niche occupied by resident cells, which compete for space and local survival signals (175). Tissue inflammation and infiltration of metabolically active, cytotoxic cells into this niche can disrupt homeostatic regulation by depleting nutrients (glucose, amino acids) and oxygen, producing waste products such as reactive oxygen species and lactate, which contributes to tissue acidosis (183,184). Resident immune cells are themselves in turn tuned by these non-immune tissue parameters.

Oxygen Sensing

Oxygen tension and regulation vary in vivo: blood and primary lymphoid organs have tightly regulated levels, whereas physiological hypoxia is observed in tissues such as the skin and intestine (38,52,185). Microbes can also indirectly induce colonic oxygen consumption through SCFA (186). Hypoxia regulates immune cells directly by preventing cytosolic degradation of the oxygen-sensing transcription factor, hypoxia-inducible factor (HIF1A). T cell upregulation of HIF1A expression is STAT3-dependent and promotes CD8 + T cell effector functions (187), T reg plasticity (188,189), and T H 17 differentiation through induction of glycolysis, Rorgt expression and Foxp3 proteasomal degradation (190,191). In vitro, sorted human MAIT cells co-cultured with proximal tubular epithelial cells are more activated in hypoxic conditions, with increased cytotoxicity albeit no difference in cytokine production (192). In children with pneumonia, bronchoalveolar MAIT cells have higher HIF1A expression compared to their blood counterparts, which correlates with their capacity for increased IL-17 production (29). Furthermore, the tissue repair signature enriched in MAIT cells engaged through their TCR includes upregulation of HIF1A in addition to factors associated with angiogenesis (VEGFA, PDGF2, CSF2) (48). It would seem plausible that tissue MAIT cells, likely to experience local hypoxia during the course of an immune response, could tune their effector functions accordingly to ultimately induce tissue repair and improve oxygenation.

pH

Although circulating pH is homeostatically maintained around pH 7.4, deviations are seen in tissues: healthy skin is acidic due to a high free-fatty acid content; and inflammation drives tissue acidosis through glycolytic products (52). Many immune cells possess mechanisms for proton sensing, including acid-sensing ion channels (ASIC), transient receptor potential (TRP) channels, and GPCRs (193). Among T cells, human MAIT cells and other CD161 + T cells share functionality and a conserved transcriptional signature by bulk microarray, which includes enrichment for two candidate GPCR proton sensors, GPR65 and GPR68 (45,194,195).
GPR65 may play an important role in RORgt + T cells; the Gpr65 promoter has a RORgt binding site (196), and Gpr65 expressing T cells regulate the development of EAE in mice which is driven by type 17 inflammation (197). Naïve Gpr65 −/− CD4 + T cells differentiated under T H 17 conditions, or memory Gpr65 −/− CD4 + T cells reactivated with IL-23 produce less IL-17A in vitro, and the adoptive transfer of Gpr65 −/− CD4 + T cells into Rag1 −/− recipients prior to EAE induction markedly delays and reduces disease (197). Another study however found that Gpr65-deficient mice develop exacerbated EAE, which was lost in the absence of iNKT (198); functionally deficient Gpr65 gfp/gfp but not Cd1d -/-Gpr65 gfp/gfp mice develop more severe disease compared to wild-type. It is particularly interesting to note that murine Gpr65 expression is important for CD4 + T survival in culture and highest in iNKT, followed by gdT and NK cells, suggesting a homeostatic role for acid sensing in innate-lymphoid cells. MAIT cells were not assessed in this study, but in humans share similarities with and are a hundred times more common than iNKT (67), thus may represent the most prominent GPR65 expressing cell. Indeed type 17 bronchoalveolar MAIT cells in children with pneumonia are enriched for GPR65 expression so future studies should explore whether acid-sensing can promote MAIT cell mucosal function (29). Lactate Increased lactate is concomitantly seen with acidosis in inflammation, and can be directly sensed by CD4 + T and CD8 + T cells through SLC5A12 and SLC16A1 transporters respectively; these function to inhibit T cell migration and potentially promote tissue retention (199)(200)(201). However in mice, cells with low glycolytic capability, including murine iNKT, show reduced survival under high lactate conditions in vitro (202). MAIT cells and other IL-23R + lymphocytes could also be indirectly regulated by lactic acid augmentation of TLRinduced IL-23p19 production (203), which would skew towards type 17 responses in tissues (32,204). Temperature Fever is a conserved response to infection and autoinflammatory disease across species. Although core temperature is rigorously regulated, peripheral tissues such as the skin where MAIT cells reside, are prone to deviations (205). High temperatures have long been known to enhance human lymphocyte proliferation and cytotoxicity in vitro (206), as well as CD8 + T cell differentiation and CD4 + T cell activation through increased membrane fluidity and reduced co-stimulation thresholds (207,208). In mice, antipyretics (aspirin, ibuprofen) inhibit T H 17 differentiation, with high temperatures selectively promoting inflammatory T H 17 differentiation and increased lung neutrophil recruitment (209). Pulmonary MAIT cells and IL-17A producing innate-like T cells clearly protect against bacterial and viral infections in mice, which become pyrexial during the course of a normal immune response (210)(211)(212). It is unclear if pharmacological or pathological alteration of this normal febrile response, or significant exposure to cold environments, could modulate the response of tissue MAIT cells. Electrolytes and Osmotic Stress Similar to pH, electrolytes such as sodium, potassium, and chloride are normally tightly regulated in blood. Elevated extracellular potassium is, however, found in necrotic tissues and tumors, which paralyses human cytotoxic T cell responses (213). 
It is also appreciated that salt (NaCl) concentration can be enriched in barrier tissues such as the skin, particularly during inflammation (44,52,214). A high salt diet increases EAE severity in mice due to increased TH17 differentiation from naïve precursors; direct salt-sensing ultimately induces TH17 IL-23R expression (215,216) and inhibits Treg differentiation (217,218). In humans, high salt in vitro augments both naïve CD4+ TH17 polarization and memory CD8+ T cell IL-17A production in response to TCR-activation in the absence of polarizing cytokines (219). Interestingly, the skin of patients with atopic dermatitis has higher salt concentrations, and salt promotes both TH2 and TH17 cytokine production and skin-homing CCR8 expression by TCR-activated memory CD4+ T cells, thus potentially linking the environment with pathogenic mucosal T cell responses (219). Salt also indirectly regulates mucosal T cell function through differential production of polarizing cytokines: osmotic stress increases macrophage IL-1b production and Th17 generation in mice (220); and humans with a fixed high salt diet have increased plasma IL-23 (221). As MAIT cells are IL-23+ T cells in the skin, it would be interesting to determine whether their responses are also skewed in a similar way by the tissue electrolyte composition and if this contributes to disease. Neuroendocrine System The dense peripheral neuronal network underlying barrier surfaces co-ordinates rapid, often reflex, responses to external insults, such as itch, pain or the cough reflex. Remarkably, peripheral nerves directly regulate tissue immunity through soluble factors, including neuropeptides: neuromedin U (NMU) modulates ILC2-mediated tissue protection (222,223); vasoactive intestinal peptide (VIP) increases ILC3-mediated epithelial barrier protection (224,225); and catecholamines have inhibitory and stimulatory effects on ILC2 and NK cells, respectively, via adrenoceptor beta-2 (ADRB2) (226).
FIGURE 2 | ... and proteins associated with tissue repair, such as the endoprotease furin. MAIT cell effector functions are controlled by the transcription factors PLZF, RoRgt, and Tbet. Importantly, while PLZF expression within MAITs is stable, expression of the homeostatic effector program is associated with increased expression of RoRgt and decreased expression of Tbet. Finally, TCR-mediated MAIT cell activation also leads to expression of HIF1A, another transcription factor associated with tissue repair. In addition to TCR-ligands and cytokines, several other factors have the potential to modulate MAIT cell activation. Bile acids and L-lactate were shown to generally reduce MAIT cell responses, while binding of Vitamin D to its receptor (VDR), the expression of which is upregulated in MAIT cells in response to TCR-signaling, has the potential to specifically inhibit the homeostatic response. In contrast, recognition of several other metabolites including AhR ligands, Vitamin A and lipids was associated with the expression of homeostatic effector molecules in other T cell populations and hence could positively influence the expression of these molecules in MAIT cells as well. Similarly, short-chain fatty acids (SCFA), a product of bacterial metabolism, were shown to stimulate production of IL-22 and expansion of RORgt-expressing lymphocytes in other immune cells, while reducing the antimicrobial function of MAIT cells, which could overall present a mechanism to preserve tissue homeostasis. Created with Biorender.com.
The enteric nervous system can also indirectly control resident lymphocytes, through nociceptor induced IL-18 and IL-23 expression (227,228). Given their location within this neuroimmune network, and expression of relevant receptors for cytokines and neuropeptides transcriptionally, MAIT cells could be subject to rapid manipulation by the nervous system. Growth factors also regulate tissues and could influence MAIT cells. One example is insulin-like growth factor 1 (IGF-1) signaling, which promotes STAT3 signaling and aerobic glycolysis to increase type-17 effector functionality of T H 17 and ILC3 (229). IGFbp4, an important modulator of IGF1 signaling, is enriched in murine RORgt + T H 17, T reg , and ILC3. In humans, IGFBP4 is enriched in CD161 + T cells, so may be an important regulator of MAIT cell type-17 functionality (45). Indeed insulin resistance and fasting insulin levels in obese children correlate with circulating IL-17A producing MAIT cells (230), which may imply that feedback circuits regulating tissue glucose metabolism play an important role in skewing MAIT cell function. External Environment UV-light can regulate the immune system (231), partly through photochemical synthesis of vitamin D. Additionally, UV light dampens inflammatory pathology in psoriasis and has been shown to degrade numerous photosensitive MAIT cell ligands, including folic-acid derived 6-FP (6-formyl-pterin) (232). Given the unstable nature of MAIT cell ligands, the impact of light on skin MAIT cell responses to commensals in particular deserves further attention. Finally, the circadian rhythm has a role in entraining barrier RORgt + cells (233). Clock genes regulate the RORC promoter to dictate T H 17 differentiation (234) and ILC3 function (235)(236)(237); and disruption of the light-dark cycle in mice exacerbates T H 17 IL-17A-dependent DSS colitis. Pathway enrichment for innatelike T cells suggests that circadian clock regulation is a shared feature among human innate-like T cells, with transcription factors ARNTL (encoding BMAL), RORA, PER1, and CRY1 enriched among MAIT, iNKT, and Vd2 + gdT cells (238). As circadian biology regulates mammalian behavior and exposure to environmental factors, including food, this could be particularly relevant to mucosal MAIT cell function. CONCLUSION MAIT cells serve an increasingly appreciated role in barrier tissues, yet the full range of effector functions remain to be determined. Similar to other mucosal lymphocytes, they are engaged in crosstalk with the tissues and microbiome via their TCR and through cytokine receptors (Figure 2). The context of this cross-talk in tissues, in addition to the array of increasingly recognized signals sensed by resident lymphocytes, suggest that other factors may influence MAIT cell activation, function, and plasticity. Indeed, these environmental factors were identified with mouse models that may have missed the impact on MAIT cell biology as these cells are infrequent in murine mucosal tissues. Humans, however, have an abundance of MAIT cells and in contrast to laboratory mice, are exposed to phenomenally diverse environmental factors unique to each individual. Exploring local environmental factors in addition to fixed pre-programmed factors in the investigation of MAIT cell tissue biology will be crucial to understanding the variability in humans and could pave the way for personalized therapies in the context of disease. 
AUTHOR CONTRIBUTIONS AA conceived the review, conducted the literature review, and wrote the bulk of the manuscript. DP contributed to writing of the manuscript. C-PH edited and revised the manuscript, and created the figures. PK contributed to the planning, editing, and scope of the review. All authors contributed to the article and approved the submitted version.
Investigating the mediating influence of distress tolerance on the relationship between existential thinking, sense of coherence, and the severity of mourning among families who lost a loved one to COVID‐19: A cross‐sectional study Abstract Background and Aims The objective of the current study was to examine how distress tolerance functions as a mediator in the relationship between existential thinking, sense of coherence, and the severity of mourning experienced by families who have lost a loved one to COVID‐19. Methods The present study employed a descriptive correlational research design, targeting family members of those who passed away due to COVID‐19 in the city of Mianeh in 2022. A sample of 160 individuals was selected for statistical analysis. The research instruments used in this study consisted of Flensberg's sense of coherence questionnaire (2006), Simmons and Gaher's emotional distress tolerance questionnaire (2005), Sugbart and Scott's grief experience questionnaire (1989), and Branton Scherer's existential thinking questionnaire (2006). The collected data were analyzed using path analysis, as well as SPSS and Amos software. Results The findings of the study revealed a significant correlation coefficient between existential thinking (r = 0.465), sense of coherence (r = 0.401), and distress tolerance (r = 0.521) with the severity of mourning experienced by families who lost a loved one to COVID‐19. Moreover, the results indicated a positive and significant relationship (p > 0.01) between distress tolerance and sense of coherence (r = 0.126), as well as between distress tolerance and existential thinking (r = 0.059) among the bereaved families. However, the bootstrap test results suggested that distress tolerance did not mediate the relationship between sense of coherence and the severity of mourning in the families of COVID‐19 victims. Conclusion Consistent with prior research, the current study's findings indicated that both existential thinking and sense of coherence had a direct impact on the severity of mourning experienced by families who lost a loved one to COVID‐19. Additionally, the results revealed that the influence of existential thinking on the severity of mourning was mediated indirectly by increasing distress tolerance. The structure of spiritual and existential thinking is considered to be one of the primary protectors of mental health during crises. Existential thinking refers to an individual's desire to contemplate fundamental life issues, such as the meaning and purpose of life, death, emptiness, and alienation. 11Researchers suggest that exploring these existential issues may help individuals endure distress by increasing their awareness of existential issues and emphasizing the importance of discussing and reflecting on these issues. 12Furthermore, the existential attitude can stimulate healthy behaviors by encouraging individuals to search for the meaning of life and suffering. 13Studies have also demonstrated that existential thinking plays a mediating role in the relationship between death anxiety and empathy with COVID-19 patients 14 and can enhance the quality of working life for those involved in caring for dying patients. 
15e lifestyle changes and intense grieving experienced by individuals who have lost loved ones due to COVID-19 have led to an increase in mental health-related problems among the family members of the deceased, which is a cause for concern.It is important to protect these survivors from the potential hardships associated with losing a loved one.However, possessing certain psychological capabilities can help reduce the duration and intensity of the grieving process, or mitigate its negative impact on an individual's life.Despite this, few studies have explored the psychological mechanisms that affect the bereavement experienced by families of COVID-19 victims, suggesting that the importance of this issue has been understudied in the field of mental health. Therefore, there is a need for research that can shed light on the psychological factors influencing the grieving process among these families.In light of this gap in the literature, the present study was conducted to investigate the mediating role of distress tolerance in the relationship between existential thinking, sense of coherence, and the severity of mourning experienced by families who have lost a loved one to COVID-19. | MATERIALS AND METHODS The present study was a descriptive correlational study conducted intermittently starting from the beginning of 2022 in the city of Tabriz.The study population consisted of family members Key points • The severity of grief among families who lost a loved one due to Covid-19 can be affected by the psychological factors of different people. • Existential thinking had a direct and indirect effect on the severity of grief, so existential psychotherapy can be a good help to increase psychological capacity and accelerate the process of grief treatment. of individuals who had died due to COVID-19 in Tabriz, including spouses, parents, children, and siblings of the deceased.A purposive sampling method was employed, whereby one of the main family members of the deceased, including the spouse, child over 18 years old, father, mother, sister, or brother, was contacted and informed about the research objectives and the confidentiality of the results.A total of 160 participants completed the questionnaires in person. To be eligible for the study, participants had to meet certain criteria, which included: the deceased must have died with a confirmed diagnosis of COVID-19 in 2022 or later; the participant must not have sought psychological help for a psychiatric disorder in the year before the incident; the participant must have had a firstdegree blood relationship with the deceased; and the participant must have provided consent to participate and complete the questionnaire. | Distress tolerance questionnaire Simmons and Gaher developed a self-report index consisting of 15 items and four subscales, which include emotional distress tolerance, absorption of negative emotions, estimation of mental distress, and adjustment of efforts to relieve distress.Respondents rate each item using a five-point Likert scale ranging from 1 (indicating complete agreement with the desired option) to 5 (indicating complete disagreement with the desired option). Higher scores on the scale suggest greater distress tolerance. The ⍺ coefficients for the subscales were 0.72, 0.82, 0.78, and 0.70, respectively, and 0.82 for the whole scale.After 6 months, the intra-class correlation was found to be 0.16.The scale also demonstrated good criterion validity and initial convergence, as indicated by previous research. 
5In local research, Cronbach's ⍺ coefficient for the whole scale was 0.93, and the coefficient obtained through the retest method was 0.61. 16 | Existential thinking questionnaire Branton Scherer developed the existential thinking questionnaire in 2006 to measure existential intelligence.The questionnaire uses a 6point Likert scale (always, almost always, often, sometimes, never, and I don't know) to assess subjects' engagement with existential concepts.The questionnaire consists of 11 items and a general subscale, and the total score ranges from 11 to 55, with higher scores indicating a greater degree of preoccupation with existential concepts.The original version of the questionnaire exhibited high reliability, with an internal consistency of 0.93.The questionnaire's single-factor structure has been supported by numerous studies. 17A local study found a Cronbach's ⍺ coefficient of 0.88 and a test−retest correlation coefficient of 0.75, indicating the questionnaire's good reliability. 11 | Grief experience questionnaire The grief experience questionnaire, first introduced by Barrett and Scott, consists of 34 questions designed to evaluate several aspects of bereavement.This questionnaire applies to various types of loss and death.The factors assessed include physical reactions, general grief reaction, search for an explanation, loss of support, labeling, guilt, responsibility, shame, rejection, self-destructive behavior, and unique reactions.In the original study by Barrett and Scott, the internal consistency of the questionnaire was found to be high, with a Cronbach's ⍺ coefficient of 0.97.Cronbach's ⍺ coefficients for the 11 factors were as follows: physical reactions (0.79), general grief reactions (0.68), trying to find an explanation (0.68), loss of support (0.86), being labeled (0.88), guilt (0.89), responsibility (0.88), shame (0.83), rejection (0.87), self-destructive behavior (0.79), and unique reactions (0.79). 18 | Antonevsky's sense of coherence Antonevsky developed the abbreviated version of sense of coherence in 1987, 19 which comprises 13 items measuring three dimensions: comprehensibility (5 items), manageability (4 items), and meaningfulness (4 items).The scale uses a 5-point Likert scale, and three items are scored in reverse.Scores on this scale range from 14 to 4, with higher scores indicating greater coherence.The scale provides not only separate scores for each subscale but also an overall score.In Iran, Mohammadzadeh et al. standardized this questionnaire for Iranian students after translating it.Cronbach's ⍺ values for male and female students were 0.75 and 0.78, respectively, indicating acceptable internal consistency.The concurrent validity of this scale with the 45-item mental toughness questionnaire was 0.54.The overall scale demonstrated good test −retest reliability with a coefficient of 0.66.To assess the validity of the questionnaire, the researchers examined the relationship between the understanding, management, and meaningfulness subscales and the total score of the questionnaire.The results indicated correlations of 0.86, 0.81, and 0.76, respectively, indicating that the scale is valid and reliable. 20As shown in Table 4, there exists a significant correlation between existential thinking and distress tolerance (r = 0.059) and the severity of mourning (r = 0.465) (p < 0.01).Furthermore, sense of coherence demonstrates a positive and significant correlation with distress tolerance (r = 0.126) and the severity of mourning (r = 0.401). 
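The indirect (mediated) effects reported in the findings below were tested in Amos with a bootstrap procedure. Purely to illustrate the logic of a percentile-bootstrap test of an indirect effect a × b, a minimal sketch is given here; the variable names and simulated data are hypothetical stand-ins rather than the study data, and the sketch is not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_coefs(X, y):
    """Ordinary least-squares coefficients; an intercept column is prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def bootstrap_indirect(x, m, y, n_boot=5000):
    """Percentile-bootstrap CI for the indirect effect (a*b) of x on y via mediator m."""
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                      # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = ols_coefs(xb[:, None], mb)[1]                # path a: predictor -> mediator
        b = ols_coefs(np.column_stack([mb, xb]), yb)[1]  # path b: mediator -> outcome, controlling predictor
        estimates.append(a * b)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return float(np.mean(estimates)), (float(lo), float(hi))

# Hypothetical stand-in data (n = 160 matches only the sample size of the study):
n = 160
existential_thinking = rng.normal(size=n)
distress_tolerance = 0.15 * existential_thinking + rng.normal(size=n)
mourning_severity = -0.30 * existential_thinking - 0.37 * distress_tolerance + rng.normal(size=n)

effect, ci = bootstrap_indirect(existential_thinking, distress_tolerance, mourning_severity)
print(effect, ci)  # the indirect effect is taken as significant if the CI excludes zero
```

The design choice illustrated here is simply that the indirect path is judged by whether the bootstrap confidence interval for a × b excludes zero, which is the criterion applied to the model paths reported below.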
| FINDINGS Additionally, distress tolerance exhibits a positive and significant relationship with the severity of mourning (r = 0.521). As shown in Table 5, the value of the χ 2 index is not significant (p > 0.050), and all fit indices of the model have also reached the reasonable fit criterion.Goodness of fit index and comparative fit index are above 0.90, which are reasonable for model fitness.The root mean square error of approximation was also 0.056 which is reasonable. According to Table 6 and the paths of the model, it can be observed that sense of coherence has a significant positive direct effect on the severity of mourning among the families of the deceased from COVID-19 (β = 0.480), but it does not significantly impact distress tolerance (β = 0.007).Furthermore, existential thinking has a significant negative direct effect on the severity of mourning (β = −0.297)and a significant positive direct effect on distress tolerance (β = 0.152).Additionally, the severity of mourning has a significant negative direct effect on distress tolerance (β = −0.372).The primary objective of this investigation was to explore the potential mediating role of distress tolerance in the relationship between existential thinking, sense of coherence, and the severity of mourning in the families of individuals who passed away as a result of COVID-19.The findings indicated that only the indirect influence of existential thinking on the severity of mourning through distress tolerance was significant, while no direct effect of sense of coherence was found.The results further showed that both existential thinking and sense of coherence, as well as distress tolerance, had a direct and statistically significant impact on the severity of mourning.Furthermore, the direct impact of existential thinking on distress tolerance was also found to be significant. According to The research conducted by Bolen and Connor 9 investigated the correlation between sense of coherence and the severity of mourning experienced by families who lost their loved ones due to COVID-19. The study found that attributing meaning to the events surrounding the loss can have a positive impact on bereaved individuals, which is consistent with the current research's findings regarding the relationship and direct and significant influence of sense of coherence on the severity of mourning. Antonovsky posited that individuals with a high level of sense of coherence are less likely to experience stress and anxiety during challenging situations, and they are also less prone to disappointment and depression when facing difficult circumstances. 21Consequently, sense of coherence is considered a vital factor in promoting health and positive outcomes, and it is an effective means of coping with problems. 22In this regard, having a sense of coherence enables the families of the deceased to perceive life's internal and external stressors in an orderly, predictable, and understandable way. The present study is in accordance with previous research that examines the direct and indirect impacts of existential thinking on the severity of mourning experienced by families of those who have passed away from Covid-19.Studies (Brassai et al., 13 Dargahi et al., 14 and Mason et al.) 15 have demonstrated that an existential mindset can promote beneficial behaviors, such as improving distress The results of the bootstrap test for the paths of distress tolerance in the relationship between sense of coherence, existential thinking, and the severity of mourning. 
The model for predicting the severity of mourning (standard mode).Figure 1 shows the direct and indirect paths in the standardized model. F I G U R E 2 The model for predicting the severity of mourning in the (significant numbers).Figure 2 shows the direct and indirect paths in the significance of numbers. tolerance, through individuals' search for the meaning of life. Moreover, research has suggested that existential thinking can heighten nurses' empathy toward Covid-19 patients by reducing health anxiety. 14w individuals perceive the loss of their loved ones following their passing is influenced by various factors.Individuals who engage in deep reflection are likely to exhibit greater acceptance and flexibility in coping with the loss.By confronting existential issues, individuals can develop a potential capacity for personal growth and increased awareness of the struggles associated with existence. 23is heightened awareness can lead to an increase in distress tolerance, ultimately resulting in a reduction in the severity of mourning experienced by the bereaved family members.Some studies have suggested that mediating factors, such as the meaning of life, coping strategies, and religiosity, may play a role in the relationship between existential thinking and grief-related problems. 24Giving meaning to life is a category of existential thinking that is dependent on an individual's attitude toward life. 25According to the conducted research, ascribing meaning to problems and suffering is a fundamental aspect of living purposefully, adapting to stress, and enduring distress. 26Research investigating the relationship between meaning in life and psychopathology has demonstrated that a low level of meaning in life is associated with various mental health issues, such as addiction disorders, depression, hopelessness, and suicide. 27Given the close link between existential thinking and the meaning of life, individuals who perceive life to be more meaningful are better equipped to face life's challenges, process new information, and maintain a broader and more positive outlook for their future.As such, having a sense of meaning in life and engaging in existential thinking can serve as beneficial coping mechanisms, enabling individuals to better endure difficult times, including the mourning period following the loss of loved ones. | CONCLUSION The present study's findings, which align with previous research, demonstrate that existential thinking can impact the severity of mourning experienced by families who have lost loved ones to Covid-19 by promoting distress tolerance.Therefore, emphasizing an existential perspective and ascribing significance to life as a psychological safeguard can assist bereaved individuals in processing and accepting their grief. Table 1 displays the age distribution of individuals who have died from Covid-19.Based on the results obtained in Table1, it is evident that the largest number of respondents falls within the age range of 61−80 years, while the smallest number is in the under-20 age group. Table 2 shows the frequency of the relationship between the participants with the deceased person caused by the Covid-19. Table 7 Age distribution of the individuals who died from Covid-19.The type of relationship between the participant and the person who died from Covid-19.Descriptive statistics.Correlation matrix of the variables.Model fit indices. 
Nevertheless, the distress tolerance variable does play a mediating (indirect) role in the relationship between existential thinking and the severity of mourning experienced by the bereaved families affected by the pandemic (Figures 1 and 2).
Electrocatalytic oxidation and determination of dexamethasone at an Fe3O4/PANI–CuII microsphere modified carbon ionic liquid electrode A novel, simple, sensitive and selective electrochemical sensor based on an Fe3O4/PANI–Cu II microsphere modified carbon ionic liquid electrode is constructed and utilized for the determination of dexamethasone. The synthesized Fe3O4/PANI–Cu II microspheres are characterized by routine methods such as X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FT-IR), thermo gravimetric analysis (TGA), and inductively coupled plasma atomic emission spectroscopy (ICP-AES). The Fe3O4/ PANI–Cu microspheres can significantly accelerate the electron transfer rate and represent excellent synergistic electrochemical activity for the oxidation of dexamethasone. Differential pulse voltammetry (DPV) was used for the quantitative determination of dexamethasone. As shown, the oxidation peak current is linear with the concentration of dexamethasone in the range of 0.05 to 30 mM with a detection limit of 3.0 nM which is more sensitive than most of the previously reported methods. The proposed method is successfully applied to the sensitive determination of dexamethasone in real samples with satisfactory recoveries. Introduction Dexamethasone-11b,16a-9-uoro-11,17,21-trihydroxy-16-methylpregna-1,4-diene-3,20-dione, abbreviated here as DXM, is a potent synthetic derivative of the glucocorticoid hydrocortisone. It is regularly employed as an anti-inammatory and immunosuppressive agent for the treatment of conditions such as inammation, allergy, and autoimmune conditions. 1,2 This medication has been added to the list of banned medications because of its misuse as a doping agent in sports such as cycling and horse racing to improve performance. 3 Therefore, development of new, sensitive, selective and simple analytical approaches and techniques for determining dexamethasone in biological uids such as plasma and urine is essential aer administration for efficient and safe use. Different reports have been published in the literature for determining dexamethasone which include LC-MS, 4-6 LC-MS-MS, 7 GC-MS detection, 8 HPLC chemiluminescence 9,10 thin-layer chromatography, 11 and electrochemical detection. 12 Although these approaches show some merits, they are disadvantageous due to the complicated instruments required and the associated time-consuming sample pretreatment. Electrochemical techniques make a good candidate for the analysis of dexamethasone due to their practicality, simplicity, low-cost, and ease of miniaturization for small-volume samples. 3,13,14 In recent years, carbon ionic liquid electrode (CILE), which is prepared by incorporating an ionic liquid as both binder and modier in carbon paste electrode (CPE) have been widely used in the eld of electrochemical sensing. [15][16][17] These types of sensors have different advantages such as low cost, easy preparation, high sensitivity, high conductivity, wide electrochemical windows, antifouling effect and renewable surface. Composites of conducting polymers (CPs) and metal nanoparticles have received much attention in the last decades because of their various applications such as catalysts, biosensors and capacitors. 18,19 Incorporation of metal nanoparticles would preserve the properties of low dimensional conductors, enhance the conductivity of polymers, and obtain high surface areas. 
The incorporation of Fe 3 O 4 nanoparticles into polyaniline (PANI) matrix has been extensively studied because Fe 3 O 4 -PANI composite reveals characteristics through the combination of both components including good biocompatibility, large surface areas, good electrochemical stability and enhanced electrocatalytic activity. 20 Recently, Fe 3 O 4 composites or hybrids doped with noble or other transition metals have attracted great attention, because of their doping or synergetic effect. These effects have meaningfully broadened the property and potential application value of Fe 3 O 4 nanomaterials. [21][22][23] Among them, Cu loaded onto Fe 3 O 4 improve the catalytic activity due to the synergetic effect. 24 In this research study, a novel electrochemical sensor was generated for determining of DXM based on Fe 3 O 4 /PANI-Cu II modied CILE. The results of voltammetric studies showed that the fabricated sensor has excellent advantages such as higher sensitivity and selectivity, wider linear ranges, simpler electrode fabrication process and stability. In addition, it can be applied to the sensitive detection of DXM in real samples as it has been shown. Experimental All experiments were carried out in accordance with the World Medical Association's "Ethical principles for medical research involving human subjects", and approved by the medical ethics committee at Kermanshah University of Medical Sciences. There is no experimentation with human subjects in this study. Apparatus and chemicals Electrochemical measurements were carried out with a potentiostat-galvanostat Autolab equipped with a three-electrode cell containing a saturated Ag/AgCl as a reference electrode and a platinum electrode as an auxiliary electrode. CILE modied with Fe 3 O 4 /PANI-Cu II microspheres was applied as working electrode. The system was run on a PC by NOVA and FRA 1.11 soware. A Metrohm 710 pH meter was used for pH adjustment. The synthesized Fe 3 O 4 /PANI-Cu II microspheres was characterized by power X-ray diffraction (XRD) was performed on a Bruker D8-advance X-ray diffractometer with Cu Ka (k ¼ 0.154 nm) radiation and thermo gravimetric analysis (TGA) was performed using a Shimadzu thermo gravimetric analyzer (TG-50). TEM analysis was carried out using TEM microscope (Philips CM30). FT-IR spectra were recorded on a Shimadzu Fourier Transform Infrared Spectrophotometer (FT-IR-8300). The morphology of the products was determined by using Hitachi Japan, model s4160 Scanning Electron Microscopy (SEM) at accelerating voltage of 15 kV. This system was equipped with a concentric hemispherical (CHA) electron energy analyzer (Specs model EA10 plus) suitable for X-ray photoelectron spectroscopy (XPS). The Cu loading amount was determined by OPTIMA 7300 DV ICP analyzer. Chemicals were purchased from Merck and Fluka Chemical Company. DXM powder (pure) was provided from Aldrich chemicals (Milwaukee, USA). All the reagents used were of analytical grade and double distilled water was used throughout the experiments. Daily-based fresh frozen blood serum samples were prepared from the venous blood of random healthy male and female blood donors, obtained from Imam Reza Hospital (Kermanshah, Iran). Urine sample obtained from healthy individuals were stored frozen until assay. Synthesis of Fe 3 O 4 microspheres Magnetite particles were synthesized by using a solvothermal method. 
25 FeCl 3 $6H 2 O (1.4 g, 5 mmol) was dissolved in ethylene glycol (EG) (40 mL) to form a clear solution, followed by the addition of sodium acetate (NaAc) (3.6 g). The mixture was stirred vigorously for 15 min, and then was sealed in a Teonlined stainless-steel autoclave. The autoclave was heated to 200 C and maintained at this temperature for 5 h, and then was cooled to room temperature (RT). The resulting black product was washed with ethanol and deionized water several times, and was nally dried. Synthesis of Fe 3 O 4 /PANI composite microspheres The Fe 3 O 4 /PANI microspheres was prepared by in situ chemical oxidative polymerization of aniline in the presence of Fe 3 O 4 microspheres. In this method, 0.3 mL HCl (0.1 M) was dissolved in 10 mL of deionized water. Then, the Fe 3 O 4 (0.25 g) microspheres and 0.2 mL aniline monomer were added to the above reaction mixture and stirred at room temperature. Five milliliters of aqueous APS (2.2 mmol) was added dropwise to the solution of PANI/HCl complex containing Fe 3 O 4 microspheres under ultrasonic irradiation and the reaction was allowed to proceed for 3 h at RT. The resultant product was washed with deionized water, methanol and ether three times, respectively, and then dried in vacuum for 12 h to obtain green-black powder of Fe 3 O 4 /PANI microspheres. 26 Loading of Cu on Fe 3 O 4 /PANI (Fe 3 O 4 /PANI-Cu II ) The as-synthesised Fe 3 O 4 /PANI microspheres (100 mg) were rst dispersed in ethanol solution (50 mL) under ultrasonication for 0.5 h. The formed black suspension was ultrasonically mixed with (30 mL, 0.1 M) of CuCl 2 for 1 h. Aer this, the microspheres were harvested with the aid of a magnet and washed with deionized water three times and dried under vacuum. Preparation of modied electrode The traditional carbon paste electrode (CPE) was fabricated by mixing 30.0 w/w% of paraffin oil and 70.0 w/w% of graphite powder. The CILE was fabricated by mixing 20.0 w/w% of paraffin oil, 10.0 w/w% of solid I(EMIMPF 6 ) and 70.0 w/w% of graphite powder. The Fe 3 O 4 /PANI-Cu II /CILE was fabricated by hand mixing the optimal amounts of graphite powder, paraffin oil solid I(EMIMPF 6 ) and Fe 3 O 4 /PANI-Cu II (60 : 10 : 15 : 15% w/ w). Other modied electrodes were prepared using a similar procedure for comparison. Each paste was packed rmly into pipette tube in which electrical contact was made with a copper wire that runs through the center of the electrode body. Prior to experiment, the surface of each the prepared electrodes polished using a butter paper to produce reproducible working surface and then was used for electrochemical studying of DXM by voltammetric techniques. Real sample preparation Human urine and serum samples were taken from healthy donors and used shortly aer collection. The urine sample was centrifuged and diluted 10 times without any further pretreatment. The serum sample was treated with 2 mL methanol (as protein precipitating agent). The precipitated proteins were separated out by centrifugation for 3 min at 6000 rpm. The clear supernatant layer was ltered and diluted to a denite volume. Characterization of the Fe 3 O 4 /Cu(II) microspheres Scanning electron microscope (SEM) images of the resulting Fe 3 O 4 microspheres are shown in (Fig. 1a). As it can be seen, the generated Fe 3 O 4 microspheres have a spherical shape with a rough surface and an average diameter of 145 nm. 
The Fe 3 O 4 / PANI microspheres consisted of aggregates of small magnetite particles with sizes from 15 to 20 nm as calculated using transmission electron microscopy (TEM) (Fig. 1b). A continuous layer of PANI could be observed on the outer shell of the Fe 3 O 4 microsphere cores. The average thickness of the PANI shell was 25 nm. From the TEM image of Fe 3 O 4 /PANI-Cu II (Fig. 1c), it could be seen that the morphology of (Fig. 3a), which reveals that the crystalline phase of Fe 3 O 4 is well-maintained aer the coating process under acidic conditions. Compared with that for bare Fe 3 O 4 (Fig. 3a) Fig. 4. Peaks corresponding to oxygen, carbon, nitrogen, copper and iron were clearly observed (Fig. 4a). To determine the oxidation state of Cu, the XPS experiments were carried out and results are reported in Fig. 4b. As it can be observed, in Fig. 3b the Cu binding energy of Fe 3 O 4 /PANI-Cu II exhibited two strong peaks located at 932.8 and 952.5 eV, which were assigned to Cu 3d 3/2 and Cu 3d 5/2 , respectively. These values suggests that the oxidation state of copper in the Fe 3 O 4 /PANI-Cu II microspheres is +2. 28 Fig . 5 illustrates the results of the thermogravimetric analysis of the Fe 3 O 4 @PANI composite, for which the thermal degradation of the PANI occurs at 450 C. 29 The initial mass loss at lower temperatures is mainly due to the release of water and solvent molecules in the polymer matrix. The major weight loss is observed at 290 C and continues to 630 C, possibly due to a large scale thermal degradation of the PANI chains. From the Fig. 1 (a) TG analysis, the mass ratio of the PANI in the magnetic core/ shell composite is about 21%. Characterization of the modied electrode Surface morphologies of CPE, CILE and Fe 3 O 4 /PANI-Cu II /CILE were investigated by SEM, respectively (Fig. 6). The surface of the CPE (Fig. 6a) showed a homogeneous surface and the SEM image of CILE (Fig. 6b) shown an smooth surface appeared without separated carbon layer, which was due to the embedment of ionic liquids EMIMPF 6 between the layer of carbon and disperse the carbon powder homogeneously. When Fe 3 O 4 / PANI-Cu II microspheres were introduced into the paste (Fig. 6c), the uniformity of the surface was remained almost unchanged while the surface roughness seemed to be increased by appearing uniform layer of Fe 3 O 4 /PANI-Cu II microspheres on the surface which fairly distributed in paste. Electrochemical impedance spectroscopy (EIS) is an efficient analytical method to monitor the modication procedure of the electrode surface. The semicircle diameter of the Nyquist plots at high frequency is corresponding to the charge-transfer limited process and can be used to describe the interface properties of the electrode. The Nyquist plots of different electrodes including bare Fig. 7. The smaller semicircle portion at higher frequencies of CILE than the bare CPE indicates that ionic liquids can improve the electron transfer rate. The reason might be attributed to the excellent electrical conductivity of ionic liquids EMIMPF 6 . Fe 3 O 4 /CILE shows a small semicircle at the high frequency region when compared with the CILE, this can be attributed to the presented Fe 3 O 4 with good conductivity and large surface area in the modied electrode, which could effectively increase the rate electron transfer between electrode surface and [Fe(CN) 6 ] 3À/4À and decrease interface electron transfer resistance. 
Aer modifying of the Fe 3 O 4 microsphere with PANI, the semicircle portion at higher frequencies decreased visibly. It is due to good electrical conductivity of the PANI polymer. Finally, the semicircle portion at higher frequencies of Fe 3 O 4 /PANI-Cu II /CILE smaller than the Fe 3 O 4 /PANI/ CILE which may be due to presence of copper ions in Fe 3 O 4 /PANI-Cu II . It can facilitate the electron transfer. Determination of surface area The active surface area of the modied electrode was estimated, using the [Fe(CN) 6 ] 3À/4À redox system and applying the Randles-Sevcik equation for a reversible process. 30 For a typical reversible process, the following formula is can be employed: where I p is the peak current, D is diffusion coefficient (7.6 Â 10 À6 cm 2 s À1 ), n is scan rate (V s À1 ) and C 0 is the concentration of K 4 [Fe(CN) 6 ] in mol L À1 . n is the number of electron transferred (n ¼ 1), n is the scan rate and A is the effective surface area. Cyclic voltammetry experiments at different scan rates were carried out with the bare and modied sensors immersed in a solution of 1 mM K 4 [Fe(CN) 6 ] in 0.1 M KCl. The surface area could be calculated from the slope of I p versus n 1/2 plot, which were found as 0.08 cm 2 , and 0.4045 cm 2 for bare CILE and Optimization of the amount of modier in the electrode The effect of the amount of Fe 3 O 4 /PANI-Cu II on the Fe 3 O 4 / PANI-Cu II /CILE performance toward electrooxidation of DXM was examined by DPV. It was observed that the sensitivity of the sensor rst rapidly increases with increasing the Fe 3 O 4 /PANI-Cu II nanoparticles content in the paste up to about 15% and then started to level off and even slowly decreases with the higher loadings. Initially, the maximum peak current was obtained when the amounts of the graphite powder, paraffin oil, ion liquid EMIMPF 6 and Fe 3 O 4 /PANI-Cu II in the paste were 60 : 10 : 15 : 15% (w/w). Investigation of the scan rate Effect of the scan rate on the cyclic voltammograms of 100 mM DXM at different scan rates were shown in Fig. 9a. The results showed that the peak currents vary linearly with the square root of the scan rate (n 1/2 ) (Fig. 9b), which indicates a diffusioncontrolled process for DXM oxidation on the surface of the modied electrode in the studied range of potential sweep rates, with following equations: I pa ¼ 144.51n 1/2 + 3.7233 (R 2 ¼ 0.9947). The dependence of the peak potential and the logarithmic scan rate (ln n) showed a linear relationship with a regression equation of E pa (V) ¼ 0.0238 ln n (V s À1 ) + 0.7443 (R 2 ¼ 0.9941) (Fig. 9c). For an irreversible electrode process, according to Laviron equation, 31 E pa is dened by the following equation: where a is the transfer coefficient, k 0 is the standard rate constant of the reaction; n is the electron transfer number; n is the scanning rate; E 0 is the formal potential. Other symbols have their usual meanings. According to above equation, the value of (1 À a) n can be easily calculated from the slope. In our system, the slope is 0.0238, taking T ¼ 298.15 K, F ¼ 96 485 C mol À1 and R ¼ 8.314 J mol À1 K À1 , the (1 À a)n was calculated to be 1.07. According to Bard and Faulkner. 32 a can be calculated using the following equation: a can be given as: a ¼ 47.7/(E p À E p/2 ) mV where E p/2 is the potential where the current is at half the peak value. Thus, from this, the transfer coefficient (a) and the electron transfer number (n) are calculated to be 0.54 and 1.92 z 2, respectively. 
Also, if the value of E0' is known, the ks value can be determined from the intercept of the straight line of Ep vs. ln ν. The E0' value can be deduced from the intercept of the Ep vs. ν plot by extrapolating the line to the vertical axis at ν = 0, since as ν approaches zero, Ep approaches E0'. 33 Thus, using this information and the Laviron equation, the ks value obtained was 2.90 s^-1. Effect of pH The effect of the pH of the buffer solution on the electrochemical behavior of DXM was studied by cyclic voltammetry using 0.1 M KH2PO4 electrolyte in the pH range of 1.0-9.0 (Fig. 10a). The pH of the KH2PO4 electrolyte solution was regulated with small amounts of NaOH and HCl solutions. As can be seen, no voltammetric peak of DXM was observed at pH 5 and higher. On the other hand, the anodic peak current increased with decreasing pH over the range 5.0 to 1.0; when the pH was less than 2.0, the oxidation current did not increase further. Therefore, pH 2.0 was selected for the assay. In addition, the oxidation peak potential shifts negatively with increasing pH, suggesting that protons participate in the electrode reaction process. The relationship between the peak potential (Epa) and pH is expressed as Epa (V) = -0.0521 pH + 0.7953 (R^2 = 0.9963) (Fig. 10b). The absolute value of the slope (0.0521 V pH^-1) is close to the theoretical Nernstian value of 0.0586 V pH^-1, indicating that electron transfer was accompanied by an equal number of protons in the electrode reaction of DXM. 34 Calibration curve and detection limit Differential pulse voltammetry was used for the determination of DXM because of its higher sensitivity and selectivity compared with CV. Under the optimal experimental conditions, the oxidation peak current of DXM was proportional to its concentration in the range from 0.05 to 30.0 mM (Fig. 11). The linear regression equation can be expressed as Ipa (mA) = 0.5377C + 1.5957 (R^2 = 0.9931), where C is the DXM concentration. Based on a signal-to-noise ratio of 3 (S/N = 3), the detection limit was 3.0 nM. Compared with other electrochemical sensors (Table 1), the proposed method gave higher sensitivity with a wider linear range for DXM detection. Chronoamperometric measurements Chronoamperometry, as well as other electrochemical methods, was employed for the investigation of the electrode reaction at the chemically modified electrode. Chronoamperometric measurements of DXM at the Fe3O4/PANI-CuII/CILE were carried out (Fig. 12) for various concentrations of DXM. For an electroactive material (DXM in this case) with a diffusion coefficient D, the current for the electrochemical reaction at a mass-transport-limited rate is described by the Cottrell equation: I = nFAC0(D/πt)^(1/2), where C0 is the bulk concentration of the analyte. Under diffusion control, a plot of I versus t^-1/2 will be linear, and from the slope the value of D can be obtained. The mean value of D was found to be 5.15 × 10^-4 cm^2 s^-1. Interference, stability and reproducibility In order to evaluate the anti-interference ability, some common compounds in biological media and drugs were selected. No significant interference was found for the detection of 50 mM DXM from the following compounds: NaCl, KCl, KNO3, tryptophan, cysteine, uric acid, ascorbic acid, hydrocortisone, and phenazopyridine. The stability of the sensor was also investigated by examining its response current after a storage period of 30 days: the current response to DXM decreased by 4.9%, indicating excellent stability. The reproducibility of the proposed sensor was tested using five different electrodes. 
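The conclusion that electron transfer is accompanied by an equal number of protons can likewise be retraced from the standard Nernstian slope at 25 °C; this is a back-of-the-envelope estimate based only on the numbers quoted above, not additional data:

\[
\left|\frac{dE_{pa}}{d\,\mathrm{pH}}\right| \;=\; \frac{2.303\,RT}{F}\cdot\frac{m}{n} \;\approx\; 0.059\,\frac{m}{n}\ \mathrm{V\ pH^{-1}}
\quad\Rightarrow\quad
\frac{m}{n} \;=\; \frac{0.0521}{0.059} \;\approx\; 0.9 \;\approx\; 1,
\]

where m is the number of protons and n the number of electrons; with n ≈ 2 from the Laviron analysis, this is consistent with a two-electron, two-proton oxidation of DXM.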
The relative standard deviations (RSD) of the DPV response currents for these species were less than 6.8%. Thus, the modied electrode showed a high stability and excellent reproducibility and anti-interference ability. Real sample analysis The applicability of the proposed method was examined to determine DXM in the pharmaceutical samples using the calibration curve method. The sample treatment processes were described in the Experimental section. The obtained results are summarized in Table 2. As it is obvious, the recovery of DXM was found to be between 97.0-102.0% using DPV method, which conrm good sensitivity of the proposed procedure. This means that the proposed procedure should be applicable to the analysis of real samples with different matrices. The R.S.D. value for determination was less than 3.7% for n ¼ 3. Conclusions In the present study an efficient and sensitive catalytic system based on a Fe 3 O 4 /PANI-Cu II /CILE was introduced for nanomolar detection of DXM using DPV method. The results showed good stability as well as high electrocatalytic activity toward DXM. In comparison to other reported electrodes the proposed sensor has an acceptable limit of detection and linear range and can be used for monitoring of the drug in real samples with different matrices. In addition to its low detection limit, wide linear range, low cost, reasonable reproducibility and stability are the other advantages of this sensor.
Experiences of the step-out technique in emotion-focused therapy for clients with autistic process ABSTRACT Lower levels of experiential processing are associated with poorer therapeutic outcomes. Clients with autistic process are reported to experience sensory-body awareness processing problems which is recognized as an interoception marker. The Step-Out is a simple bodily technique, used within the Alba Method, to achieve an emotionally neutral, relaxed, and alert state. The aim of this study was to explore the experiences of clients with autistic process with the Step-Out. Eleven clients learned and spoke about the technique. A thematic analysis of clients’ responses produced an overall theme ‘In sensing me to connecting to you’. This contained three broad themes: In-and-out of interoceptive contact, letting go of tension and beyond self-experience. Clients were able to verbally describe their internal sensations and perceptions following the task. Responses ranged across an experiential continuum from emotional overwhelm, to no felt change to experiences of relational connection. Preliminary findings provide promising support for the utility of the Step-Out as a mini experiential task to help clients with autistic process shift their attention from an externalized to an internalized process, and to recognize, express, and regulate their internal states. Findings are tentative due to the exploratory nature, limited participants, and lack of assessment measures. Autism is a lifelong complex neurodevelopmental condition characterized by qualitative differences in social communication and interaction, paired with restricted, repetitive patterns of behavior, interests or activities (World Health Organization, 2018). Autism is associated with various sensory atypicalities across multiple domains (Nicholson et al., 2018;Shah et al., 2016). Studies estimate 45-96% of autistic people report experiencing difficulties with regulating sensation and perception (Schaaf & Lane, 2015). The interoceptive sense includes the sensations of pain, temperature, hunger, satiety (Craig, 2015). Autistic people have challenges with accurately sensing the body's physiological state (Elwin et al., 2012), displaying unusual thermoregulatory sensitivity (Cascio et al., 2008). Autistic autobiographies report persistent interoceptive atypicalities, especially hyposensitivities to internal cues such as a difficulty in detecting and recognizing bodily sensations (Elwin et al., 2012). Interoception is the ability to detect and attend to internal bodily sensations. A threepart neuropsychological model devised by Garfinkel et al. (2015) separates interoceptive processes into three measurable constructs, which can be distinguished from exteroception (external sensation) and proprioception (body position in space). The first process, interoceptive accuracy, includes measurable, discriminant processes such as heart rate, or stomach distension. The second process, interoceptive sensibility, is the subjective experience of internal processes and to date has been measured by self-report questionnaires. The third process is termed interoception awareness and is a metacognitive measure of interoceptive accuracy, or to what extent one is aware of their ability to accurately perceive internal processes (Garfinkel et al., 2015). 
Autistic adults experience difficulties in emotional functioning, including emotion recognition and emotion regulation (ER), and that these difficulties are responsible for high levels of diagnosed comorbidity (Mazefsky et al., 2014), trauma-related experiences (Robinson, 2018) and suicidality . Up to 60% of autistic adults have at least one co-occurring psychiatric condition such as anxiety disorders (Maddox & White, 2015) and figures of depressive symptoms as high as 47% have been reported (Wigham et al., 2017). Emotion-focused therapy for autism spectrum Emotion-Focused Therapy (EFT (Elliott et al., 2004;Greenberg et al., 1993) is an evidence based humanistic-experiential psychotherapy (HEP). EFT has been used as a treatment for major depression (Goldman et al., 2006;Greenberg & Watson, 2006), for complex trauma (Paivio & Pascual-Leone, 2010), as a treatment for social anxiety (Elliott & Shahar, 2017) and generalized anxiety (Timulak & McElvaney, 2015). With preliminary positive findings in clients with autistic process for enhancing emotional processing in both cognitive and affective empathy (Robinson & Elliott, 2016;Robinson, 2019) and in the treatment of psychological distress to alleviate trauma (Robinson, 2018). Emotion-Focused Therapy is process marker driven and involves dynamic case formulations with the client during therapy. In EFT Goldman and Greenberg (2015) propose three stages to case formulation with therapists forming a working hypothesis about the mechanisms underlying the client's problems. EFT case formulation provides an organizing framework to aid therapists in mapping out what to do next from moment to moment (Goldman, 2017). Further, client information is organized along a number of dimensions including (a) style of emotional processing, (b) narrative themes related to attachment and identity issues, (c) painful emotion compass, (d) problematic or maladaptive emotion schemes and (e) moment by moment markers and accompanying tasks that might be undertaken. Robinson and Elliott (2017) developed the first case formulation for clients with autistic process for working with misempathy and for working with trauma-related experiences Robinson (2018). It is based on the premise that this population experience qualitative differences in emotional processing, both of their own emotions and of the emotions of others. Further, that painful experiences occur through misempathy within interpersonal encounters as a result of neurotypical-neurodivergent intersubjectivity (Robinson, 2018). This affective misattunement manifest a fragile sense of self and lack of self-agency within interpersonal engagement that predisposes autistic people to trauma related experiences. It is this affective neurotypical-neurodivergent misempathy that underpins interpersonal ruptures, rejection by neurotypical others, social exclusion and ultimately a deep sense of aloneness. This manifests in both internalized (fragmentation and depression) and externalized (a need for a sense of control and anxiety) reactive responses. As such, autistic people often report experiences of never feeling connected with others leading them to seek psychological support in adult life. In EFT case formulation for clients with autistic process (see Robinson, 2018) describes the process of transformation of problematic emotion schemes across three stages where the therapeutic focus is on accessing (stage 1), expressing (stage 1 & 2) and strengthening (stage 3) cognitive-affective empathy for self-and-other. 
Stage one involves an emotion assessment phase where the therapist assesses such factors as capacity for an internal focus and degree of emotion regulation. At the beginning of therapy clients may demonstrate limited emotion discourse or discourse that reflects disconnection or dysregulation and emotion responses in-session may be flat or extreme: Therapist: 'and how does that feel for you just now?" Martin: I don't know. I don't know how I feel. I don't feel anything". Frequently, clients express being out of touch with their inner emotions, stating 'I don't feel my emotions'. This signals an interoception difficulty, of being out of connection with one's body or a disembodied self. This is an important phase as it aids therapist understanding of clients qualitative differences in emotional processing, their capacity for interoception, ability to label and express emotions and to narrate their life story. In stage two the therapist is assessing the emotion scheme and deriving a formulation narrative with the client. Therefore, the main tasks of therapy are to observe emotional processing style and identify qualitative differences, to observe neurotypicalneurodivergent intersubjectivity and identify these as triggers for painful experiences and to help clients with emotion transformation by accessing and processing their own emotions and the emotions of others. Initially, this often results in the client engaging in negative self-treatment dialogue: Martin: They talk about conversations that I don't know about and you feel guilty about not knowing about them . . . . I can't have conversations. During stage two the therapist aids clients in differentiation of core painful feelings stemming from neurotypical-neurodivergent intersubjectivity, such as aloneness, shame, and fear. The therapist is listening for an emotion compass recalled in the client's story, that is often expressed as feelings of helplessness or despair: Martin: I felt that there was something not right about me. Something wrong. I just didn't think there was anything to do and that there was no point in trying anymore, so I became more isolated and stuck in my flat on my own. In stages two and three the therapist uses deepening methods such as video Interpersonal Process Recall, as a process guide to evocative unfolding of recalled trauma, which can lead to tapping into feelings associated with the memory and to accessing the adaptive emotion that counters feelings of hopelessness, helplessness, and fear. The client can connect with anger at the violation by others: Martin: I could viciously attack them all . . . . I didn't. I feel like doing that now. Therapist: There is strength in your voice. It sounds, a strong anger. Martin: I am angry with all those people who called me the name is if that was ok, as I accepted it. I didn't. In stage three emotional transformation involves strengthening cognitive-affective empathy for self-and-other. This is achieved through engagement in enactment tasks, to help access the adaptive emotion and express it. The therapist helps the client to identify what is the core pain and to provide empathy as the client experiences this. The therapist asks the client, 'what does this part of you need?' When they are able to express this they engage in an imagined chair enactment of speaking one's truth: These deepening methods are used to attend to process markers during stage 3 that facilitate emerging narratives with the construction of new meaning. 
The final stage of task resolution is meaning creation, which is best achieved within interpersonal understanding and involves grieving and letting go: Therapist: 'And how do you feel about your Dad just now?' Martin: 'It's still sad that he got it wrong . . . I wasn't diagnosed then and that's why he didn't understand me'. Therapists are looking for affective task markers that signal a client's readiness for therapeutic work and which indicate a case formulation for a specific task. Therapists form a working hypothesis about the mechanisms underlying the client's problems (Goldman & Greenberg, 2015). For clients with autistic process an interoception difficulty marker represents one such task: this is an affective issue expressed through a sense of being disembodied with self or being out of touch with internal bodily sensations and feelings. Often, neurodivergent-neurotypical encounters result in clients arriving for group therapy in a state of high agitation, displaying behaviors of distress, dominating interactions with mono-dialogs and yet reporting being out of touch with their sensory-body awareness. This presents as an interoception difficulty task marker: clients experience emotion dysregulation, of not being connected to one's body and internal sensory signals, which indicates a need for an experiencing task. To date, there has been no enquiry into emotion-focused case formulation (Goldman & Greenberg, 2015; Goldman, 2017) exploring components of a working hypothesis about the mechanisms underlying interoception difficulties for clients with autistic process. In the present study, we explored adding the Step-Out as an experiencing task to aid in psychological transitioning from an externalizing process to an internalizing sensory-body process. The Step-Out is part of a broader method for working with emotions called Alba Emoting, Alba Method, or simply Alba. Below we describe the basis and application of the Alba Method, with emphasis on the Step-Out.
The Alba method and the step-out technique
The method is based on research by Susana Bloch and her colleagues (Bloch et al., 1987, 1991; Santibáñez-H & Bloch, 1986), who found specific patterns of breathing, facial expression, and posture that distinguish joy, sadness, fear, anger, eroticism, and tenderness. They also found that intentionally performing the emotion-specific somatic patterns induced genuine emotions in the person doing them (Bloch et al., 1987, 1994; Santibáñez-H & Bloch, 1986). Subsequent studies by researchers unaffiliated with the Alba Method have by and large confirmed these findings (Kalawski, 2020). Specifically, anger, fear, sadness, joy/laughter, eroticism, and tenderness can be distinguished by respiratory patterns (Filippelli et al., 2001; Kreibig et al., 2007) and combinations of postural and facial expressions (Cordaro et al., 2020). Research also shows that the reproduction of respiratory patterns, body movements, and facial expressions can induce anger, fear, joy, and sadness (Coles et al., 2022). The following quote explains how the Step-Out technique was developed: We had observed right from the beginning of our research that people who reproduced the emotional patterns had a tendency to 'stay', so to speak, within the induced emotion. For instance, when our first experimental subjects returned to the laboratory, they often reported having had dreams or moods which were connected to the exercises performed in the previous session.
In order to avoid what I call 'emotional hangovers', we developed a ' Step-Out' technique which consists essentially in ending each emotional reproduction by at least three slow, regular, and deep, full breathing cycles followed by a total relaxation of the facial muscles and a change in posture. Such a procedure brings the person back to a 'neutral' state. (Bloch, 1993, p. 128) Based on their findings, Bloch and her colleagues (Bloch et al., 1987) developed a program to train actors using the specific emotional patterns and the Step-Out technique. Bloch (1993) later called this method Alba Emoting. Currently, the Alba Method Association uses the simpler name Alba Method. Over the years, Bloch and her students began teaching Alba not only as an actor training approach but also as a general method for working with emotions. Originally, this was not based on any theoretical understanding of how or why this work could be useful, but rather on a general idea of 'getting in touch with' emotions through the body. The training was promoted as a form of personal development rather than as a therapeutic method. Kalawski (1997) explored integrating the method into experiential psychotherapy by guiding clients through the emotional patterns and the step out. He found that clients had a slightly deeper level of experiencing immediately following these exercises. Following EFT theory, Kalawski (2013) proposed that therapists could use Alba to facilitate emotion awareness, regulation, and transformation. Most recently, Schilling (2021) found that Alba training improved emotion recognition ability. The present study focused only on one element of the Alba Method: the Step-Out technique. The Step-Out is in a way similar to deep breathing techniques often taught to clients. The main difference is probably that sometimes deep breathing is used as a part of a deep relaxation routine. The objective of the Step-Out technique, by contrast, is to achieve a relaxed yet alert state. In the Step-Out technique, the person is standing up and with their eyes open. The idea is to remain aware of one's environment, as opposed to drifting off into a trance. Finally, the Step-Out technique also has similarities and differences with meditation and mindfulness techniques. There are many varieties of such techniques, but some of them probably lead to a state of relaxed alertness, just like the Step-Out technique does. In this case, the main difference is in the means for achieving such state. Techniques such as sitting still and observing one's breathing rely on the person's ability to understand the task and perform it independently. This is hard for some people. Furthermore, some people feel aversion to sitting still. And this may not be possible at all when the person is already in an emotionally aroused state. The Step-Out technique is physically concrete, and the therapist can visibly demonstrate it and coach the client through it step-by-step. These are important advantages in some contexts. There are times when deep breathing or meditation techniques will be just what is called for. We believe that the Step-Out technique, because of its unique characteristics, fills a void in terms of the available tools for emotion regulation. The Step-Out technique was introduced to clients with autistic process as an experiencing task which served as a transition into the therapy (see Table 1 Process-Experiential Task: Marker, Intervention, End State). The rationale was two-fold. 
First, we wanted to see if it could have an emotion regulation function by helping clients develop access to internal bodily sensations and feelings and symbolize their inner, emotionally tinged experiences. Thus, this experiencing task serves to move clients into a receptive frame of experience for using the therapy. Therefore, we wanted to explore whether it could help clients with autistic process to achieve a level of emotional arousal which is useful for therapy. The Step-Out regulates emotional arousal without repressing emotions, and it does this through the body. Thus, we reasoned that this exercise might open clients to interoception and enhance their embodied self to help them be more emotionally available in the session. Second, we wanted to see if it served a relational function by supporting neurodivergent-neurodivergent intersubjectivity to help clients make contact with each other and, by doing so, helping to create a therapeutic space for emotional processes and healing to take place. In this small-scale qualitative study, we wanted to explore how useful the Step-Out technique was for our interoception marker working hypothesis for clients with autistic process. Specifically, how useful the Step-Out could be as a mini experiencing task that addresses the difficulties of being out of touch with internal sensations and the disembodiment reported by clients with autistic process, helping them shift their attention from an externalized process to an internally focused process with enhanced sensory-body integration.
Table 1. Process-Experiential Task: Marker, Intervention, End State. Step-Out: expressions of sensory signals and feelings located in awareness (externalized to internalized sensory shift; bodily awareness; embodied self).
To the best of our knowledge no qualitative research to date has been carried out using the Step-Out technique to ascertain experiences of clients with autistic process (or with any other population). This study set out to fill this gap. We introduced the Step-Out task to clients with autistic process and asked them to report their experiences immediately after completing the task.
Participants
Participants were 11 young adults (six male and five female) with autistic process (Robinson & Elliott, 2017), allocated across three therapy groups (n = 3, n = 4, n = 4). Eight achieved competency in performing the Step-Out, whilst three achieved an emerging level of competence. These three participants had a co-occurring diagnosis of Dyspraxia. This small-scale study is part of a large process-outcome protocol for Emotion-Focused Therapy for autism spectrum, which was granted full ethical approval by the University Ethics Committee.
Procedure
The therapist (the first author) guided the participants in small groups (three groups of three to four participants) through the Step-Out task (see Steps 1-4). Step 1: Finding Space: At the beginning of each weekly Emotion-Focused Therapy group session participants were asked to find a space in the room so they could stand alone, with space between themselves and other clients and therapists. Then each participant was asked to stretch out to see that they had enough room to move without touching anyone standing close to them. Step 2: Introducing the Guided Step-Out Exercise: Next, the therapist explained to the group that they were going to begin the session by talking them through a guided Step-Out exercise.
Step 3: Guided Step-Out Procedure: A guided Step-Out task was conducted by following the same procedure: The therapist asked participants to listen to the instructions and/or to observe the therapist demonstrating the actions (or both), whichever they found most helpful. Below are the standard Step-Out instructions: Stand in an upright position with feet parallel, aligned with the hip bones, facial muscles relaxed and eyes open looking straight ahead at the level of the horizon. In this posture, you breathe in through the nose and out through the mouth with a quiet, easy and relaxed rhythm, without forcing the breath, trying to keep inhalations and exhalations equal in time. The respiratory rhythm is then synchronized with a continuous movement of the arms: while inhaling, the extended arms are lifted in front of the body, with hands interlocked loosely, tracing a sort of 'generous arc' over the head, bending the elbows as the hands reach behind the neck. During this action, inhalation is synchronized with the speed of the lifting arms. Then, after a brief pause, the air is gently expelled through slightly open lips (as if blowing out an imaginary candle), while the arms descend in synchrony with the exhalation, until they return to the initial position. At this moment all the air must have been expelled. This cycle is repeated at least three times, very consciously. Then the face is gently touched, both hands giving small massage-like movements, from the center of the face outwards. Finally, the exercise is concluded by shaking the whole body and then changing posture. (Bloch, 2017, pp. 124-125) Step 4: Reflecting upon Experience: Following the exercise clients are given time to reflect upon their experience and to share this with the group. (i) The therapist asked the group (avoiding asking individual clients directly at first) "How are you feeling just now after doing that?" Each client was given an opportunity to respond to the initial prompt. (ii) This was followed by a second more directed prompt "What does it feel like inside your body?" (iii) This was followed by directly responding to each client by reflecting their voiced experience back to them, for example "For you [. . .]". Participants carried out the Step-Out task between 3 and 18 times in total to varying degrees of proficiency (see Appendix 1 for the proficiency checklist), from emerging (n = 3), where the person attempted the technique but did not master it fully or consistently, to competent (n = 8), where they were able to independently master the technique.
Analysis
All therapy sessions were video recorded. Data from each group were extracted in chronological order. Video material from Steps 1-4 was extracted for analysis. The first author and two independent raters, who were Masters in Autism graduates and experienced autism practitioners, observed the level of skill competency for each client. Raters were trained over a 12-week period to rate video footage using the Client-Emotional Processing Scale for Autism Spectrum (Robinson & Elliott, 2016). The instrument used to measure competency for the Step-Out was an observation checklist (see Appendix 1). Transcriptions of each client's verbal responses as well as descriptions of physical responses were recorded. For initial analysis, the researcher familiarized themselves with the data through watching the video footage whilst actively rereading each client's verbal response and making observations.
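As a hedged illustration only, and not part of the study's reported analysis, the brief Python sketch below shows how agreement between the two independent raters' competency judgements could be quantified with Cohen's kappa; all ratings in the example are invented.

# Hypothetical sketch: quantifying rater agreement on Step-Out competency.
# The study itself used trained raters and consensus discussion rather than
# reporting an agreement statistic; these ratings are made up for illustration.
from sklearn.metrics import cohen_kappa_score

levels = ["failing", "emerging", "passing"]

# One overall competency judgement per client from each rater (invented data).
rater_1 = ["passing", "emerging", "passing", "passing", "emerging", "passing",
           "passing", "emerging", "passing", "passing", "passing"]
rater_2 = ["passing", "emerging", "passing", "emerging", "emerging", "passing",
           "passing", "emerging", "passing", "passing", "passing"]

kappa = cohen_kappa_score(rater_1, rater_2, labels=levels)
print(f"Cohen's kappa across the 11 clients: {kappa:.2f}")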
The video footage was watched by the researcher and two independent raters and the researcher observations were shared and discussed. A consensus agreement was formed on the most salient client responses. For initial coding, line-by-line coding was employed, using semantic codes to stay close to the content of the data (Braun & Clarke, 2012). First, the first author read through each group cohort systematically to gain an overall sense of how participants were responding within each group. Following this, the researcher read through the data line by line to familiarize themselves more closely with the data. Second, the researcher highlighted key terms and descriptions and then removed highlighted text. The first highlighted text was compared to the second and clustered together if similar or separated if different. The third highlighted text was compared to the previous and clustered if similar or separated if different. This process was repeated until the text was sorted into similar clusters. Following this, the researcher coded the clusters. The first author then generated themes from the codes. Finally, both authors reviewed and refined these. In the final step, both authors defined and named the main themes with subthemes putting these in a logical order. Results Data were organized into themes and subthemes, with the final thematic framework presented in Table 2. An overarching theme emerged: In sensing me to connecting to you. Three themes, comprising 7 subthemes, were identified. The three major identified themes are as follows: (1) In-and-out of interoceptive contact (2) Letting go of tension (3) Beyond self-experiencing The themes and subthemes are expanded with participant illustrations. The analysis of the verbal responses of the eleven young adults are reported with verbatim comments below and are presented across a main overarching theme representing an interoception continuum in-sensing me to connecting with you. This main overarching theme subsumes three main themes, the first being in-and-out of interoceptive contact, which itself contained three subthemes. The first being a central point is reflected as no felt change with two of the young men stating that 'it had no effect', that they 'felt nothing' or that they 'didn't have a clue' as 'it's hard to say how it feels internally'. This expands with the second subtheme as a heightened internally located expanding body awareness was reflected by three of the young adults, with two young men stating that they 'feel it in my chest' that 'it helps control my heart rate' and that they were 'breathing better'. This was in contrast to the third subtheme as an emotional overwhelm with one young woman who had extreme physical reactive signs of 'facial flushing' and 'rubbing eyes' stating, 'Oh my god! I can't believe that. I'm not feeling great, because my heart is beating fast and I can feel it beating fast'. The second main theme, letting go of tension, contains three subthemes each moving toward positively expressed responses. The first being a pain reliever one young man responded on a number of different occasions to the Step-Out as relieving painful experiences, such as 'that's actually helped my back' and 'it feels good as the pain in my back is less'. Also, that they could locate the pain 'my neck hurts' with the 'it's the tension as soon as I go anywhere on public transport it locks up' and that 'it's good as it helps relieve tension and pressure'. 
The second subtheme being regulatory sensibility with four of the young adults experiencing a cooling sensation from physical 'not as warm, improved respiration' and 'not sweating as much, which is always good' to one young woman stating that she felt 'good' that 'when I exhaled it was like a cooling feeling. Like a fresh sensation'. The third subtheme being harmonious state with four adults reporting a sense of peacefulness with two young men stating that 'it feels good, more peaceful'. This moves toward expressions of a calming relaxing feeling, with four of the young adults stating that they had a sense 'I feel calmer, more relaxed' whilst 'it's a net improvement of more awake and more relaxed'. With one young man stating that he felt benefits 'Cathartic, a relief. Both the last session and the Step-Out and last week. Last week I was just really agitated and the train station barrier wouldn't lift and this created stress. The Step-Out helps to feel a bit calmer. Helps you to control your heart rate'. Whilst most responses were self-experience to interoception, two young men made comments which generated the third theme beyond self-experiencing as they specifically referred to relational connections with others. From feeling connected with others such as 'I feel connected. I know it sounds bonkers, but I feel connected. I almost feel the energy in the room. Sort of, we are all individuals but we're all on the same wavelength' and 'You see, I can feel tension and I don't feel any tension, it's all away' to more of a shared experience, such as 'I know what you mean by neutral. I'm the same as you [points toward another male group member]. It is a latent feeling most of the time and I don't feel anything at all, which is good'. Finally, besides categorizing clients' comments according to themes, we would like to convey the temporal progression of clients' responses over time. Table 3 presents the change process across therapy following the introduction of the Step-Out technique. It summarizes one client's trajectory from fear through the initial interoceptive overwhelm to heightened awareness of internal processing to a sense of relaxation.
Table 3. Client's trajectory from fear response to relaxation (immediate responses to the Step-Out task by session).
Earlier session: The same as last week. Good. Feels fresh.
Session 5: Yeah. It feels good. Feels cooling and refreshing.
Session 6: It feels good. Cooler and refreshing.
Session 7: The same as last week. When I exhale it feel a fresh sensation. It feels good.
Session 8: Good. I feel calmer and it cools me down.
Session 9: The same as the last few weeks. Cooler when I exhale and calmer, more relaxed.
Table 4 presents another client's experience of the Step-Out demonstrating an increased sense of awareness of thermoregulation over time. The client seems to describe their experience with increased differentiation of regulating sensation and perception.
Discussion
The purpose of this study was to explore our interoception marker working hypothesis for clients with autistic process. We recognize the need for a mini experiencing task to scaffold sensory-body integration to enhance self-embodiment as a consequence of interoceptive difficulties. We hypothesized that the Step-Out technique would act as a process guide to interoception by helping clients shift their attention from an externalized to an internalized process, as well as enhancing their ability to access and report interoceptive body awareness.
To the best of our knowledge this is the first time the Step-Out has been used as an interoception marker for clients with autistic process within an emotion-focused case formulation. We found that the Step-Out was a helpful process guide for clients with autistic process, helping to shift to a more internal experiential focus and acting as a transition aid into therapy. We propose that the EFT Step-Out technique can be added as a useful experiencing task within stage one of Goldman and Greenberg's (2015) case formulation framework, specifically for clients with autistic process or those who report being out of interoceptive experiencing. The Step-Out has been used for decades as part of the Alba Method. The core of the method is the somatic patterns for six emotions, identified in basic research. There is research on Alba training focused on these patterns (e.g. Bloch et al., 1994). However, we are not aware of any previous research on teaching the Step-Out specifically, either with individuals with autistic process or with any other population. Based on the experiences of the second author teaching the Alba Method to a variety of persons, we think that the experiences reported by the participants in this study are similar to those reported by people without autistic process as well. Future studies might explore the responses to the Step-Out with other populations.
Table 4. Client illustration of interoceptive sensibility and interoceptive awareness (immediate responses to the Step-Out task by session).
Session 1: I'm not as toastie (warm) as I was. Quite good. More relaxed.
Session 2: I'm not as warm as I was, I'm not sweating as much, which is always good. Like you said improved respiration.
Session 3: That's actually helped my back a bit. Stretching usually does. My back hurts less which is good.
Session 4: [stroking beard] Or stroke your beard as it may be. It felt good. I've actually been able to synchronise my sweating a little bit. Breathing is really good for that. It's hard to do if you're in a rush.
Session 5: [You know I can't feel my hands through my beard] I feel a bit cooler, which helps. Which I see as a net improvement because it was pretty toasty when I got in. It feels kinda like aerobics, I was thinking that when my hands were going up and down. It seemed to help. You sweat a lot when you're warm, but if you can regulate your breathing you can control your rate of sweat. It did seem to have a good net effect for that.
Session 6: [twisting neck from side to side] My neck's cramped. I've been getting that quite a lot. It's always in the same spot. I've been thinking about going to casualty to see if they will x-ray it. I feel more relaxed. I think the breathing is really good for that. It controls your sweat and tension.
Session 7: Cathartic, a relief. Both the last session and the S-O and last week. Last week I was just really agitated and the train station barrier wouldn't lift and this created stress. The S-O helps to feel a bit calmer. Helps you to control your heart rate.
Session 8: Less sweaty.
Session 9: Better. I'm breathing better. I think it is really good for that, to control pace. To control the pace of your breathing.
We found that using the Step-Out followed by immediate self-report allowed us to report on two of the processes in Garfinkel et al.'s (2015) three-part neuropsychological model. First, this method of reporting was useful in being able to relay interoceptive sensibility, the subjective experience of internal processes, which is usually reported through self-report questionnaires.
Contrary to previous findings that people with autistic process have challenges with accurately sensing the body's physiological state (Elwin et al., 2012;Quattrocki & Friston, 2014), we found that most clients in this small-scale study reported possessing varying degrees of interoception and were able to verbally describe their internal sensations and perceptions following the Step-Out task. Further, clients with autistic process were able to access and express a range of bodily sensations, such as pain, temperature, muscle tension, heartbeat perception and affective touch. It is possible that the difference between our and previous findings is due to an important difference in procedure. Before asking clients how they felt, we guided them in performing a specific physical task. This structured activity can reduce the potential anxiety that may accompany an open-ended request to describe feelings and sensations. Also, similar to more recent findings indicating that emotional difficulties in autism is associated with alexithymia rather than autism per se Shah et al., 2016) our study found that the clients with autistic process who expressed a lack of interoceptive connection also reported accompanying alexithymia difficulties as they were unable to locate, label or express their emotions. We did not set out to test this and future studies employing the Step-Out technique to assist clients in accessing and reporting interoceptive sensibility should consider using alexithymia measures. However, as part of a larger outcome study the first author is tracking emotional change across treatment in both cognitive and affective empathy for self and other using the experiential observer measure, the Client Emotional Processing Scale for Autism Spectrum (CEPS-AS, Robinson & Elliott, 2016). Second, we found that, when using this technique, clients with autistic process who were able to relay interoceptive sensibility demonstrated varying degrees of interoceptive awareness and could expand upon their internal experience through metacognitive explanations. Our clients engaged in complex explanations of how they were able to understand and use this to regulate their states and to monitor the changing impact this had on their interoceptive processing. We did not test the accuracy of interoceptive awareness and future studies may wish to explore this aspect further. In fact, this was an exploratory study presenting clients with autistic process with a technique to use which was not attached to an assessment of their competence, with no pressure to perform accurately or judgment given to their responses. We simply wanted to understand how clients with autistic process experienced the Step-Out to ascertain its utility as an interoception difficulty task marker within Emotion-Focused Therapy adapted for clients with autistic process. To this aim, we found that introducing the Step-Out task at the beginning of each therapy session not only allowed us to gain an insight into each client's experiential processing, an indicator associated with positive outcomes in therapy, but also it served as a helpful aid to transition. We found that the Step-Out technique did facilitate clients with autistic process to transition from the stress and anxiety of getting to the therapy group and social gathering demands to the main task of group psychotherapy. 
Hence we propose the Step-Out technique is a useful aid in supporting clients with autistic process to transition from a high anxiety state to a calmer, more internally focused oriented state in preparation for working in psychotherapy. For this client population, this is a potentially useful micro-task within group psychotherapy process, which is the focus of ongoing research. As noted above, one client felt overwhelmed the first time she tried the Step-Out. In our experience with embodied work, we have noted that sometimes clients are very cut off from their bodily felt experience. For them, the first contact with that experience is novel and can be frightening. As we can see, as the client continued practicing the Step-Out, the overwhelming reaction went away, and instead she reported a cooling feeling. We speculate that the Step-Out can give clients a sense of mastery over their bodily experience, which no longer feels alien, and thus becomes safe. This has the potential to helping clients with autistic process to become more embodied with self, which may lead to enhanced expression of needs, desires, fears and wants through the body and an increased ability to self-soothe when feeling escalated or agitated. Another client reported pain relief, notably along with an awareness of its location (neck and back), trigger ('as soon as I go anywhere on public transport') and process ('it's the tension'). We think that the Step-Out helps reduce muscle tension, which in turn reduces pain. Another way in which the Step-Out can help reduce pain is through promoting awareness of the body, which can both reduce somatization and improve coping. The client who said, 'I know what you mean by neutral; I'm the same as you', was using his interoceptive emotional awareness to connect more deeply with another client. The addition of the Step-Out in Emotion-Focused Therapy for clients with autistic process (Robinson & Elliott, 2017) provides a space for shared experience of neurodivergent intersubjective connection. This can also be viewed as an example of resonance. In clientcentered and experiential therapies, resonance is an element of therapist empathy. It refers to a way of experiencing, whereby what the client expresses elicits a bodily felt sense in the therapist (Vanaerschot, 2007). Agazarian provided a definition that is not restricted to therapists: 'Resonance is an undefended experience that is generated within the self and triggered in response to oneself or another; resonance is most like attunement (Stern, 1985)' (Agazarian, 1997, p. 53). She emphasized the role of resonance among members in group therapy. This is apparent in the example of the client who expressed, 'I know how you mean . . . '. He was using his interoceptive awareness not only to become aware of his own reactions, but also to connect with another group member's experience. The benefits here are two-fold. First, as clients with autistic process are vulnerable to trauma-related experiences due to difficulties recognizing their own and others' emotions (Robinson, 2018), it creates space for enhancing interpersonal attunement. We found that the Step-Out created space for both neurotypical-neurodivergent intersubjectivity (therapist-client) which promotes a sense of caring acceptance and self-validation as well as neurodivergent-neurodivergent intersubjectivity (client-client) which ultimately promotes a shared sense of experience, which reduces ones sense of aloneness. 
Second, this allows the group process to move forward with cohesion. Finally, we would like to draw a contrast between the Step-Out and other bodily focused interventions. As noted in the introduction, some interventions involve instructing clients to direct their attention to their bodies (e.g. to their breathing, to their trunk overall, or to a specific sensation). This kind of intervention is often beneficial. However, they involve the crossing of a psychological boundary. It is as if the therapists' instructions enter inside the client's body-mind. Clients with autistic process, and perhaps other clients, can experience this as intrusive. The Step-Out technique does not have this drawback. We are not asking people to do anything with their attention. We are only asking them to move their bodies in a particular way. However, the process does have the effect of directing clients' attention to their bodies. It thus facilitates the process without pressure. There are a number of limitations inherent in this study. First, one major limitation is that the first author delivered the Emotion-Focused Therapy as well being the main researcher in analyzing the data. We are aware that this potentially limits the credibility of our findings. However, it should be noted that our aim is to report our findings which are tentatively drawn from this initial explorative enquiry. We do not make any causal claims that the Step-Out is responsible for increasing interoceptive functioning, just that it appears to have potential as a useful experiential task and a process guide to deepening ones sense of embodiment. It could also be argued that the authors were invested in portraying only positive experiences of doing the Step-Out. However, this is not the case as we used all responses in the analysis and that for some they expressed they did not experience any internal changes. A further limitation may be due to the demand characteristics of the task as an individual and group pressure. It is recognized that clients are asked to say 'what do you feel'? and therefore it may be that they feel pressure to provide a verbal response as opposed to it being the Step-Out that elicited body awareness. Further, by asking people collectively may lead to a group think phenomenon. Both these issues require further investigation. This study has limited generalizability to other young adults with autistic process due to the small number of participants and heterogeneity of the spectrum. However, these preliminary findings provide promising support for the utility of the Step-Out task to help clients with autistic process recognize, express, and regulate their internal states. We tentatively propose that within stage one of case formulation for clients with autistic process comments the Step-Out acts as a useful experiencing task that supports our interoception marker working hypothesis. As such, it not only helps shift client attention across therapeutic tasks but to integrate sensory-body awareness and become more embodied with self. 
Appendix 1: Step-Out observer checklist
Individual steps of the Step-Out task, each rated for perceived level of competency (Failing, Emerging, or Passing):
1. Find a space in the room where you can stand alone, with space between you and other group members.
2. Stretch out your arms to see if you have enough room to move without touching anyone standing close to you.
3. Listen to my instructions and/or look at me (the therapist) demonstrating the actions (or both), whichever you find most helpful, now:
4. Stand in an upright position with your feet apart and parallel, aligned with your hip bones.
5. Allow your facial muscles to relax and have your eyes open looking straight ahead, at the level of the horizon.
6. I (the therapist) am going to talk you through the following action, and when doing this you should try to have your breathing (respiratory rhythm) synchronized (in time) with the movement of your arms:
7. Inhale and extend your arms, and lift them in front of your body with your fingers interlinked loosely, tracing a sort of "generous arc" over your head, bending your elbows as your hands reach behind your neck. [The therapist demonstrates this action while the inhalation is synchronized with the speed of the lifting arms.]
8. Pause and hold the pose.
9. Release your breath through slightly open lips, and now bend your arms in synchrony with your exhalation until they return to the original position. All the breath should be exhaled.
10. Inhale and extend your arms, and lift them in front of your body with your fingers interlinked loosely, tracing a sort of "generous arc" over your head, bending your elbows as your hands reach behind your neck.
11. Pause and hold the pose.
12. Release your breath through slightly open lips, and now bend your arms in synchrony with your exhalation until they return to the original position. All the breath should be exhaled.
13. Inhale and extend your arms, and lift them in front of your body with your fingers interlinked loosely, tracing a sort of "generous arc" over your head, bending your elbows as your hands reach behind your neck.
14. Pause and hold the pose.
15. Release your breath through slightly open lips, and now bend your arms in synchrony with your exhalation until they return to the original position. All the breath should be exhaled.
16. Inhale and extend your arms, and lift them in front of your body with your fingers interlinked loosely, tracing a sort of "generous arc" over your head, bending your elbows as your hands reach behind your neck.
17. Pause and hold the pose.
18. Release your breath through slightly open lips, and now bend your arms in synchrony with your exhalation until they return to the original position. All the breath should be exhaled.
19. Inhale and extend your arms, and lift them in front of your body with your fingers interlinked loosely, tracing a sort of "generous arc" over your head, bending your elbows as your hands reach behind your neck.
20. Pause and hold the pose.
21. Release your breath through slightly open lips, and now bend your arms in synchrony with your exhalation until they return to the original position. All the breath should be exhaled.
22. Touch your face with both hands, apply gentle massage-like movements from the center of your face outwards.
23. Shake the whole of your body
An Exploration of Dance Learning Stress Sources of Elementary School Dance Class Students with Artistic Abilities: The Influences of Psychological Capital and Self-Concept
The purpose of this study is to explore the factors which may increase students' stress in elementary school dance classes. In this study, students' demographic variables, psychological capital (which includes four sub-constructs), and self-concept (which includes five sub-constructs) were used as predicting variables to estimate their influences on dance class students' stress level. A structured questionnaire was distributed to 450 elementary art talent class students, with 412 valid responses. Structural equation modeling was used to test the relationships proposed by the study. As for demographic variables, the results show that grade, gender, and the dance class hours per week had no significant influences on stress, while the seniority level had a negative influence, which indicated that junior dance students had more stress than senior students. As for psychological capital, self-efficacy and optimism had negative influences on stress, while the other two sub-constructs, hope and resilience, did not have a significant influence on stress. As for physical self-concept, the worry of overweight had a positive influence on stress, while appearance, physical ability performance, health status, and satisfaction of body parts had no significant influence on stress. Based on the research findings, suggestions were made to reduce students' pressure in learning dance.
Introduction
In Taiwan, dance classes are offered in elementary, junior high, and senior high schools, as well as in colleges and universities, and have inspired and trained a large number of outstanding dance-related talents [1]. However, in recent years, the number of students enrolled in dance classes has been decreasing. One factor related to the decrease is the declining birthrate; the other is the learning stress which has stopped many potential dancers from continuing [2]. Therefore, this study explored the factors associated with dance learning stress among students in dance classes in order to provide practical suggestions for both students and teachers and prevent students from quitting. In addition to the same academic subjects as regular classes, students in dance classes are also burdened with learning to dance. For example, dance presentations are held annually as a big show for which parents, teachers, and students have high expectations. To avoid letting them down, dance students need to spend extra time practicing [3]. As only the best performers can appear on the stage and play important roles, dance practice is demanding and highly competitive among peers. As a result, students suffer from physical strain and soreness due to constant movement repetition while maintaining their optimal physical performance during extensive dance practices [1]. For elementary school students, these stresses and burdens are not easy to cope with. How can schools, parents, and students themselves work together to cultivate a less-stressed environment so students can enjoy the classes and continue on? These are the main goals of this study. Therefore, the study aims to identify sources which may increase students' stress in dance learning and practice, so that students, teachers, and parents can apply the findings and implications of this study to reduce that stress in students' career pursuits.
There are several factors that contribute to a decrease or increase in personal stress. In this study, we focus on the factors that might directly affect elementary dance class students' learning stress based on previous studies. Specifically, we identified several related factors, which include psychological capital, self-concept, and demographics. Luthans, Youssef, and Avolio [4] defined psychological capital as a positive psychological state that individuals show as they grow and develop, and a resource that positively enhances the power of the mind. Therefore, this study was designed to determine whether psychological capital can reduce the stress of dance learning students in dance classes with artistic abilities in elementary schools. Yeh [5] suggested that the physical self-concept is an individual's perception of the capabilities of various elements of the body, including the perceptions of motor ability (motor skills, physical fitness, or physical ability) and physical appearance (tall or short, thin or fat, or personal attractiveness). However, in developing self-concept, children may feel stress in learning when they perceive the expectations of their parents and teachers, and competition among their peers. Previous research [1][2][3] pointed out that students' background variables may also have an impact on dance class students' learning stress. Therefore, in this study, we employed students' demographics as control variables and explored whether psychological capital and physical self-concept might influence their dance learning stress. The study used a structured questionnaire to survey students' perceptions related to the above-mentioned constructs and used structural equation modeling to test the relationships among those variables which might contribute to students' dance learning stress. The hypotheses proposed in the study were based on previous studies and are described in the following sections.
Relationships among Students' Demographics and Dance Learning Stress
As for elementary school students registered in dance classes, the study considered gender, grade, class, practice hours per week, and dance learning experience (years) as background variables (also known as control variables). The following will introduce previous studies related to those background variables and establish the corresponding hypotheses according to previous findings. First, we considered students' gender. Gender stereotyping regarding dance often occurs in today's society and culture [6]. Taking the dance classes of an elementary school in Chiayi City, Taiwan as an example, there are 140 students in the four classes of Grades 3, 4, 5, and 6, including 15 boys and 125 girls, which indicates that dance has been classified as a feminine sport, resulting in fewer boys engaging in dance activities. Lu [7] found that among elite student athletes, girls were more distressed than boys in academic situations and were more likely to be stressed about further education than boys. Ko and Jheng [2] conducted a study on the stress of students from the dance classes of public senior high schools in Taiwan and found that female students felt more academic stress than male students. Therefore, this study made gender a control variable for the stress of dance learning. As for grade, Ko and Jheng [2] suggested that students in senior high school dance classes felt more stress than students in lower grades.
Therefore, this study also considered grade a control variable, which may have some sort of impact on students' dance learning stress. As for dance classes or practice hours per week, the Ministry of Education in Taiwan sets an average of 6 to 8 lessons of artistic talent specialization courses per week for students of Grades 3 to 6 [8]. However, many schools increase the number of practice times to more than 8 lessons per week, for example, in the case of presentations or summer and winter training. Therefore, the number of practice times per week was also included as a control variable in this study. As for dance experience, Ko and Jheng [2] also indicated that students in senior high dance classes with more than 10 years of dance experience were significantly less stressed than those with 5-9 years, 2-4 years, and less than one year of dance experience; the longer they danced, the heavier they felt they were, and the less stressed they were in maintaining their body functions to qualify them as good dancers. Therefore, the number of years of dance was included as a control variable. In summary, we propose the following four hypotheses based on previous studies:
H1-1. Gender affects students' stress in dance.
H1-2. Grade affects students' stress in dance.
H1-3. The number of practice hours per week affects students' stress in dance.
H1-4. Years of experience in dance affect students' stress in dance.
The Relationship between Psychological Capital and the Stress of Dance Learning
In addition to their dance training, students also attend general classes, and they are often required to give performances representing their schools. Therefore, they often receive more attention and undertake more stress as compared with general students [2]. At a stage of basic skills learning, students in the dance classes of elementary schools have a lot of new dance movements to learn and rigorous dance tests to attend, which leads to great psychological stress [1]. According to Chiang and Chen [9], psychological capital is a positive psychological ability and attitude that is built over an individual's life development. Psychological capital was well-explored by Luthans, Luthans, and Luthans [10]; according to their studies, psychological capital comprises four aspects, namely, self-efficacy, hope, optimism, and resilience. In the following, we present previous studies of each aspect relating to dance learning stress and propose related hypotheses accordingly. Self-efficacy: self-efficacy, also known as self-confidence, is the belief that an individual's assessment of their ability to achieve a specific goal leads to their desired outcome [11]. Individuals with higher self-efficacy in sports are more willing and consistent in attempting to do more than they are capable of, thus gaining higher achievements in sports [12]. Consequently, people with higher self-efficacy can improve their skills or perform regular tasks under pressure [13]. Therefore, the higher the self-efficacy of dance students, the lower the stress in dance learning. This study deduced H2-1, as follows: Self-efficacy negatively affects students' stress in dance learning. Hope: hope, which is an individual's positive motivation, is a positive state based on the interaction between goals and paths [4]. Cho [14] noted that dance students in elementary schools were very concerned about outcomes that directly affect their perceptions of performance success and failure.
For example, baseball is a sport with a high failure rate, and if players cannot immediately adjust their mindset to forget the previous failure and resume a positive offense and defense, they are not performing well [15]. Chen and Chi [16] studied 192 athletes in universities and found that the higher the perception of hope, the better the stress coping strategies of the athletes. Therefore, the higher hopes of dance students may lead to their lower stress in dance learning. This study deduced H2-2, as follows: Hope negatively affects students' stress in dance learning. Optimism: optimism is that individuals view events from a positive perspective or attribute and face their inner world with positive emotions, while negative events are attributed to external, unstable, and specific factors [4]. To attain certain success, athletes need to be trained and practice for years, and as Chang [17] pointed out, during their training, athletes are often faced with expectations, competition, injury, fatigue, and failure during their athletic careers. Optimistic people embrace stress positively and proactively, which helps to cushion them from personal stress. In the face of failure, they are less likely to become anxious and depressed; instead, they are motivated to persevere and train harder [18]. Following the logic, we can say the higher the optimism athletes withhold, the better their mindfulness, and they are capable of reducing personal errors under stress and sustain a high level of performance [13]. Therefore, for dance class students, we proposed the following hypothesis (H2-3): Students' optimism is negatively related to dance learning stress. Resilience: resilience is an individual's psychological ability to bounce back from adversity and have the will to go beyond the original state during positive and challenging events [4]. Lin and Chen [19] investigated senior students in the physical education classes of elementary schools and found that students with more frustrating experiences were less tolerant of frustration because they were affected by failures. All good athletes possess the resilience to bounce back from failures; for example, Jones, Hanton, and Connaughton [20] studied 10 excellent international athletes (in swimming, track and field, gymnastics, middle-distance marathon, triathlon, and golf), and found that they had good mental resilience. Therefore, the higher resilience of dance students may lower their stress in dance learning. This study deduced H2-4, as follows: Resilience negatively affects students' stress in dance learning. The Relationship between Physical Self-Concept and the Stress of Dance Learning Physical self-concept is an individual's perceptual evaluation of the ability of body elements and an important predictor of motor engagement behavior [5]. According to Yeh [5], physical self-concept consisted of five constructs, namely, appearance, physical abilities, health status, worry of overweight, and satisfaction with body parts. The relationship between physical self-concept and stress in dance learning is described as follows and the related hypotheses are proposed accordingly. Appearance: appearance refers to the importance students place on their appearance, clothing, and style. 
According to Yeh and Chang [21], a great body image is a goal for many young girls, and the demands for physical appearance are even higher for special groups, such as dancers, as they are often the center of attention and in the limelight, and their performances are often public displays that showcase their physical abilities, which makes the stage a unique environment to magnify their bodies. As a result, students in dance classes are more demanding of their physical appearance than other students of the same age [22]. Therefore, the higher appearance demands of dance students will increase their stress in dance learning. This study deduced H3-1, as follows: Appearance positively affects students' stress in dance learning. Physical abilities: Marsh and Peart [23] found that children's perceptions of their physical abilities not only influenced their overall perceptions of self-efficacy, they also influenced later motor skills engagement and development. By definition, contrasting with mental ability, physical ability is the ability to perform some physical act which affects an individual's engagement in motor skills, and those who are more physically capable are able to exercise for longer periods of time and are less likely to tire during strenuous exercises [24]. A dance performance is often seen as a competitive sport that requires a strong physical ability due to the use of movement and the influence of its execution [1]. Therefore, students with higher physical ability may have lower stress during the dance learning period. In this study, we used students' self-concept of physical ability as measure of their actual physical ability, hence we proposed the following Hypothesis (H3-2): Students' self-concept of physical ability had a negative influence on students' dance learning stress. Health status: health status refers to the dance students' assessment of their physical health, such as whether they are often sick, whether they often feel muscle aches and pains, and whether they feel less healthy than their classmates. Dance students are often required to master a wide range of dance styles and techniques, which requires extensive training. Due to self-requirements and external factors (access to performance opportunities), students may suffer from acute and chronic injuries when they are fully committed to practice, and such injuries may result in students experiencing physical and psychological training difficulties, and cause the loss of performance opportunities [1,25]. Therefore, the poorer health status of dance students may increase their stress on dance learning. This study deduced H3-3, as follows: Health status negatively affects students' stress in dance learning. Worry of overweight: Athletes believe that weight loss is beneficial to athletic performance [26]. Past studies have found that athletes in many sports must constantly control their weight, such as female ballet dancers, ice skaters, and dance students who face weight control problems, which makes weight control a psychological stress for them [21,27,28]. Therefore, dance students' increased worry of overweight may increase their stress in dance learning. This study deduced H3-4, as follows: The worry of overweight positively affects students' stress in dance learning. 
Satisfaction with body parts: Atalay and Gencoz [29] suggested that anxiety occurs when individuals are dissatisfied with their body because they are concerned about how others perceive their appearance; for example, when dancers do not gain recognition for their elegance, slenderness, and lightness, they may perceive that others have a negative impression of their appearance, and they will feel depressed [30]. Bane and McAuley [31] also pointed out that individuals who are dissatisfied with their bodies are more likely to have social physique anxiety. Therefore, when dance students have higher satisfaction with their bodies, it may lower their stress in dance learning. This study deduced H3-5, as follows: Satisfaction with body parts negatively affects students' stress in dance learning. The five hypotheses, as derived from the previous theories, are shown in Figure 1.
Methods
In this section, we introduce the procedures performed to test our research hypotheses. First of all, we introduce the study subjects, namely, the selected participants from elementary school dance class students; second, we introduce the data collection tool, namely, a structured questionnaire developed based on the research hypotheses; and third, we introduce the statistical analyses performed using the data collected from the participants to test our hypotheses.
Participants
This study selected students from six elementary schools' dance classes with artistic abilities in Taiwan. Before the investigation, all the administrators in charge of the six dance classes were well informed of the purpose of this study and agreed to this survey. After agreement, the questionnaires were sent to the schools for distribution to students. Students took the questionnaires home, obtained consent from their parents, and then completed the survey. A total of 480 questionnaires were distributed and we were able to obtain 412 valid responses, a return rate of 85.8%. As for participants, only 31 (7.5%) were male and 381 were female (92.5%); 52 (12.6%) were third grade, 140 (34.0%) were fourth grade, 120 (29.1%) were fifth grade, and 100 (24.3%) were sixth grade. As for dance classes per week, 90 (21.8%) had six classes, 39 (9.5%) had seven classes, 195 (47.3%) had eight classes, and 88 (21.4%) had nine or more classes. As for dance experience, 22 (5.3%) had 1 year of experience, 43 (10.4%) had 2 years, 58 (14.1%) had 3 years, 83 (20.1%) had 4 years, and 206 (50.0%) had 5 or more years of experience.
Measurement
This study used a structured questionnaire based on the research hypotheses for data collection. The questionnaire was comprised of four parts: the first part was demographical information, the second part was the psychological capital scale, the third part was the physical self-concept scale, and the fourth part was the dance learning stress scale.
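As a quick arithmetic check (an illustrative sketch, not an analysis reported by the study), the return rate and grade percentages above can be reproduced from the stated counts in a few lines of Python:

# Reproducing the response rate and grade percentages stated in the text.
returned, distributed = 412, 480
print(f"return rate: {returned / distributed:.1%}")   # 85.8%

grade_counts = {"third": 52, "fourth": 140, "fifth": 120, "sixth": 100}
for grade, n in grade_counts.items():
    print(f"{grade} grade: {n / returned:.1%}")        # 12.6%, 34.0%, 29.1%, 24.3%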
Demographical Information

The demographical information included gender, grade year, dance classes per week, and years of dance experience, which were all measured on categorical scales.

Psychological Capital Scale

The psychological capital scale adopted the psychological capital scale for athletes developed by Chang and Chi [32], and the questions were rephrased to be in line with the scenarios of elementary school dance class students. The scale was comprised of four constructs: self-efficacy (4 items), hope (4 items), optimism (4 items), and resilience (4 items). The scale was measured on a five-point Likert scale ranging from "strongly disagree = 1" to "strongly agree = 5".

Physical Self-Concept Scale

The physical self-concept scale adopted Wang and Wang's [33] revision of the translated Multidimensional Body-Self Relations Questionnaire and was rephrased for dance class scenarios. The scale was comprised of five constructs: appearance (6 items), physical ability performance (3 items), health status (5 items), the worry of overweight (3 items), and satisfaction with body parts (9 items). The scale was measured on a five-point Likert scale ranging from "strongly disagree = 1" to "strongly agree = 5".

Dance Learning Stress Scale

The dance learning stress scale adopted Wang's [34] primary students' learning stress scale and was rephrased for dance class scenarios. The scale was comprised of two constructs: dance and performance stress (6 items) and cram school dance learning stress (4 items). The scale was measured on a five-point Likert scale ranging from "strongly disagree = 1" to "strongly agree = 5".

Results

The study used structural equation modeling (SEM) to test the hypotheses. SEM comprises two models: the measurement model and the structural model. The measurement model reports the reliability and validity of the study instrument, and the structural model reports the test results for the hypotheses. The following sections report our results from the measurement model and the structural model, in that order.

Measurement Model

The reliability and validity of the study instrument were tested using WarpPLS 7.0 developed by Kock [35], which, under PLS, provides two measures of item reliability: composite reliability and Cronbach's α. Convergent validity and discriminant validity tests were conducted to assess the validity of the instrument, following Hulland [36]. The factor loadings of all items from the PLS measurement model were greater than 0.70, indicating good indicators. Composite reliability and Cronbach's α values for all scales exceeded the minimum threshold level of 0.70 [37], indicating the reliability of all scales used in the study. As for convergent validity, the square root of the average variance extracted (AVE) [37] exceeded the minimum threshold level of 0.70 for all constructs, indicating adequate convergent validity (Table 1). Fornell and Larcker's test [37] for discriminant validity revealed relatively high variances extracted for each factor compared to the inter-scale correlations, which was an indicator of the discriminant validity of the nine constructs (Table 1); a small computational sketch of this check is given below.

Table 1. Reliability, convergent, and discriminant validity of measurement model.

Structure Model

According to Hulland [36], the structural model results should provide the analyses of the path coefficient test (hypotheses test) results and explanatory power. The following sections report the analyses of the hypotheses test results and explanatory power.
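To make the reliability and Fornell-Larcker checks concrete, the following is a minimal sketch of how they can be computed from a table of AVE values and inter-construct correlations. It is an illustration only: the construct names and numbers are made up and are not the study's data.

```python
import numpy as np

# Illustrative values only -- not the study's data.
constructs = ["self_efficacy", "hope", "optimism", "appearance"]
ave = np.array([0.62, 0.58, 0.55, 0.60])   # average variance extracted per construct
corr = np.array([                           # inter-construct correlations
    [1.00, 0.41, 0.35, 0.22],
    [0.41, 1.00, 0.44, 0.18],
    [0.35, 0.44, 1.00, 0.25],
    [0.22, 0.18, 0.25, 1.00],
])

sqrt_ave = np.sqrt(ave)

# Convergent validity: sqrt(AVE) should exceed the 0.70 threshold used in the paper.
convergent_ok = sqrt_ave > 0.70

# Fornell-Larcker criterion: each construct's sqrt(AVE) should exceed its
# correlations with every other construct.
n = len(constructs)
discriminant_ok = [
    all(sqrt_ave[i] > abs(corr[i, j]) for j in range(n) if j != i)
    for i in range(n)
]

for name, s, c_ok, d_ok in zip(constructs, sqrt_ave, convergent_ok, discriminant_ok):
    print(f"{name}: sqrt(AVE) = {s:.2f}, convergent = {bool(c_ok)}, discriminant = {d_ok}")
```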
Hypotheses Test Results

The evaluation of the structural model is used to examine the sixteen hypothesized relationships. The test results are shown in Figure 2 and are described in the following:

H1-1. Students' gender had no significant influence on dance learning stress (β1 = 0.01, p > 0.05).
H1-3. The number of students' practice hours per week had no significant influence on dance learning stress (β3 = −0.02, p > 0.05).
H3-4. The worry of overweight had a significant influence on dance learning stress (β12 = 0.23, p < 0.05).

Figure 2. The path analysis. Note: The dotted line denotes that the tested standardized path coefficient was not significant; the solid line denotes that the tested standardized path coefficient was significant; "*" denotes p < 0.05; R² is the coefficient of determination.

Coefficient of Determination (R²)

R² measures a model's predictivity, which represents the explained variance and its influence on the structural model.
The psychological capital, self-concept, and control variables all together showed an R 2 = 0.24. It was suggested that R 2 values must be above the threshold of 0.10 [38]. Therefore, the R 2 values were above the threshold level of 10%, indicating a good predicting model as shown in Figure 2. Discussion This study found that the sources of elementary schools' dance classes students' stress was not affected by most of the control variables, such as gender, grade, and the number of classes per week, while the dance learning years had a significant negative path coefficient toward their dance learning stress, which indicates that students who have learned dance for fewer years have more stress than seniors. In Taiwan, if any elementary school wants to offer additional dance classes, they need to hire professional coaches/teachers and get approval from the Ministry of Education, Taiwan. After approval, they can recruit students willing to join the dance classes from Grade 3 or above from different classes or even from different schools. New students may need time to adapt to an unfamiliar environment. Since the seniors had more experience and might feel more comfortable in the environment, they might have had less stress than the juniors. Regarding psychological capital, this study suggested that the self-efficacy and optimism of psychological capital negatively influence students dance learning stress, while hope and resilience's path coefficients were not significant. As suggested by Luthans, Youssef, and Avolio [4], self-efficacy is an individual's assessment of his or her ability to achieve a specific goal, as well as an expectation of his or her ability; if a student's self-efficacy is inadequate, it leads to a belief that he or she will not be able to successfully overcome the stress of learning or achieve the expected outcomes. In elementary schools, most students learn to dance out of love, but have not yet decided if they want to become professional dancers; however, during performances, tests, or competitions, under the high expectations of peers, teachers, and parents, these students become stressed during their endeavors to be favored and fulfill expectations. When students fail to meet the expectations, they may feel frustrated and depressed, and such emotions may accumulate over time, leading to the loss of enthusiasm for dance, and even serious psychological disorders [1]. According to Chen, Lin, and Lin [39], self-efficacy is the self-confidence to put in enough effort to succeed when faced with challenges. Self-efficacy is based on practice and mastery, and when students repeatedly practice or are more skilled in a task, they tend to have higher self-confidence or self-efficacy; therefore, students in dance classes can enhance their self-efficacy through practice. On the other hand, teachers can also help students to increase their self-efficacy to reduce dance learning stress. For example, they can guide students to make daily schedules and encourage them to follow the schedules, and teachers can also encourage students to recognize the value of their hard work, which can increase their self-efficacy and self-confidence, thus reducing stress. In addition, it is believed in Chinese culture that a strict teacher creates good students and that a teacher's achievement comes from their students, thus, teachers are encouraged to be strict in education, and may severely criticize or scold students when their performances do not meet the expectation. 
On such occasions, optimism becomes important because it allows individuals to see a matter in a positive view or attribution and face their inner world positively, thus, lowering the stress in learning. Scheier and Carver [40] suggested that optimistic individuals have positive expectations regarding future matters and are willing to put more effort and persistence into the pursuit of their goals. Therefore, dance teachers can guide students to achieve self-confidence, and as a result of their hard work, students can gain a sense of superiority in the process of learning dance, and the joy of their hard work will be indispensable in their future lives, and they could choose dance as a career; thus, they become more motivated for their future development. According to Wang [41], there are three strategies for developing optimism. The first is to learn to reorganize and accept past failures, mistakes, and setbacks, to make positive attributions, and to develop an attitude of tolerance for the past. The second is to learn to appreciate the present, to be grateful for and content with the current training and competition performances, and to accept the current performance and status. The third is to adopt a positive, welcoming, and confident attitude towards future opportunities, and believe that hard work and training will definitely help achieve better results and development. Dello and Stoykova [42] designed an intervention of psychological capital training students in Bulgaria, a one-month follow-up assessment of psychological capital training course to examine the durability of the training effects. The statistical analyses revealed significant improvements in the overall psychological capital after training as well as in each of its four dimensions, namely, self-efficacy, hope, resilience, and optimism. Therefore, it is suggested to add psychological capital training courses for students. An examination of the physical self-concept found that appearance, physical ability performance, health status, and satisfaction with body did not have a significant influence on students' dance learning stress. This is probably because students who attend dance classes in elementary schools share the same characteristics, such as the belief that appearance can be made beautiful by dressing up, and little difference was found among them. In elementary schools, students who are still growing in physique and physical ability performance hold that their physical ability performance will grow and their physique will improve as they grow up; thus, they may believe that their current satisfaction with their body and physical ability performance will not cause stress in learning. This study found that the worry of being overweight according to the physical selfconcept increased the stress in learning dance, as dance is an activity that uses the body as a language to convey specific ideas through a variety of movements in space, the performance of dance involves a lot of movement skills, and a lean physique is a key element in making the movements more skillful and graceful [43]. In addition, most audiences expect dancers to be light and graceful on stage, thus, dancers are constantly reminded to be light and graceful in order to fulfill the expectations and demands of society [44]. 
Therefore, as dancers usually practice in tight clothes and often look at their bodies in mirrors to correct their movements, for dancers, a slim body shape is like a "must", that dominates their perceptions, thoughts, and feelings. Dantas et al. [25] and Liu [45] suggested that many female athletes seeking to maintain a slim body shape often employ erroneous and inappropriate eating behaviors, which may lead to malnutrition, amenorrhea, osteoporosis, and eating disorders [43,46]. In a study of the School of American Ballet students, it was found that 55% of its students failed to complete their four-year academic program and that poor eating habits were a major cause [47]. Therefore, it is important to instruct dance students in elementary schools to hold a proper sense of body shape, as well as proper eating habits and attitudes, so they can maintain healthy eating habits so they can stay in a healthy and good physical condition. Conclusions This study found that the variables of gender, grade, and the number of dance classes per week in elementary schools did not affect the stress in learning dance, while the years of dance learning negatively affected the stress in learning dance, meaning the stress is higher for children with fewer years of dance study. Teachers should therefore pay attention to the younger students in dance classes and provide them with appropriate support to reduce the stress of learning dance. It was found that, among the four aspects of motor psychological capital, self-efficacy and optimism have a negative influence on the stress in dance learning, and motor psychological capital is a psychological resource that can be trained and developed. Therefore, teachers can reinforce the self-efficacy and optimism of the students, which will reduce their stress in learning dance. Finally, it was found that only the worry of being overweight in the physical self-concept positively affected stress in dance learning, which means that teachers must help students learn how to control their weight to avoid the stress of being overweight during dance learning. It is recommended that a dietitian can be introduced to provide children with the correct diet and exercise concepts to control their weight in a healthy way, which will prevent them from adopting incorrect weight loss methods that may lead to ill health. Recommendation According to the findings of this study, the recommendations for students of dance classes in elementary schools are that physiological capital can lower the stress to learn dance. Thus, it is suggested that students should improve their self-confidence and selfefficacy through repeated practice or by becoming proficient in dance techniques. It is also recommended that students should appreciate what they have learned, love what they have chosen, be determined in their goals, and build up their confidence in their commitment to the art of dance. In addition, they are encouraged to schedule their time, improve the efficiency of their studies and work, and recognize the value of their efforts to realize potential. In terms of optimism, students should learn to be tolerant of the past, appreciate the present and accept their current achievements and status, and remain positive about possible opportunities in the future. Finally, it is recommended that students learn to perceive their standard weight correctly, rather than just seeking a thin body shape, and that those who are truly overweight must learn to eat properly to control their weight. 
The recommendations for teachers of dance classes in elementary schools are that dance instructors should work with academic subject teachers to understand students' stress in dance learning and to care for students' psychological capital and physical selfconcept. By incorporating various aspects such as strengthening students' self-efficacy and optimism regarding teaching and learning, students' psychological capital can be enhanced, which is conducive to lowering their stress in dance learning and overcoming stress in life and academic studies. It is suggested that teachers offer psychological counseling to help students build up their self-confidence and maintain an optimistic attitude, and give them room to progress, which will gradually improve their self-efficacy. It is also recommended that teachers and parents understand whether the children worry about being overweight, and provide guidance on correct diets and exercise to control their weight through nutrition education. For children with fewer years of dance learning, in order to reduce the stress in learning dance, they should be given encouragement and have a modest attitude to become more confident. As this study targeted students in the dance classes of elementary schools, we suggest that more complete studies could be conducted by collecting information on the stress of learning dance among students in junior and senior high schools. In addition, negative emotions, such as stage fright, the desire to win, and performance anxiety may also be important factors that influence dance learning stress during live performances with an audience, thus, future studies could incorporate positive and negative emotions to learn about the influences of emotions on stress in dance learning. Dancers are all expected to be absolutely correct and perform without flaws; therefore, further data could be collected to analyze whether the perception of perfectionism of dancers causes anxiety or stress on dancers exposed to such a concept in the long term. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study and exempt from IRB was declared and signed by the corresponding author and submitted to the editorial board according to IRB regulations approved by local government. Data Availability Statement: Data can be provided upon request.
Parsing as a Cue‐Based Retrieval Model Abstract This paper develops a novel psycholinguistic parser and tests it against experimental and corpus reading data. The parser builds on the recent research into memory structures, which argues that memory retrieval is content‐addressable and cue‐based. It is shown that the theory of cue‐based memory systems can be combined with transition‐based parsing to produce a parser that, when combined with the cognitive architecture ACT‐R, can model reading and predict online behavioral measures (reading times and regressions). The parser's modeling capacities are tested against self‐paced reading experimental data (Grodner & Gibson, 2005), eye‐tracking experimental data (Staub, 2011), and a self‐paced reading corpus (Futrell et al., 2018). Introduction Human parsing, that is, syntactic-structure building, relies on memory in at least two ways. First, it happens often that an element might be dependent in its interpretation and/or form on some other, non-adjacent phrase, and language users need to be able to access the phrase when constructing a correct parse. For example, in (1), the noun phrase (NP) a book is interpreted as the object of the verb love and if the parser is to correctly establish the relation, it has to be able to access the NP in its memory when the verb is parsed. It is a book that I think the American readership will love immediately. (1) There is another, even more basic role that memory plays in human parsing. Comprehenders have to rely on their memory of parsing rules when they try to understand a written or a spoken message. For example, for (1), they have to remember grammar conventions like (i) that can introduce a relative clause, (ii) subjects precede verbs in non-transformed structures, (iii) the object in object-relative clauses has to be recalled when one hears the main verb, etc. In recent years, research on memory in parsing has focused on the first role of memory during comprehension. The investigation of dependencies such as a book-love in (1) provides a growing body of evidence that this part of human parsing can be modeled as a case of cue-based retrieval. The evidence for cue-based retrieval of dependents comes from various experimental methods, from reading (self-paced reading and eye tracking; Cunnings & Sturt, 2018;Dillon, Mishler, Sloggett, & Phillips, 2013;Kush, Lidz, & Phillips, 2015;Van Dyke, 2007) to speed-accuracy trade-off (McElree, 2000;Mcelree, Foraker, & Dyer, 2003;McElree, 2006), and was further supported by Bayesian meta-analysis of experimental data (Engelmann, Jäger, & Vasishth, 2019;Vasishth, Nicenboim, Engelmann, & Burchert, 2019). The success of this research line, however, leads to a schism in the general theory of parsing. While psycholinguists currently have a detailed theory of memory structures for the processing of dependencies, the theory of how parsing rules are structured, stored, and recalled is arguably less specific. This schism is probably most apparent in computational psycholinguistic models. In models that focus on retrieval during parsing, that is, models of processing of dependencies, the role of parsing is either simplified (Dillon et al., 2013;Dubey, Keller, & Sturt, 2008;Kush et al., 2015) or is constructed in such a way that the (retrieval of a) parsing rule makes no clear and generalizable behavioral footprint (Brasoveanu & Dotlačil, 2018;Gibson, 1998;Lewis & Vasishth, 2005;Rasmussen & Schuler, 2018). 
In models that focus on parsing, a linking hypothesis that is responsible for connecting parsers to behavioral data is usually independent of memory assumptions. Computational psycholinguistic models of human parsing that predict behavioral measures commonly assume that other properties, for example, relative entropy of parsed structures or prefix probabilities, are the relevant explanatory variables (Hale, 2001, 2003, 2011; Levy, 2011), not the same memory structures that account for dependency resolution and that are assumed in cue-based retrieval. To be sure, there are parsing models that do operate with memory and memory limitations, but those assume a separate model for parsing and for the resolution of dependencies (Boston et al., 2011; Demberg & Keller, 2008).

This paper represents an attempt to connect the two strands of research in parsing by developing a cue-based retrieval system of parsing. The goals of the paper are the following:

1. To provide a data-driven parser that postulates parsing rules in memory and assumes cue-based retrieval. It will be shown that there is a class of parsers in computational linguistics that are compatible with this position.
2. To show that the parser can be embedded in a cognitive architecture, ACT-R. Because the architecture simulates human behavior, this will enable the parser to predict behavioral data.
3. To study the predictions of the parser. The predictions will be investigated on three different data sets:
• Grodner and Gibson (2005), in which parsing is intertwined with recall of dependents
• Staub (2011), in which parsing is intertwined with lexical retrieval
• Corpus data from Futrell et al. (2018)

We will see that the model can fit the experimental results very well and provides a strongly significant predictor for reading data. The paper thus provides support for a psycholinguistic parser that is built on independently established properties of human memory. Since the parser is inspired by cue-based retrieval models, it will be labeled throughout as "the cue-based model of parsing."

The structure of the paper is as follows. In Section 2, the general schema of cue-based retrieval models is presented and it is shown how the schema is implemented in the cognitive architecture ACT-R. In Section 3, a brief introduction to transition-based parsers is given. Section 4 is the core of the paper: it provides various modeling evidence for the parser. Section 5 compares the cue-based model of parsing to other related work in computational linguistics and psycholinguistics. Section 6 concludes.

ACT-R and cue-based retrieval

This section summarizes the main claims of the theory of cue-based retrieval and explains how cue-based retrieval is enforced in the cognitive architecture Adaptive Control of Thought-Rational (ACT-R). The latter point is crucial for the follow-up sections, since the data-driven parser, introduced in Section 3, will also be implemented in ACT-R.

Basic case of cue-based retrieval

The basic idea of the cue-based model will be presented using the four-sentence paradigm in (2) and (3). The sentences investigate the retrieval of the subject noun in subject-verb dependencies. The examples in (2) are taken from Van Dyke (2007). The examples in (3) come from Wagers, Lau, and Phillips (2009) and are based on Pearlmutter, Garnsey, and Bock (1999).

(2) a. The worker was surprised that the resident who was living near the dangerous neighbor was complaining about the investigation.
b. The worker was surprised that the resident who was living near the dangerous warehouse was complaining about the investigation.

(3) a. The key to the cell unsurprisingly were rusty from many years of disuse.
b. The key to the cells unsurprisingly were rusty from many years of disuse.

When readers parse the verb phrase was complaining in (2) and were rusty in (3), italicized in the examples, they have to recall the subject for the correct interpretation of the argument structure. When searching for the correct noun in their memory, the cue-based model assumes that they can try to match searched nouns against several features. We will focus only on those features that are crucial for the predictions of the cue-based model. For our discussion of (2-a) and (2-b), the features we have to consider are [+SUBJECT] and [+ANIMATE]. The latter cue can be thought of as triggered by thematic restrictions of the verb complaining. For (3-a) and (3-b), the relevant features are [+SUBJECT] and [+PLURAL]. I will first discuss the predictions of the theory and then explain how they are derived in ACT-R from theoretical principles.

Let us focus on (2-a) and (2-b) for now. The features relevant for the discussion are schematically represented in Fig. 1.

Fig. 1. Retrieval cues for (2): [+SUBJECT], [+ANIMATE]. The features are either matched (black rectangle) or mismatched (gray rectangle). For this illustration, we assume that only two NPs compete for recall, the target (the resident) or the distractor (the neighbor/the warehouse). The overload of the cue [+ANIMATE] in the top example should cause inhibitory interference according to the presented theory of retrieval.

When we compare the two cases in Fig. 1, we see that in (2-a), the cue [+ANIMATE] is overloaded: it is matched by the subject that should be recalled, the resident, as well as by the non-subject distractor, the neighbor. This cue overload should, according to the presented theory of retrieval, lead to the inhibitory interference of the distractor in (2-a) compared to (2-b). In reading times, the inhibition should manifest itself as a slowdown. Such slowdown was observed for subject-verb dependencies in the studies that investigated the syntactic and semantic overload effect (see Van Dyke & Lewis, 2003; Van Dyke & McElree, 2006; Van Dyke, 2007; Jäger et al., 2017). At least one study also found the slowdown effect caused by the overload of morphological information, number (Nicenboim, Vasishth, Engelmann, & Suckow, 2018).

The case of (3-a) and (3-b) is represented in Fig. 2.

Fig. 2. Retrieval cues for (3): [+SUBJECT], [+PLURAL]. There are two nouns that could be recalled, the target (key) and the distractor (cell/cells). Note that in this case, neither the target nor the distractor fully match. The partial match of the distractor should lead to the facilitatory interference according to the presented theory of retrieval.

In this pair, neither sentence is grammatical since the target and the distractor result only in a partial match. Still, the theory predicts that the partial match of the distractor affects retrieval. The partial match should lead to the facilitatory interference of the distractor in (3-b) compared to (3-a). In reading times, the facilitation should manifest itself as a speedup. Such effects were observed for subject-verb number dependencies (see Dillon et al., 2013; Jäger et al., 2017; Jäger, Mertzen, Van Dyke, & Vasishth, 2020; Lago, Shalom, Sigman, Lau, & Phillips, 2015; Tucker, Idrissi, & Almeida, 2015; Villata, Tabor, & Franck, 2018; Wagers et al., 2009, among others).
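The cue-match configurations depicted in Figs. 1 and 2 can also be written down schematically. The following is a small illustrative sketch (the variable names and the classification logic are not from the paper) that records, for each condition, which retrieval cues each candidate noun matches and what interference pattern that configuration leads to under the theory described above:

```python
# Schematic cue-match configurations for (2) and (3), following Figs. 1 and 2.
conditions = {
    "(2-a)": {"cues": {"+SUBJECT", "+ANIMATE"},
              "target":     {"+SUBJECT", "+ANIMATE"},   # the resident
              "distractor": {"+ANIMATE"}},              # the neighbor
    "(2-b)": {"cues": {"+SUBJECT", "+ANIMATE"},
              "target":     {"+SUBJECT", "+ANIMATE"},   # the resident
              "distractor": set()},                     # the warehouse
    "(3-a)": {"cues": {"+SUBJECT", "+PLURAL"},
              "target":     {"+SUBJECT"},               # key (singular)
              "distractor": set()},                     # cell (singular object)
    "(3-b)": {"cues": {"+SUBJECT", "+PLURAL"},
              "target":     {"+SUBJECT"},               # key (singular)
              "distractor": {"+PLURAL"}},               # cells (plural object)
}

for name, c in conditions.items():
    full_target_match = c["target"] == c["cues"]
    distractor_matches = len(c["distractor"]) > 0
    if full_target_match and distractor_matches:
        prediction = "inhibitory interference (slowdown)"
    elif not full_target_match and distractor_matches:
        prediction = "facilitatory interference possible (speedup)"
    else:
        prediction = "baseline"
    print(f"{name}: {prediction}")
```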
Declarative memory in ACT-R and cue-based retrieval

There are two dominant theories in psycholinguistics that generate the just-summarized predictions for (2) and (3): ACT-R and the Direct Access model (for comparison, see Vasishth et al., 2019). Let us see how the findings are captured in the former theory. I focus on ACT-R because it is much more encompassing and general than the Direct Access Model. ACT-R is not just a model of cue-based retrieval. It is a cognitive architecture, which can simulate the interaction of memory with execution, planning, visual perception, motor control, etc. (see Anderson & Lebiere, 1998; Anderson et al., 2004; Anderson, 2007). For this reason, it will also be suitable for the construction of the cue-based model of parsing.

ACT-R assumes two types of memory: procedural memory and declarative memory. I focus here on the latter and very briefly describe how retrieval from declarative memory works (for more details, motivation, and a more beginner-friendly introduction, see Brasoveanu & Dotlačil, 2020, and Vasishth & Engelmann, to appear). It is assumed that what is retrieved from declarative memory is a small, encapsulated piece of information. These pieces are called chunks and should be thought of as attribute-value matrices, or, in the parlance of ACT-R, slot-value matrices. Four examples of such chunks, corresponding to the relevant nouns from (2) and (3), are given in (4) and (5). It is assumed that the nouns have four slots: Form, (Syntactic) Function, Number, and Semantic information. These particular slots are assumed for the sake of illustration, with no claim that such a slot-value matrix exhaustively and fully captures the characteristics of these elements in memory.

(4) Target: [Form: resident, Function: subject, Number: sg, Sem: animate]
Distractor: [Form: neighbor/warehouse, Function: object, Number: sg, Sem: animate/inanimate]

(5) Target: [Form: key, Function: subject, Number: sg, Sem: inanimate]
Distractor: [Form: cell(s), Function: object, Number: sg/pl, Sem: inanimate]

When the processor parses word after word, it builds up a syntactic parse and stores each parsed element in its declarative memory. When it encounters the verb, a subject noun parsed previously has to be retrieved from the declarative memory. The retrieval is activation-driven: all chunks are evaluated in parallel and the chunk with the highest activation is recalled. The activation of a chunk i is evaluated according to the equation in (6), where B_i is the base activation of the chunk i, S_i is the spreading activation of the chunk i, and ε is noise.

(6) ACT-R activation of a chunk in declarative memory: A_i = B_i + S_i + ε

We will now consider B_i and S_i in detail, with an eye on how the activation captures the cue-based properties of retrieval, summarized in Section 2.1. The base activation B_i of a chunk is given in (7). It is the log of the sum of t_k^(−d), where t_k is the time elapsed between the time of presentation k and the time of retrieval. d is a negative exponent (decay). This is a free parameter of ACT-R, which, however, is almost always set at its default value of 0.5. "Presentation" in ACT-R means two things: (i) the chunk was created for the first time, or (ii) the chunk was recalled from memory. For example, if we are to measure the activation of the target key, one t_k would be the time elapsed between the creation of the noun key, that is, the moment the word was parsed and the structure chunk was built, and the time at which the model attempts to retrieve the noun from memory. Other t_k time elements would represent the time elapsed between previous recalls of that noun and the current recall. In this case, it is likely that there are no such other t_k times.

(7) ACT-R base activation: B_i = log( Σ_k t_k^(−d) )

The base activation decreases with the time elapsed since the presentation of the chunk.
As such, it captures the decay of activation as the time progresses from the use of the chunk. It does not play a role in the interference pattern discussed in Section 2.1, and it only models how decay affects recall. For more details, see Anderson (1990) and Anderson and Schooler (1991).

The second element in the calculation of activation, spreading activation, is more relevant for us. Generally speaking, it captures the effect of the current cognitive state on retrieval. In particular, it represents the spread of activation from the current cognitive state to chunks in declarative memory. The spreading activation for a chunk i is defined in (8). It is the sum of the products W_j · S_ji for every cue j that accompanies recall.

(8) ACT-R spreading activation: S_i = Σ_j W_j · S_ji (W: weight, a free parameter)

In (8), W_j is the weight for the cue j. The weight is a free parameter, with its default value assumed to be proportional across cues, for example, 1/n where n is the number of cues. S_ji is the associative strength between the cue j and the chunk i, and formally, it is modeled as the pointwise mutual information:

(9) S_ji = log( P(i | j) / P(i) )

ACT-R estimates the value in (9) as follows. First, in case j is not predictive of the chunk i, it assumes that S_ji = 0. This happens, simplifying slightly, when the cue j is not present in chunk i. When the cue is present in chunk i, S_ji is calculated as:

Estimated associative strength: S_ji = S − log(fan_j) (S: a free parameter)

S is the log of the size of the declarative memory, but commonly, it is hand-selected as a large enough value to ensure that S_ji is always positive (see Bothell, 2017). fan_j is simply the number of chunks that have the cue j as their value. The formula for S_ji should make intuitive sense: the associative strength will be large when j appears in only a few chunks, since in that case j is highly predictive of each of those chunks; the associative strength will decrease with the increase of chunks that carry j as their value.

Finally, the formula in (11) shows how A_i, the activation of a chunk i, is related to the time it takes to retrieve the chunk i from declarative memory, T_i. The relation between A_i and T_i is modulated by two free parameters, F, the latency factor, and f, the latency exponent. When both parameters are set at 1 (their default value), the retrieval time of a chunk i is just the exponential of its negative activation.

(11) Retrieval time: T_i = F · e^(−f · A_i)

Now, with this background, let us see how we capture the data summarized in Section 2.1. Before doing so, let us stress that none of the properties were constructed to describe the findings in Section 2.1. They just fall out from independently motivated properties of declarative memory and retrieval in ACT-R. Let us start with the inhibitory interference from Fig. 1. In this case, we assume that the two cues [+ANIMATE] and [+SUBJECT] form (a part of) the current cognitive state.1 Thus, they affect retrieval through spreading activation. The spreading activations for the target and the distractor in Fig. 1 are:

S_resident = W · S_[+SUBJECT],resident + W · S_[+ANIMATE],resident
S_neighbor/warehouse = W · S_[+ANIMATE],neighbor/warehouse

The spreading activation for the target, the subject resident, is higher than the activation for the distractors since both addends in the first equation are greater than zero. However, how high S_resident is depends on whether we are in case (2-a) or (2-b). In (2-a), see the top figure in Fig. 1, [+ANIMATE] is shared by the target and the distractor.
Since two chunks in the declarative memory carry this value, the associative strength S_[+ANIMATE],resident will be (assuming for the sake of concreteness that no other chunks carry the value):

S_[+ANIMATE],resident = S − log(2)

In (2-b), see the bottom figure in Fig. 1, [+ANIMATE] exclusively singles out the target. Since the value is not shared across chunks in the declarative memory, the associative strength will be higher:

S_[+ANIMATE],resident = S − log(1) = S

The activation of the chunk resident will be higher in (2-b) compared to (2-a), and the increased activation will result in a decrease in retrieval time, see (11). Thus, the model of declarative memory in ACT-R can capture the inhibitory interference of partially matching distractors as a case of decreased associative strength between a cue and a chunk. The decrease, in turn, is caused by the fact that the cue is shared across different chunks, that is, the fan of the cue is larger.

Let us see how the facilitatory interference from Fig. 2 is derived. In this case, the two cues forming (a part of) the current cognitive state are [+PLURAL] and [+SUBJECT], and the spreading activations for the target chunk and the distractor chunk are:2

(15) S_key = W · S_[+SUBJECT],key + W · S_[+PLURAL],key
S_cell(s) = W · S_[+SUBJECT],cell(s) + W · S_[+PLURAL],cell(s)

Note that W · S_[+PLURAL],key is 0 (because key is singular). Similarly, W · S_[+SUBJECT],cell(s) is 0 (because cell(s) is an object), so we can simplify (15) into:

S_key = W · S_[+SUBJECT],key
S_cell(s) = W · S_[+PLURAL],cell(s)

If the distractor appears as singular, see the top figure in Fig. 2, then the associative strength of the distractor is 0 (because cell receives no activation from [+PLURAL]). If the distractor, however, appears as plural, see the bottom figure in Fig. 2, then the associative strength of the distractor is greater than 0. Thus, in the bottom figure, the activation of the distractor is greater than in the top figure. This would result in decreased retrieval times if the distractor is recalled, which could happen if the activation of the distractor is higher than the activation of the target. The activation of the distractor can be higher than the activation of the target under several circumstances:

• ε, the noise parameter, happens to increase the activation of the distractor over the activation of the target.
• The base activation of the distractor is higher than the activation of the target, enough so that the distractor is recalled. This could happen if the distractor is more recently or very often presented/used.
• S_cells > S_key, enough so that the distractor is recalled. This could happen if the cue selecting the distractor is very selectively tied to just that chunk or the weight for S_cells is higher.

When the distractor is recalled over the target, this results in faster reading times (since the distractor will have a higher activation than the target and higher activations correspond to faster recall times, see (11)). Thus, any of these circumstances is enough to capture the facilitatory interference of partially matching distractors. We see that ACT-R assumes a cue-based retrieval system that predicts a particular pattern of interferences due to distractors, and the pattern is, at least to some extent, observed in the resolution of dependencies. We will now turn to the parsing system that can leverage this organization of memory. In Section 3, it is shown that there is a class of parsers (transition-based parsing) that can be directly built as a case of cue-based retrieval.

Transition-based parsing

In this section, transition-based parsers are introduced. As we will see, the parsers are compatible with the memory structures discussed in Section 2 and can be, to a large extent, embedded in ACT-R.
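Before turning to the parser itself, the retrieval quantities just introduced, base activation (7), spreading activation (8) with fan-dependent associative strengths, and the mapping from activation to retrieval time (11), can be summarized in a short computational sketch. The following is only an illustration under default parameter settings; the function names and the toy chunk are not from the paper:

```python
import math

def base_activation(presentation_times, now, d=0.5):
    """B_i = log(sum_k t_k^(-d)), where t_k is the time since presentation k."""
    return math.log(sum((now - t) ** (-d) for t in presentation_times))

def associative_strength(cue, chunk, fan, S=20):
    """S_ji = S - log(fan_j) if cue j is present in chunk i, and 0 otherwise."""
    return S - math.log(fan[cue]) if cue in chunk.values() else 0.0

def spreading_activation(chunk, cues, fan, W=1.0):
    """S_i = sum_j W_j * S_ji, with weights proportional across cues (W_j = W/n)."""
    weight = W / len(cues)
    return sum(weight * associative_strength(cue, chunk, fan) for cue in cues)

def retrieval_time(activation, F=1.0, f=1.0):
    """T_i = F * exp(-f * A_i)."""
    return F * math.exp(-f * activation)

# Toy illustration of the fan effect behind (2-a) vs. (2-b): when the animacy cue is
# carried by two chunks (fan = 2), its associative strength is lower than when it
# singles out the target (fan = 1), so the target's activation is lower in (2-a).
# (F and f are fitted in the paper, so the absolute T values here are not meaningful.)
resident = {"form": "resident", "function": "subject", "sem": "animate"}
cues = ["subject", "animate"]
for animate_fan in (2, 1):                      # (2-a) vs. (2-b)
    fan = {"subject": 1, "animate": animate_fan}
    A = base_activation([0.0], now=2.0) + spreading_activation(resident, cues, fan)
    print(f"fan(animate) = {animate_fan}: A = {A:.3f}, T = {retrieval_time(A):.3g}")
```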
This embedding will be tested in the following sections. Transition-based parsers are parsing systems that predict transitions from one state to another, following decisions made by a classifier. Since the classifier plays a crucial role in this type of parser, these parsers are also sometimes called classifier-based parsers. Transition-based parsers are most commonly implemented for dependency grammars, and arguably, they are most successful and widespread when constructing dependency graphs (Nivre et al., 2007), but they have also been applied to phrase structure parsing (Kalt, 2004; Sagae & Lavie, 2005), including neural phrase-structure parsing (Kitaev & Klein, 2018; Liu & Zhang, 2017). This paper also implements transition-based parsing for a phrase-structure parser. We will look at a shift-reduce variant of the transition-based parsing algorithm, which is arguably the most common type of transition-based parser for phrase structures and also comes closest to the transition-based parsing of dependency graphs (see Sagae & Lavie, 2005).

Algorithm of transition-based phrase-structure parsing

The parsing algorithm works with two databases, a stack of constructed trees S and a stack of upcoming words with their part-of-speech (POS) tags W. When parsing begins, S is empty and W carries the upcoming words as they appear in the sentence, so that the first word appears at the beginning of the stack, followed by the second word, etc. Parsing proceeds by selecting actions based on the content of S and W. Every parsing step P is a function from S, W to actions A, P : S × W → A. Broadly speaking, three actions could be taken by the parser.

The first action, shift, pops the top element from the stack W and pushes it as a trivial tree onto the stack S. The element in W is a pair ⟨word, POS⟩; the tree moved onto the stack is just the POS tag with the terminal being the actual word. The second action, reduce, pops the top element (if the reduction is unary) or the top two elements (if the reduction is binary) from the stack of constructed trees S and creates a new tree. If the reduction is unary, the new tree has just one daughter under the root, the tree that was just popped from the stack. If the reduction is binary, the newly created tree has two daughters, the two trees that were just popped from the stack. In either case, the newly constructed tree is pushed on top of the stack S and it is specified what label the root of the tree has. It is assumed that all trees are at most binary, so no further reductions beyond binary reductions are necessary. Finally, the third action, postulate gap, postulates a gap and resolves it to its antecedent.3

There are several restrictions on the three actions. First, no shift can be applied when W is empty. When S is empty, no reduce can be applied, and when it has only one tree, reduce binary cannot be applied. Finally, no more than two postulate gap actions can be applied between two shifts. This last restriction ensures that the system does not fall into an infinite regress of gap postulation.

Let us consider a simple example: parsing of which boy left?. The phrase structure is shown in Fig. 3. In this illustrative example, we assume that the parser knows what the right phrase structure is and parses toward that structure. Of course, the interesting question is what happens when the phrase structure is unknown and the parser needs to decide what action to take. This is where cue-based retrieval becomes relevant.
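A minimal sketch of the transition system may help to fix ideas. The class below is an illustration only; its names are not from the paper, the gap's link to its antecedent is not represented, and the few actions shown for which boy left? are just one plausible prefix of a derivation, not necessarily the one assumed in Fig. 3:

```python
class ParserState:
    """S: stack of constructed trees; W: stack of upcoming (word, POS) pairs."""

    def __init__(self, words_with_pos):
        self.S = []                        # top of the stack = last element
        self.W = list(words_with_pos)      # front of the list = next upcoming word

    def shift(self):
        word, pos = self.W.pop(0)
        self.S.append((pos, word))         # a trivial tree: the POS tag over the word

    def reduce_unary(self, label):
        self.S.append((label, self.S.pop()))

    def reduce_binary(self, label):
        right, left = self.S.pop(), self.S.pop()
        self.S.append((label, left, right))

    def postulate_gap(self, label):
        self.S.append((label, "-NONE-"))   # simplified: antecedent resolution omitted


state = ParserState([("which", "WDT"), ("boy", "NN"), ("left", "VBD"), ("?", ".")])
state.shift()                    # push (WDT which)
state.shift()                    # push (NN boy)
state.reduce_binary("WHNP")      # (WHNP (WDT which) (NN boy))
state.postulate_gap("NP")        # subject gap, to be related to the wh-phrase
# ... the derivation would continue with further shifts and reductions until a
# single clause-level tree remains on S.
print(state.S)
```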
Parsing steps as memory retrievals

Generally speaking, the parsing step has to decide which action (among shift, reduce, and postulate gap) should be taken, and, if reduce is selected, how the reduction should be done: should it be unary or binary? What should the root label of the newly constructed tree be? This is the point at which transition-based parsing developed in computational linguistics meets memory systems established in psycholinguistics. We will assume and test the following linking hypothesis:

(17) Linking hypothesis between parsing and memory: A parsing step is a cue-based retrieval from declarative memory. The retrieval uses as cues the information from S and W, and the retrieved chunk specifies the action (from the actions in A) that should be taken as the parsing step.

Why should the linking hypothesis hold? Because of the way learning works in ACT-R. When language users are at some parsing step X, they are aware of the current context, represented by S and W. Their goal is to select the right action at that moment. Let us say they select one such action, fulfilling the goal of deciding what parsing step to take. This parsing step, consisting of the context and the action, is then stored as a chunk in declarative memory and can be recalled in the future to guide the same user through parsing steps with a similar context. This is arguably the most common line of how ACT-R agents learn (Anderson & Lebiere, 1998; Lebiere, 1999).

While it might be possible to think of the context as complete trees in S and all information in W, we will limit the amount of information in the two databases. It will be assumed that S and W carry only some features about the trees/upcoming words. The features are listed in (18). Thus, the parser itself never has a full snapshot of the phrase structure that it is deriving. It only carries some minimal, local information. The phrase structure can always be reconstructed through the parsing steps the ACT-R agent (and humans) took, but there is no single snapshot in which all the information is available to the agent. This position is common in ACT-R parsing, see, for example, Lewis and Vasishth (2005).

These features should be easy to understand, maybe with the exception of antecedent carried and lexical head. The antecedent-carried feature has only two possible values, yes or no. It is set to yes when an element has been parsed that needs to be resolved through a gap postulation and the gap has not yet been postulated. In this paper, it is assumed that only wh-phrases need to be resolved. That means that wh-phrases will be the only elements that form dependencies in the upcoming case studies. The lexical head is a terminal that projects its phrase (a verb is the head of a verb phrase, a noun is the head of an NP, etc.) and is relevant even beyond the phrase (e.g., verbs are heads of clauses, S; see Collins, 1997 on head projection in computational parsers, which this work follows). We will store lemmatized lexical heads.

As an example, assume that the sentence which boy left? is the only sentence parsed and stored in declarative memory. Then the parsing memory would solely include the parsing steps listed above. For instance, the parsing step reduce (binary) with label SBAR would be stored in declarative memory as shown in (19). Only the slots that carry a value are listed.

(19) Last parsing step of which boy left?, stored in declarative memory:
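A minimal sketch of what such a parsing-step chunk could look like is given here; the slot names and the particular context values are illustrative assumptions that mirror the features discussed above, not the paper's exact encoding:

```python
# Illustrative encoding of the chunk in (19): the action with its label, plus the
# context in which the action was taken. Slot names and values are hypothetical.
parsing_step_sbar = {
    "action": "reduce",
    "reduction": "binary",
    "label": "SBAR",
    "top_tree_label": "S",          # label of the tree on top of the stack S
    "second_tree_label": "WHNP",    # label of the tree below it
    "top_tree_head": "leave",       # lemmatized lexical heads
    "second_tree_head": "boy",
    "antecedent_carried": "no",     # the wh-dependency has already been discharged
    "upcoming_word": None,          # W is empty at this final step
}
print(parsing_step_sbar)
```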
As one can see, the parsing step chunk stores the action (e.g., reduce) and the corresponding label (SBAR) along with the context in which the action took place. When parsing a novel context, retrieval will be attempted. The context will spread activation to parsing-step chunks in the declarative memory, and the chunk with the highest activation will be selected. For instance, assume that the current context is the one in (20), a context that differs from the one stored in (19) only in its lexical heads and that could represent, for example, the almost finished parsing of the sentence which woman dances?. The model would then very likely retrieve (19), since the current context will spread activation from almost all features but the lexical heads to (19), and no other parsing-step chunk from the sentence which boy left? will receive a comparable boost in spreading activation.

Under this view, undertaking a parsing step is a case of memory retrieval that follows the rules in Section 2.2. Consequently, it is predicted that parsing will be activation-driven and different parsing steps might require different amounts of time depending on the time it takes to retrieve them. Parsing steps with higher activations will be recalled faster than parsing steps with lower activations. Activations, in turn, are affected in exactly the same way as in any other case of cue-based retrieval in ACT-R.

Computational model of transition-based parser, training, and accuracy

To test the cue-based model of parsing, we consider a concrete declarative memory structure with chunks that represent correct past parsing steps. For our purpose, we use the Penn Treebank (Marcus, Marcinkiewicz, & Santorini, 1993). As is standard, we split the sections of the Penn Treebank as follows: all the sections up to and including section 21 are used to train the parser, that is, to collect the correct parsing steps; section 22 is used for development; section 23 is used to test the accuracy of the parser. Before training, we pre-process and prepare the phrase structures by (i) transforming phrases into binary structures in the way described in Roark (2001) (see Roark, 2001; Sagae & Lavie, 2005 on why this is needed), (ii) annotating phrases with head information, (iii) removing irrelevant information (coreference indices on phrases),4 and (iv) lemmatizing tokens so that lexical heads are stored as lemmas, not as inflected tokens.

Parsing of novel sentences consists of recalling the chunks from the declarative memory in the order of their activation. To calculate the activation for each chunk, the formulas in Section 2.2 are applied. The parser recalls the three chunks with the highest activations, and the action that has the highest activation, summed up over the three recalled chunks, is carried out.5 Even though it is not the goal of this paper to study the accuracy of the parser, it might be of interest that when tested on section 23, the parser shows Label Precision of 70.2, Label Recall of 72.4, and F1 of 71.3. When we restrict attention to sentences of 40 words or less, as is common, Label Precision is 73.7, Label Recall is 75.9, and F1 is 74.8.6 While these precision and recall values are far away from the current state of the art,7 this level of accuracy is sufficient for the modeling of the experimental items to which we now turn.

Modeling reading data

We will now go through the evidence for the cue-based model of parsing. Three cases will be discussed.8 Case 1 and case 2 are reading experiments, and case 3 consists of modeling self-paced reading corpus data.
In case 1 and case 2, we will see that the parser can be combined with a few extra assumptions about reading to generate reading latencies that fit the actual data. In case 3, we will see that the activations of parsing steps are good predictors of reading measures.

Case 1: Retrieval of dependents and retrieval of processing steps

We start by modeling reading data from Experiment 1 in Grodner and Gibson (2005) (also used in Lewis & Vasishth, 2005). This is a self-paced reading experiment (non-cumulative moving-window; Just, Carpenter, & Woolley, 1982). Participants read word-by-word sentences in which the subject NP is modified by a subject- or object-extracted relative clause (RC). A subject-gap example is provided in (21-a), and an object-gap example in (21-b). t signals a gap and it appears in the position in which it would be postulated according to Penn Treebank annotation rules and standard assumptions in linguistics. The gap shows where the dislocated argument, the relative pronoun who, is interpreted.

(21) a. The reporter who t sent the photographer to the editor hoped for a story.
b. The reporter who the photographer sent t to the editor hoped for a story.

There are six regions of interest (ROIs) that we model, underlined in the examples above. The ROIs start at the first word of the relative clause and stop at the penultimate word of the relative clause.9

Grodner and Gibson (2005) has been chosen for several reasons. Parts of their data have been simulated by the first explicit linguistic model of ACT-R, Lewis and Vasishth (2005), and played a role in other cognitive models of reading (e.g., Chen & Hale, 2021). It is good to see that our model can replicate their results. Not only that, we will see that our model can also significantly extend the findings of Lewis and Vasishth (2005). Lewis and Vasishth (2005) studied only the difference between reading times on the verbs in (21-a) and (21-b), while our model will be able to simulate actual reading times, not just differences between conditions, and it will do so for 12 words in total. Moreover, (21) is an interesting case in which parsing interacts with another aspect of cue-based retrieval, the recall of dependents (wh-words). By simulating Grodner and Gibson (2005), we will have evidence that different forms of retrieval, be it the wh-dependency in relative clauses or parsing steps, can be modeled by one and the same mechanism: cue-based retrieval.

In Sections 4.1.1 and 4.1.2, it is shown how the parser can be combined with other components of reading, and in Section 4.1.3, we inspect what syntactic predictions the parser makes. In Section 4.1.4, it is shown how the model can be fit to reading times through the estimation of ACT-R free parameters. In Section 4.1.5, we turn our attention to two other models that ignore or modify the parsing component of the model and see that the changes result in a worse fit. That is, we will see that not only can our model fit the data, but slight modifications in the parsing component degrade the fit, suggesting that the parsing component as proposed is needed for the modeling of reaction times.

Sequential model for reading

The cue-based model of parsing has been specified in Sections 2 and 3. The procedure goes as follows. When the parser is at word n and a parsing step needs to be carried out, the parser retrieves the three best fitting chunks from the declarative memory ordered by activation (calculated as a sum of base activation and spreading activation) and applies the most common
action shared by the three chunks. In case of a tie, the action from the chunk with the highest activation is used. The parser repeats this procedure until it encounters shift. At that moment, the parser is done with integrating word n and can move its attention to word n + 1.

In the self-paced reading that we are about to model, readers, however, do much more than just retrieving and applying parsing steps. It seems uncontroversial that a model simulating reading should, at the very least, attend to visual information on word n, retrieve lexical information on that word, parse, press a key (to reveal the next word), and move visual attention to the next word. To have a chance at having a descriptively correct computational model, we should add at least these components. Each of the listed steps is a different process with its own properties. The processes are linked together and controlled by the procedural knowledge in ACT-R. We see how the processes fire one by one on a word n in Fig. 4. It is assumed that these processes are repeated on every word. Postulating these sequential steps for self-paced reading is relatively standard (see Brasoveanu & Dotlačil, 2020; Lewis & Vasishth, 2005).

Fig. 4. Sequential model of reading on one word. Each box represents one subprocess. The arrows represent the order of subprocesses. There are two arrows from retrieve parsing steps because retrieve wh-dependent is not always triggered (only when a gap is postulated by the parser).

Firing each of the processes takes the same amount of time in the procedural system, specified in (22):

(22) Rule firing in ACT-R: r (r is a free parameter, default 50 ms)

In addition to that, submodules involved in each process can incur extra processing time. The process attend word visually attends to a word. To keep the model simple, I will not try to model any details of visual attention, and just assume that visual attention takes a fixed amount of time, in line with basic/default models of ACT-R (Bothell, 2017). It is assumed that attending takes 50 ms, the default value of rule firing in ACT-R. The processes retrieve lex. info, retrieve parsing steps, and retrieve wh-dependent will be discussed below. This leaves us with press key and move visual attention. Press key is modeled assuming the basic model of motor actions in ACT-R, which is inspired by the EPIC cognitive architecture (Bothell, 2017). It is assumed that readers have their finger prepared on the key to be pressed. In that case, the simple model of motor actions in ACT-R, followed here, postulates that it takes 150 ms to press the key. Crucially, during this time, the procedural system is free to carry out any other actions in the sequential model. That means that moving visual attention can happen concurrently with key presses. Since attending the next/upcoming word in the sentence should not take more than 150 ms, I will assume that moving visual attention does not add any extra time beyond the 150 ms required by the motor module.

Retrieval from declarative memory

Let us now go back to the three processes that involve declarative memory and retrieval therefrom: retrieve lex. info, retrieve parsing steps, and retrieve wh-dependent. These processes take r amount of time each, but aside from that, we want to know how much time it takes to retrieve an element. All relevant equations to calculate the retrieval time have been given in Section 2.2. Let us repeat that the retrieval time is a function of the activation of a retrieved chunk and is modulated by two free parameters, (23-a).
The recall of syntactic information is driven by context cues, and so is the recall of wh-dependents, but lexical retrieval has no cues that are of interest, and for this reason it is assumed that spreading activation is zero for this case of retrieval. This is most likely a simplifying assumption but, as we will see, it does not harm the fit of the model. The cues used for the spreading activation of parsing were described in (18). Since we now deal with self-paced reading, in which readers have no look-ahead possibility, it is assumed that no upcoming words are used as context cues (see (18-a)). For the wh-recall, only the syntactic category of the wh-dependent is used as a cue to increase spreading activation (more could be added; see Arnett & Wagers, 2017; Kush, 2013; Kush et al., 2015; Patil, Vasishth, & Lewis, 2016; Parker & Phillips, 2017; Smith & Vasishth, 2020 for investigations of what features are relevant for cue-based retrieval). The parameter d from (23-c) is set at its default value, 0.5 (see Anderson & Lebiere, 1998), and S from (23-d) is set at 20, which is high enough to ensure that S − log(fan_j) is always positive for any j appearing in the data (see Section 2.2). Apart from d and S, the formulas in (22) and (23) have four parameters: F, f, r, W_j. These will be estimated according to the procedure described in Section 4.1.4. Before we turn to that, we need to decide another thing: how is t_k from (23-c) found? For the retrieval of a wh-dependent, this is easy: it is the time elapsed between parsing the wh-dependent, that is, parsing who in (21), and postulating a gap, that is, at the subject position in (21-a) or at the object position in (21-b). For lexical retrieval and parsing-step retrieval, we estimate t_k in (23-c) from the frequency of words/parsing steps. The frequency of words is estimated from the British National Corpus. The frequency of parsing steps is estimated using the Penn Treebank corpus. The frequencies can be transformed into t_k according to the procedure described in Reitter, Keller, and Moore (2011), Dotlačil (2018), and Brasoveanu and Dotlačil (2020). The procedure is summarized in Appendix A. Finally, we need to clarify one last issue. At each word, parsing is finished when shift is recalled, at which point the processes following parsing take place, see Fig. 4. However, the retrieval of parsing steps can consist of several parsing steps, and in this way parsing differs from the retrieval of lexical information and the retrieval of wh-dependents, which usually retrieve only a single element per word. We could assume that retrieving each parsing step is a process in the sequential model on its own: that is, there could be several retrieve parsing steps processes per word. This position would be in accordance with ACT-R, which assumes serial order in the procedural system if the same process type is involved. However, there is a serious drawback to letting every parsing step be a process on its own. If each parsing step corresponded to one process, we would predict that reading times increase linearly with the number of parsing steps (see discussion in Section 4.1.1).
We do not want to go this route, for three reasons. First, we would add another factor that would affect reading times based on syntactic properties and this effect might completely obscure our main point of investigation, the role of memory in parsing. Obviously, it is preferable to not introduce confounds into our model. The second problem is that our results would become highly dependent on the type of parsing algorithm. We make use of the shift-reduce (bottom-up) parsing algorithm. In this algorithm, steps accumulate at the end of a phrase, so we would expect that ending phrases increases reading times. Top-down parsers accumulate parsing steps when a new phrase is started and generalized left-corner parsers can accumulate steps anywhere between these two extremes (Hale, 2014;Resnik, 1992). But it is not of our interest to investigate whether one algorithm is correct, rather, we want to see whether the transition-based parsing with the linking hypothesis 17 can be fit to data. Finally, it has been proposed that often repeated parsing steps are merged/compiled into one step through production compilation (Hale, 2014), so treating them as separate would probably be empirically inadequate (and too simplistic) even if we knew what the right parsing algorithm is. I will come back to this last issue in Section 5.2. For the just-listed reasons, another solution will be adopted. We assume that there is just one process, the retrieval of parsing steps, and the retrieval time is calculated based on the average activations of all the parsing steps recalled on that word. Other, more sophisticated relations between the number of parsing steps and the actual retrieval time have to be left for future investigations. Symbolic syntactic predictions of the model The syntactic parser constructs the correct phrase structure for the sentence, including the correct postulation of gaps for the subject and object relative clauses, see Fig. 5. As far as I know, this is the first data-driven parser that is built using assumptions of cue-based retrieval and, to a large extent, is compatible with the ACT-R cognitive architecture, yet it is able to parse sentences of this complexity correctly without any hand-coding of the syntactic rulesthe whole structure is generated by the data-driven transition-based parser. Fig. 5. The syntactic structure built by the parser. For readability, we transform binary trees into more common, n-ary versions. It is instructive to investigate how this parsed structure comes about. The full derivation is spelled out step by step in Appendix B. Here I only focus on wh-words and gaps since these are the crux of the investigation of Grodner and Gibson (2005) and Lewis and Vasishth (2005). The following is observed. When the wh-word who has been just parsed, the parser, which lacks any look-ahead possibility, assumes that it just entered a relative clause and postulates a subject gap. This is due to the fact that the parser relies on past parsing steps (collected from the PTB) and subject-relative clauses are most common types in the corpus (and arguably, English). When the relative clause turns out to be the subject-relative, the gap is postulated correctly and the transition-based parser does not attempt to postulate any gaps further downstream. However, when the relative clause is not the subject-relative, the parser again tries to discharge the dependency and guesses after processing the verb that the object gap should be postulated. 
This postulation of the gap is immediately followed by the retrieval of the whelement. The predictions are summarized in Fig. 6. The figure shows that the parser predicts gaps, in accordance with the theory of the Active Filler Strategy (Crain & Steedman, 1985;Frazier, 1987). The parser also matches the modeling assumptions of Lewis and Vasishth (2005), which derive slowdown in reading times of object-relative verbs by letting their ACT-R parser retrieve a wh-dependent at that position. However, unlike Lewis and Vasishth (2005) and its extension, Engelmann et al. (2013) and Engelmann (2016), this behavior of the parser is not manually created. The Active Filler Strategy is not assumed, it falls out as a consequence of the data-driven parsing and the fact that the cue-based retrieval at these positions favors postulate gap. Since the parser explores only one path, it would incorrectly predict that the wrong postulation of the gap in the object-relative clause at the subject position cannot be recovered from. To avoid this, a minor correcting behavior of the parser will be assumed. It is assumed that the parser checks at each word whether the structure postulated at the previous word is compatible with the new evidence (the new word). If not, the parser will reanalyze toward the new structure and continue in parsing. This means that the parser will reanalyze at the next word after who in object relative clauses and will remove the gap. Fig. 6. Two selected steps during parsing. After parsing the wh-word, the parser guesses that a gap should be postulated for the subject position (left side). This is correct for subject relative clauses, incorrect for object relatives. For the latter, when the parser moves to the next word (the), it reanalyzes to the correct structure with no gap. After parsing the verb, the parser postulates a gap at the object position (right side). More details about the parser's incremental construction can be found in Appendix B. The reanalysis itself is simplified. The parser simply takes over the phrase structure that it postulated based on the new evidence. It is assumed that the reanalysis incurs extra cost, as large as the time any subprocesses takes in the procedural system (the r parameter in 22). Another way to understand the parser's predictions is that it expects subject-relative clauses by default and switches to object-relative clauses only when the original expectation turns out wrong. Due to the cost of reanalysis, the parser thus has the ability to predict processing difficulties for object-relative clauses as a consequence of invalid expectations. Crucially, the prediction is generated by the memory system that can also predict processing costs of object-relative clauses due to the retrieval of the wh-dependent. Thus, we can derive two types of costs, an expectation-based cost and a wh-retrieval-based processing cost within one memory system, cue-based retrieval (see Staub, 2010 for arguments that both types of processing difficulties are observed in relative clauses). Bayesian modeling There are four parameters that we need to model to fit the reader to the data: F, f , r, W j . We will estimate them using Bayesian techniques. One should think about the Bayesian model that we consider as a Bayesian data analysis model that is used to provide the best fit of our cognitive model to the data of Grodner and Gibson (2005). I assume the structure of the model as shown in Fig. 7. 
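Schematically, this kind of estimation can be set up in PyMC3 along the following lines. This is only a sketch: the priors shown here and the toy deterministic stand-in for the ACT-R simulation are illustrative assumptions (the actual priors and the full reading simulation are described below and in Fig. 7).

```python
import numpy as np
import pymc3 as pm

# Placeholder data: observed mean RTs (ms) for the modeled regions/conditions.
observed = np.array([330.0, 340.0, 355.0, 370.0, 365.0, 350.0])

with pm.Model():
    F = pm.Gamma("F", mu=0.2, sigma=0.1)     # latency factor (assumed prior shape)
    f = pm.Gamma("f", mu=0.5, sigma=0.25)    # latency exponent (assumed prior shape)
    r = pm.Gamma("r", mu=0.05, sigma=0.03)   # rule-firing time, in seconds
    W = pm.Uniform("W", lower=1, upper=100)  # cue weight
    sd = pm.HalfNormal("SD", sigma=50.0)     # sd of the likelihood (assumed prior)

    # Toy stand-in: in the real model, `latency` is the mean RT produced by
    # running the full ACT-R reading simulation on all experimental stimuli.
    latency = 1000.0 * (3.0 * r + F * pm.math.exp(-f * (1.5 + pm.math.log(W))))

    pm.Normal("rt", mu=latency, sigma=sd, observed=observed)
    trace = pm.sample(draws=5500, chains=2, tune=500)
```

In the actual model, the deterministic stand-in is replaced by the full simulation described in Sections 4.1.1-4.1.3.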
In the graph in Fig. 7, which follows the notational conventions of Kruschke (2011), the top layer represents the priors and the bottom part the likelihood. The actual data that we try to model are the mean reading times per region (regions 3-8) in subject-relative and object-relative clauses.10 To calculate the likelihood, we run all stimuli from Grodner and Gibson (2005) using priors and the model described in Sections 4.1.1-4.1.3. We collect all reaction times for words 3-8 and take the mean; the mean is the Latency variable in the part described as ACT-R(F, f, r, W_j) in Fig. 7. The Latency serves as the mean of the likelihood of the model, which is a normal distribution with its standard deviation estimated as another parameter, SD. This last parameter is not part of the ACT-R model. The likelihood can be seen in the bottom part of Fig. 7.

Fig. 7. Bayesian model for parameter estimation of Grodner and Gibson (2005).

A similar way of modeling was successfully applied in Dotlačil (2018), Brasoveanu and Dotlačil (2018), Brasoveanu and Dotlačil (2019), and Brasoveanu and Dotlačil (2020). See Dotlačil (2018) for reasons why it is preferable to use this method rather than rely on default values of ACT-R and partially tweak them by hand selection. The following prior structure for the ACT-R parameters is assumed; SD, the parameter modeling the standard deviation of the likelihood, has its own prior. The priors for the first two ACT-R parameters have mean values of 0.2 and 0.5, respectively, but, roughly speaking, the distributions are broad enough not to exclude any value between 0 and 1. Values in the range 0-0.3 are most likely, but extremely low values are penalized. This takes into account previous findings that F and f, modulating retrieval times, are in language models almost always below 0.5 but not exceedingly small (Brasoveanu & Dotlačil, 2020). The third prior, r, has a mean of 0.05 (seconds). This is the default value for r in ACT-R. Finally, the prior for W_j, measuring the weight of associative strength between a cue and a chunk, is set as a uniform distribution that takes any value between 1 and 100 as equally likely. This flat prior takes into account that we have very little evidence a priori about how cues are weighed for the retrieval of parsing steps and wh-dependents. The estimation is done using PYMC3 and MCMC sampling with 5,500 draws, 2 chains, and 500 burn-in. The Rhat values (Gelman et al., 2013) for the four parameters are below 1.05, showing that the chains have converged. More details about the model are given in Appendix C.

Results

Let us first summarize the posterior distribution of the modeled parameters; see also Fig. 8:
• F: median 0.05, sd 0.01

Fig. 8. Posteriors for the five parameters estimated in the Bayesian ACT-R model.

The posterior values of the first three parameters are not far off from previous estimations in psycholinguistics (Brasoveanu & Dotlačil, 2020). The posterior predictive distribution of the model is of the main interest. We want to see what our model predicts as mean reaction times and whether this fits the observed data. The predictions are plotted against the observed data in Fig. 9.

Fig. 9. Model 1 of reading: posterior predictive. The blue dots are predicted mean RTs and the blue bars provide the 95% credible intervals. The observed data are in yellow. The yellow triangles are observed mean RTs, and the yellow bars are +/− 2 standard errors, taken from Grodner and Gibson (2005).
The yellow triangles indicate the observed mean RTs for each word, the yellow bars indicate +/− 2 standard errors (means and SEs taken from Grodner & Gibson, 2005), the blue segments provide the 95% CRIs (credible intervals) for the mean RTs predicted by the Bayesian model, and the blue dots are the predicted mean RTs. The 95% CRIs cover the observed mean RTs, and moreover, the observed mean RTs are often close to the mean RTs of the model. That is, we see that the parameters can be estimated in such a way that the model very closely fits the data. Two things should be kept in mind in the evaluation of the model. First, all the parameters affect reading times of every word and most of the parameters affect multiple processes at the same time (e.g., F will affect lexical retrieval times, retrieval times of wh-dependents, and retrieval times of parsing steps). Yet, the parameters are estimated only once for all the processes and for the full run through the experiment-they are not estimated word by word and not process by process. Furthermore, in contrast to almost all previous works on ACT-R and linguistics (see Section 5.2 for a detailed comparison), there is almost no space for handcoding of the model. The only part of the model that is manually created is the sequence in Fig. 4, that is, the handful of the rules and their order. The translation of this sequence into reading times is derived by the computational cognitive architecture ACT-R, the parse is constructed by a transition-based parser (embedded in ACT-R), and the parameter estimation is generated by a Bayesian model. Syntax-free models of self-paced reading We see that the model developed in Section 4.1.1-4.1.3 can approximate mean RTs reasonably well. We now check whether the syntactic parser, which is the main point of this investigation, is at least partially responsible for this success. We investigate this question by comparing the model to two other models. In Model 2, it will be assumed, contrary to the symbolic predictions of the transition-based parser, that no Active Filler Strategy is present. That is, the parser will still retrieve parses but it will not postulate a gap at the wh-word/verb, rather, the parser will wait for the unequivocal evidence to do so. Let us see concretely what that means on the example sentences from Grodner and Gibson (2005), repeated from above: a. The reporter who t sent the photographer to the editor hoped for a story. b. The reporter who the photographer sent t to the editor hoped for a story. (24) If the parser waited with gap postulation, it would only posit the gap in the subject-relative clause, (24-a) when it reads the verb. For (24-b), the parser assumes a gap when it reads the preposition. That is, Model 2 is manually set to retrieve wh-dependents at different positions that the transition-based parser based on its learning corpus does. In Model 3, the syntactic component is completely switched off. This means we omit the step retrieve parsing steps in Fig. 4 (and with that, we also have to omit retrieve wh-dependent, since that step is dependent on the triggering of gap postulation). Apart from these changes, Model 2 and Model 3 are exactly the same as the first model. We now estimate the same parameters as for the first model and study posterior predictions for reading times per words 3-8. The posterior predictions are given in Figs. 10 and 11. We can see that both models are good enough to capture the general trend in the data. 
This should not be very surprising since the models still include lexical retrieval and other basic components, so that reading times can be approximated quite well. However, both models have a worse fit than Model 1. For Model 2, see Fig. 10, we see that the model fails at object relative clauses at the verb and the preposition. It is too fast on the former word since it does not postulate a gap, unlike Model 1, and too slow for the second word, since it postulates a gap, unlike Model 1. Compared to Model 1, this model also overestimates reading times for the subject-relative clause on the verb, which comes about because it tries to resolve the wh-dependency at this point, which will slow it down. In general, the model fails precisely in positions that we would expect it to fail. To be sure, the 95% credible intervals of posterior predictive distribution include most mean RTs, but this is at the cost of less precise posterior distributions for reading times, as can be seen when one compares the size of 95% credible intervals of this model and Model 1, see Fig. 9. Indeed, the median of SD, the parameter for the standard deviation of the likelihood, is estimated at 31, twice as large compared to Model 1. When we turn to Model 3, Fig. 11, we see again a worse fit in object-relative clauses. The verb in object-relative clauses is processed too quickly according to the model (the credible interval does not include the actual mean), arguably because no syntactic processes related to gap resolution slow down the reader. As was the case in Model 2, the 95% credible intervals include most mean RTs but at the price of being less precise about posterior distributions of reading times. This can be seen from the size of 95% credible intervals of Model 3 compared to Model 1 and from the fact that the median of SD is 25. Since the estimated parameters are tied to fewer processes in each word (F, f only affect the lexical retrieval), we see that even the best estimation of these parameters does not suffice to correctly predict the data-more is needed than just lexical retrieval. The best predictive accuracy of Model 1 is also clearly visible from its lowest widely available information criterion (WAIC) (Gelman et al., 2013) Summary The presented case study modeled the self-paced reading experiment from Grodner and Gibson (2005). It showed that the symbolic predictions of the data-driven cue-based model of parsing are in agreement with reading data. It also showed that it is possible to develop an endto-end model, which carries out the reading task just as participants of Grodner and Gibson (2005) had to do, and in which an estimation of four ACT-R parameters for the whole model is sufficient to fit observed mean RTs. This provides evidence that the cue-based model of parsing can be combined with other cognitive processes to simulate data from an experimental task like self-paced reading. Case 2: Lexical and syntactic processing In the first case study, we focused on the interaction between the retrieval of parsing steps and the retrieval of dependency. The second case study focuses on the interaction between the retrieval of parsing steps and the lexical retrieval. The interaction between lexical processing and syntactic processing has been investigated in the model of eye control in reading, E-Z Reader (in particular, E-Z Reader 10; see Reichle, Rayner, & Pollatsek, 2003;Reichle, Warren, & Mcconnell, 2009). 
E-Z Reader proposes a so-called staged architecture: the lexical process and the syntactic process are sequentially ordered; lexical processing precedes integration, which syntactic processing is part of. E-Z Reader provides a subsymbolic system that integrates the staged-architecture assumption and allows psycholinguists to develop quantitative predictions for eye-tracking data. A disadvantage of E-Z Reader is that it leaves unclear how symbolic systems translate into the subsymbolic equations. This is of less concern for lexical processing; however, symbolic processes like syntactic parsing cannot be straightforwardly linked to E-Z Reader calculations. The cue-based model of parsing does not face this challenge. We have seen that parsing steps can be translated into a quantitative measure (activations), and we have seen that this measure can be translated into reading times. Moreover, this translation is not postulated ad hoc. It is not created for this case of retrieval or just for this model. Rather, it builds on independent ACT-R findings. Thus, there is a possibility that the cue-based model of parsing can advance the impressive research on eye control in reading developed by the E-Z Reader community. I will use the third eye-tracking experiment of Staub (2011) to study whether the model can simulate lexical and syntactic processing and the interaction thereof. The experiment consisted of 2 × 2 conditions, summarized in (25). There were two manipulations: (i) in the critical region, italicized in (25), either a high-frequency word (walked) or a low-frequency word (ambled) was used; and (ii) the same word could either be integrated with the previous words (the Grammatical condition) or it could not be integrated (the Ungrammatical condition). In the example, the ungrammaticality is driven by the fact that the preceding word was a preposition which in this sentence cannot be followed by an -ed word.

(25) a. The professor saw the students that walked across the quad. (Grammatical, High Frequency)
b. The professor saw the students that ambled across the quad. (Grammatical, Low Frequency)
c. The professor saw the students over walked across the quad. (Ungrammatical, High Frequency)
d. The professor saw the students over ambled across the quad. (Ungrammatical, Low Frequency)

Three ROIs were measured in Staub (2011): the pre-critical word (that or over in (25)), the critical word (walked or ambled in (25)), and the spillover, the three words following the critical word (across the quad). Of the standard eye-tracking measures, I will focus on first-pass reading times and regressions, which revealed the effect of the lexical and syntactic manipulations (see Staub, 2011 for details). In Section 4.2.1, we consider the structure of the model with eye control. In Section 4.2.2, we look at the structure of the Bayesian model for free parameters. In Section 4.2.3, the results of the model are discussed.

Sequential model for natural reading

The model is almost identical to the model used for Grodner and Gibson (2005), see Section 4.1.1. There are two differences: first, no motor module for controlling key presses is involved, since we model eye tracking rather than self-paced reading; second, we now have to be explicit about the eye control module.

Fig. 12. Sequential model of reading on one word for eye-tracking simulation. Each box represents one subprocess, the arrows the order of subprocesses. When arrows branch, this signals parallel processing, that is, two processes running concurrently.

The scaffolding of the eye control module is taken over from E-Z Reader.
Our model is built on EMMA, which generalizes and simplifies the assumptions of E-Z Reader (Salvucci, 2001). Its general structure is shown in Fig. 12. This structure is compatible with the sequential model used for self-paced reading, see Fig. 4.11 What is important to observe is that linguistic processing is split into the lexical processing and syntactic processing and the two parts are interspersed with eye-movement/attention control: the attention and eye movements are programmed to move after the lexical retrieval is finished and at the same time that syntactic processing starts. This is largely similar to the position of E-Z Reader, with one simplification: E-Z Reader postulates another lexical access after eye movement to the next word was programmed. Just as in E-Z Reader, eye movement control is split into two stages: the initiation phase, in which eye saccade is planned; the execution of the movement. And just as in E-Z Reader, it is assumed that independently of eye movement control, visual attention is organized. The attention moves to the next word at the same moment as the eye movement is programmed and the move is instantaneous. However, attending to an object takes more time when the object is further away from the eye focus. The visual encoding time is calculated as shown in (26). d is the distance between the object and the current eye position, measured in degrees of visual angle. D is the visual properties of the object. Following Dotlačil (2018), I take D to correspond to word length, measured in the number of characters. For more details on EMMA and E-Z Reader, see Salvucci (2001) and Staub (2011). The lexical processing is the same as for the previous model in Section 4.1. The syntactic processing is almost identical. As was the case for model in Section 4.1, we calculate retrieval times from the average activation of parsing steps. Two modifications are added and they both are connected to the fact that the model will now simulate regressions, not just reading times. It is assumed that regression takes place in those two cases from word n: • when the activation of a parsing step is below a retrieval threshold, t, the parsing step on word n is not retrieved and eyes are programmed to regress; • when, on a word n, a reanalysis takes place (i.e., the syntactic analysis of n is not compatible with the analysis proposed on the word n − 1, see also Section 4.1.3), the regression is triggered with the probability p. These cases signal that the word n cannot be straightforwardly incorporated either because no parsing step can be recalled (case 1) or because a reanalysis is triggered (case 2). The regression interacts with eye control just as in E-Z Reader. It launches from the word that the eye focus is on, unless the eye movement control is in the non-labile saccade phasein that case, regressions wait for the end of execution and are triggered at the next word. Bayesian modeling Five ACT-R parameters are modeled. There are two parameters affecting the (lexical and syntactic) retrieval: F, f . Two parameters model regressions: t (the threshold) and p (the probability of regression due to reanalysis). One parameter controls eye movements: e, the amount of time it takes to prepare an eye shift. 
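The role of the two regression parameters just introduced can be illustrated with a minimal sketch of the decision made on word n. The function and argument names are expository assumptions, not the model's actual code.

```python
import random

def regression_triggered(avg_activation, reanalysis, t, p):
    """Decide whether a regression is launched from word n.
    Case 1: the retrieved parsing step's activation is below the threshold t,
            so no parsing step can be recalled for word n.
    Case 2: word n forces a reanalysis of the structure built at word n - 1;
            a regression then follows with probability p."""
    if avg_activation < t:
        return True
    if reanalysis and random.random() < p:
        return True
    return False

# Example: a word whose retrieved parsing steps are only weakly activated.
print(regression_triggered(avg_activation=-0.4, reanalysis=False, t=0.0, p=0.3))
```

As described above, the launch of the regression still has to wait for the eye-movement module when a saccade is in its non-labile phase.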
The other parameters are kept at their default values (as was the case for the previous model) and r is kept at the median value observed in the previous study (33 ms).12 Five parameters might seem like a lot, but keep in mind that we develop a cognitive model, we are not trying to fit the data to a regression model. This means that the parameters are kept the same across all three regions and both measures (24 data points in total) and the cognitive model has to model the whole process of reading with (just) these parameters. The structure of the Bayesian model is shown in Fig. 13. As was the case with the previous model, the model runs all stimuli from the experiment, collects all reaction times and regressions, and compares that the actual mean first pass reading times and mean probability of regressions. Apart from the five ACT-R parameters, we also model the SD parameter, the standard deviation of normal distribution that models the likelihood for the RT data (see also Section 4.1.4). The following prior structure for the parameters is assumed: The ACT-R parameters that were modeled in Section 4.1 have the same priors as the previous model (see Section 4.1.4 for justification). Let us turn to free parameters that are unique to this model, starting with p. We have very little evidence for any value of p, the probability of regression, apart from the fact that it cannot be an extremely large value, given that the highest mean of the probability of regression is 0.59. So, we keep the prior uniform and assume that it cannot be higher than 0.5, slightly lower than the highest mean (keep in mind that there are two ways to trigger regressions and p plays a role only in one of them). The threshold t is by default set at 0 (measured on the activation scale). We assume the prior to be a normal distribution with mean 0 and sd 10. This is a very broad, unrestricted prior since no recalled elements have an activation smaller than −10 and greater than +10. Finally, the prior of e is a gamma distribution, whose mean is the default value. e is measured in seconds. We assign most weight to values between 0 and 0.2-it seems very plausible that eye movement preparation should not be larger than 0.2 s (200 ms). The estimation is done using PYMC3 and MCMC-sampling with 3,000 draws, 2 chains, and 200 burn-in. The Rhat values of the samples for all the parameters were lower than 1.05. Results The results for first-pass times are summarized in Fig. 14. The triangles indicate the observed mean first pass reading times for the pre-critical, critical, and spillover region, the segments provide the 95% CRIs (credible intervals) for the first pass reading times predicted by the ACT-R model using the posterior distribution of the ACT-R parameters, and the dots are the predicted mean first pass reading times. Two things are worth observing. First, the model is able to predict first pass reading times per region: the pre-critical region is the fastest, the critical region is in between, and the spillover region is the slowest. This model correctly derives this behavior even though there is no "intercept" or "region" condition in the model-all the measures have to fall out from its simulation of reading. Second, the model, correctly and in accordance with E-Z Reader, generates increased reading times on the critical word as a factor of frequency, not grammaticality. 
The effect of frequency is washed away in the spillover region, largely in accordance with the data (there is a small effect of an interaction between frequency and grammaticality, which cannot be modeled by the current model and which is reported as non-significant in Staub (2011)).

Fig. 14. First pass reading times: predictions and data. The dots are predicted mean first pass reading times. The bars provide the 95% credible intervals. The predictions come from the ACT-R model using the posterior distribution of the parameters. The triangles are observed first-pass reading times, taken from Staub (2011).

Let us look at the model of regressions, which is driven by the syntactic processing. Before we turn to details, I will make some general observations. The data in Staub (2011) show that there are more regressions in the critical region in the ungrammatical condition compared to the grammatical condition. How could the cue-based model of parsing simulate that? There are two possibilities. First, it would fall out from our model if the activations for the ungrammatical sentences were smaller than the activations for the grammatical sentences when parsing the critical word. Second, this could happen if the ungrammatical condition triggered reanalyses more often. Let us start with the first one and reason about why we observe it. Transition-based shift-reduce parsers are quite robust, in the sense that they seldom halt (McDonald & Nivre, 2011). However, there is one difference between ungrammatical/hard-to-parse sentences and grammatical ones. In the ungrammatical case, the declarative memory will not carry chunks that match as many cues in the current context as in the case of grammatical sentences. After all, since we are dealing with an ungrammatical sentence, we are building a structure that has most likely not been observed before. Since ungrammatical sentences will find chunks that match the current context in fewer cues, they will spread activation less, and consequently, the activation of the retrieved chunks will be lower than the highest activation of the chunks retrieved for grammatical sentences. We also observe reanalysis in ungrammatical sentences because the grammatical parse proposed up to that point turned out to be incorrect. For both reasons, we expect to observe increased regressions in ungrammatical sentences at the point at which the ungrammaticality is triggered. Let us now check the quality of the quantitative fit. The results are shown in Fig. 15. The mean regressions are largely captured correctly. The data-driven model clearly captures the contrasts between the grammatical and ungrammatical conditions. However, there is room for improvement. The model underestimates regressions in the critical region (Region 2) in the grammatical condition, as if it expected the grammaticality effect to be larger than actually observed. Apart from the effect on the critical word, the model also predicts that grammaticality will affect regressions on the pre-critical word. This is in accordance with the data but was not predicted by Staub (2011) and E-Z Reader. The cue-based model of parsing correctly predicts this contrast because the parsing steps retrieved at the pre-critical word happen to have a higher activation in (27-a) than in (27-b):

(27) a. The professor saw the students that . . .
b. The professor saw the students over . . .

The estimated values of the parameters are summarized below. See also Fig. 16.
• The first two parameters have also been estimated in the first case study.
The medians for both parameters, F and f , are within one standard deviation of the median values found in Case study 1 and the posterior distributions are similar (cf. Figs. 8 and 16). The convergence is very encouraging. p and t are hard to interpret on their own.13 e, the preparation phase for eye movement, is at 0.006 s. This is a very low value and as far as I can see, the most worrisome issue with the model that future research could improve upon since the preparation phase for eye movement is commonly taken to be much greater (around 100 ms). The low value is likely caused by the fact that parafoveal attention is limited in the current model, so eyes have to move rather quickly to attend to upcoming words. The challenge is that increasing parafoveal attention results in the model skipping words, which significantly increases the complexity of Bayesian modeling (see also footnote). Summary The second case study modeled the eye-tracking reading experiment from Staub (2011). We saw that the ACT-R architecture allows us to build an E-Z Reader style model for eye control and eye movement that interacts with language comprehension. It was shown that the cue-based model of parsing links symbolic properties of the parser to subsymbolic values and generates detailed quantitative predictions for eye movements that are to a large extent correct. This provides evidence that the cue-based model of parsing can be combined with lexical retrieval and eye control to simulate data from an experimental task like eye-tracking reading. Case 3: Modeling corpus data So far, we saw that the syntactic parser constructed as a cue-based retrieval can to a large extent correctly match reading data from individual experiments. We now go beyond selected experiments and show that the model generalizes to a larger pool of data. In the current section, we will look at the predictions of the model for the Natural Stories corpus (NSC, Futrell et al., 2018). The Natural Stories corpus is a corpus containing 10 English narrative texts with 10,245 lexical tokens in total. The texts were edited to contain various syntactic constructions, including constructions that are very rare. The corpus was read by 181 English speakers using a self-paced reading moving-window paradigm and the self-paced reading data were released along with the texts. Furthermore, all the sentences were annotated according to Penn Treebank notational conventions by the Stanford Parser (Klein & Manning, 2003) and hand-corrected. The fact that the NSC has a plethora of syntactic constructions and includes manually controlled PTB-compatible syntactic parses makes the corpus particularly suitable for our goal. Parsing model Unlike in the previous cases, we do not try to model all the processes in reading to fit the model as closely as possible to reading time data. We only observe whether the syntactic processing model is a good predictor for reading data. We proceed as follows. Per word, we collect the average activation of retrieved parsing steps from the same declarative memory model used in the previous case studies. Since the NSC is self-paced reading corpus data, the parsing model assumes self-paced reading, that is, at the moment of retrieving parsing steps, it cannot look ahead and collect information about the upcoming words. We expect that the level of activation should negatively correlate with reading times (see Section 4.2). This finding would strengthen the evidence for the cue-based model of parsing. 
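Schematically, the per-word predictor is collected as follows. The sketch is only illustrative: the memory object and its retrieve_parsing_steps and update_context methods are assumed names, not the actual implementation.

```python
from statistics import mean

def collect_activation_predictor(sentence, memory, context):
    """For each word, retrieve parsing steps from declarative memory without
    look-ahead and record their average activation; this per-word average is
    the predictor that is later correlated with reading times."""
    predictor = []
    for word in sentence:
        steps = memory.retrieve_parsing_steps(context, word)   # assumed API
        predictor.append(mean(step.activation for step in steps))
        context = memory.update_context(context, word, steps)  # assumed API
    return predictor
```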
One worry might be that the syntactic processing might go astray, even more so because the NSC uses infrequent syntactic constructions. To avoid this, we collect at every word the correct syntactic parse at that word, as provided by the NSC. This correct parse is used as the context for retrieval: based on this parse, the retrieval of parsing steps is attempted and the average activation of the retrieval is recorded. That means that the parser will have the correct syntactic structure at every word and will use that context for retrieval. Thus, we can be sure that whatever we are to find, the finding is not obfuscated by the fact that the parser built an incorrect cognitive context that it uses for cue-based retrieval. Results The results are summarized using mixed-effects models with the dependent variable logtransformed reading time (logRT) and random factors subject (n = 181) and text (n = 10). We start with a simple model with just one fixed effect, ACTIVATION (z-transformed), the averaged activation of retrieved parsing steps per word, and by-subject (n = 181) and bytext (n = 10) random intercept and random ACTIVATION slope. The results are summarized in Table 1. The model shows that the effect of ACTIVATION is significant and goes in the expected directions: higher activations of retrieved parsing steps correspond to a decrease in logRTs. A more complex model in which various low-level confounding factors are included is also considered. The following confounds are taken into account: (i) POSITION (the word position in a sentence, z-transformed), (ii) ZONE (the word position in the whole text, ztransformed), (iii) WORD LENGTH (the length of the word as the number of characters, ztransformed), (iv) LOG(FREQ) (log-unigram frequency), (v) the interaction of word length × log unigram frequency, (vi) LOG(BIGRAM) (log bigram probability), and (vii) LOG(TRIGRAM) (log trigram probability). Frequencies, bigram and trigram probabilities are provided in the NSC. In the model, we ignore the first of each sentence since the first words are often outliers, and furthermore, bigram and trigram probabilities cannot be calculated at the beginning of a sentence. We also ignore words directly followed by punctuation marks since these are known to show wrap-up effects, not modeled by the parser. Finally, the model included by-subject random intercept and random ACTIVATION and ZONE slopes and by-text random intercept and random ACTIVATION slope.14 The results of the model are shown in Table 2. We see that after adding the confounds, ACTIVATION importantly remains significant and the effect goes in the expected direction, showing that the role of activation of parsing steps cannot be explained (away) by the considered low-level factors. We now proceed to another model, which breaks down the role of ACTIVATION and can reveal what drives the effect observed in Table 2. Two possibilities for the source of the ACTIVATION effect are of theoretical interest. We know that the activation increases with the increase in the number of cues that match between the context and the retrieved chunks (i.e., the facilitatory effect of partial distractor match in ungrammatical sentences, see Section 2.2). It is possible that our finding in Table 2 is driven by the number of cues matching, that is, the increase in the matching features correlates with a decrease in logRTs. 
We also know that the activation decreases with the size of fan of cues: if a cue matches many parsing steps, it is not very useful and does not boost activation as much as when it matches only few parsing steps (i.e., the inhibitory interference due to partial distractor match in grammatical sentences, see Section 2.2). If the fan size was the driving force, we would expect that the increase in fan size correlates with an increase in logRTs. To investigate this, we consider the model whose estimates and t-values are summarized in Table 3. In this model, we substitute ACTIVATION with the z-transformed factors # MATCHING CUES (how many cues are matching?) and FAN SIZE (the average fan size of cues). The effect of # MATCHING CUES is highly significant and goes in the expected direction. The effect of FAN SIZE is non-significant. We can conclude that the effect of activation is driven by the match in features, rather than the size of the fan of labels. It is left open to the future research why the number of matching features, but not the fan size, seems to be crucial in modeling reading times and the effect of parsing on reading times, at least in the case of the Natural Stories Corpus. Next, we investigate the question of how the observed effect of activation on reading times compares to well-investigated and related theoretical concepts in computational psycholinguistics: surprisal from Surprisal Theory (Hale, 2001) and integration cost from Dependency Locality Theory (Gibson, 1998(Gibson, , 2000 (see also Section 5 for a comparison of the cue-based model of parsing to Surprisal theory and other related works). We consider a model in which, besides activation and the low-level factors introduced above (see Table 2), the following measures from computational psycholinguistics are added: a surface surprisal estimate, namely, 5-gram surprisal trained on Gigaword corpus (Graff & Finch, 2007), two hierarchical surprisal estimates, namely, a surprisal using the parser from Van Schijndel and Schuler (2013) trained on the Penn Treebank data sections 2 through section 21 reannotated using generalized categorial grammar, GCG (Nguyen, Schijndel & Schuler, 2012) and a probabilistic contextfree grammar (PCFG) surprisal trained on the same sections of Penn Treebank but using the original labels, that is, no reannotation, and finally, integration cost of DLT that additionally assumes that coordination is less expensive and that excludes modifier dependencies. Finally, since it is possible that the effect of just introduced psycholinguistic measures spills over to the following words (see also for detailed investigations of spillover effects), the model also includes one-word spillover for each of the five predictors. The surprisal values (apart from the PTB with no reannotation) were also used in Van Schijndel and Schuler (2015); Shain, Blank, Schijndel, Schuler, and Fedorenko (2020) and the DLT values were also used in Shain et al. (2016).15 The model summary is given in Table 4. ACTIVATION remains significant even after these psycholinguistic measures are added. 
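For concreteness, a reduced version of such an analysis can be set up as follows. This is a simplified sketch with a single grouping factor and only a few covariates; the models reported in Tables 1-4 additionally include by-text random effects, further predictors and spillover terms, and the column names here are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per subject x word, with z-scored predictors (hypothetical file).
df = pd.read_csv("natural_stories_word_data.csv")

model = smf.mixedlm(
    "logRT ~ activation + word_length * log_freq + log_bigram + log_trigram",
    data=df,
    groups=df["subject"],        # by-subject random effects only in this sketch
    re_formula="~activation",    # random intercept and ACTIVATION slope
).fit()
print(model.summary())
```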
Table 4. Estimates for the mixed effect model log(RT) ∼ 1 + POSITION + ZONE + WORD LENGTH * LOG(FREQ) + LOG(BIGRAM) + LOG(TRIGRAM) + 5-GRAM SURPRISAL + 5-GRAM SURPRISAL SPILLOVER + GCG SURPRISAL + GCG SURPRISAL SPILLOVER + PTB SURPRISAL + PTB SURPRISAL SPILLOVER + DLT + DLT SPILLOVER + (1 + ZONE + 5-GRAM SURPRISAL + GCG SURPRISAL + ACTIVATION|SUBJECT) + (1 + ZONE|TEXT).

Finally, Table 5 summarizes the log-likelihoods of the models that use the same low-level factors as Table 2 plus one of the following theoretical measures: our main measure of interest, activation, the averaged activation of retrieved parsing steps per word (line 1), 5-gram surprisal (line 2), PCFG surprisal using the reannotated generalized categorial grammar, GCG (line 3), PCFG surprisal using the original PTB annotation (line 4), and DLT, which assumes that coordination is less expensive and excludes modifier dependencies (line 5). Every model in Table 5 also has a by-subject random intercept and random ZONE slope and a by-text random intercept. The 5-gram surprisal turns out to be the best model. The model that collects activations is worse than the surface estimate of surprisal (line 2) and the surprisal estimate based on PTB with GCG reannotations (line 3). However, our model with activations has a better fit to the data compared to the model with surprisal based on the original PTB annotations (line 4) and to the model based on DLT (line 5). Of the comparisons, the comparison between our model and the model with surprisal based on the original PTB annotations is arguably the most important, since these two models were trained on the same data set using the same (PTB) labels. In this case, the cue-based model of parsing compares favorably to surprisal.

Summary of the results

We have inspected three case studies that tested the predictions of the cue-based model of parsing:
• Case study 1 simulated the self-paced reading experiment of Grodner and Gibson (2005). The study shows that it is possible to construct a good-fitting reading model in which lexical retrieval, dependency retrieval, and parsing are built based on the same memory structures restricted by the same parameter values.
• Case study 2 simulated the eye-tracking experiment of Staub (2011). The study shows that the cue-based model of parsing can provide a link between the symbolic system (parses) and behavioral measures (reading times and regressions). It also shows that it is possible to build one model in which lexical retrieval and cue-based retrieval of parsing interact in the E-Z Reader style with eye control and in which all parsing and lexical retrieval are built based on the same memory structures restricted by the same parameter values.
• Case study 3 correlated the measure of parsing-step availability, stored in activations, with reading times using the data from a self-paced reading corpus, the Natural Stories Corpus (Futrell et al., 2018). The study shows that the cue-based model of parsing is a significant predictor of reading times, even after various possible confounds are considered and after estimates of other measures often used in psycholinguistics (surprisal and integration cost in DLT) are included. The prediction is driven by the matching cues between the context and the retrieved parsing step.

Case studies 1-3 provide evidence for the cue-based model as a computational model of human parsing.
Comparison to related works

In this section, it is discussed how the cue-based model of parsing compares to related proposals about parsing in computational (psycho)linguistics, including ACT-R linguistic models.

Surprisal

At least since Hale (2001), it has become very common to model processing difficulties using quantitative distributions estimated on other data, as is the standard procedure in computational linguistics. This paper follows this methodological line. Arguably, the dominant method to study the impact of parsing on online behavioral measures is to use the theory that connects processing difficulties to the surprisal of a word given its syntactic context, as introduced in Hale (2001). This account is commonly labeled Surprisal theory. The theory has been supported by corpus investigations (Boston, Hale, Vasishth, & Kliegl, 2011; Demberg & Keller, 2008). It has also been validated in controlled experiments (Jäger, Chen, Li, Lin, & Vasishth, 2015; Levy, Fedorenko, & Gibson, 2013; Linzen & Jaeger, 2016; Wu, Kaiser, & Vasishth, 2017), though see Vasishth, Mertzen, Jäger, and Gelman (2018) for a failure to replicate the evidence for surprisal reported in Levy et al. (2013). This section does not focus on empirical issues with Surprisal theory, though. Rather, the goal is to compare the theory to the current approach. As we will see, the two approaches share several assumptions about the bottleneck that causes processing difficulties. Surprisal theory states that processing difficulties are related to the self-information (also known as surprisal) of the event that the word w_n occurs, given the preceding context ctxt. In syntactic analyses, the preceding context ctxt is treated as equivalent to the words appearing prior to w_n in the same sentence, that is, w_1 . . . w_{n−1}.

(28) − log(p(w_n | ctxt))

It has been shown that, under some reasonable restrictions, (28) is equivalent to the Kullback-Leibler (KL) divergence (also known as relative entropy), see (29), where q is the probability distribution over structures given ctxt and p is the probability distribution over structures (T) given ctxt and w_n. The equivalence plays a role in the interpretation of Surprisal theory. One can think of Surprisal theory as an account that links processing difficulties to a high relative entropy between p and q. According to this interpretation, we can assume that readers track probability distributions over structures during incremental interpretation, and when p, the probability distribution over structures given ctxt and w_n, strongly diverges from q, the distribution over structures given just ctxt, processing cost is incurred. The formulas in (28) and (29) are also closely related to the cue-based model of parsing, as we will see now. Recall that association strength in ACT-R is the pointwise mutual information between cues and the chunk i, see (30), where p is a probability function, i is a chunk, and c is a cue in the current context. One can think of the chunk i as a parsing step that needs to be recalled to integrate w_n. Spreading activation is the expected value of the pointwise mutual information (also known as mutual information) and is calculated for a single chunk as shown in (31).16 Mutual information and the KL divergence are closely related.
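For reference, these notions can be written out under their textbook definitions; the following is a summary sketch of the quantities referred to as (28)-(31), not necessarily their exact original notation:

```latex
% Surprisal of word w_n given its context (cf. (28)):
\mathrm{surprisal}(w_n) = -\log p(w_n \mid ctxt)

% KL divergence between the structure distributions after and before w_n,
% with p(T) = p(T \mid ctxt, w_n) and q(T) = p(T \mid ctxt) (cf. (29)):
D_{\mathrm{KL}}(p \parallel q) = \sum_{T} p(T \mid ctxt, w_n)
    \log \frac{p(T \mid ctxt, w_n)}{p(T \mid ctxt)}

% Pointwise mutual information between a chunk i and a cue c (cf. (30)):
\mathrm{pmi}(i, c) = \log \frac{p(i \mid c)}{p(i)} = \log \frac{p(i, c)}{p(i)\, p(c)}

% Mutual information as the expected pmi, which equals the KL divergence
% between the joint distribution and the product of the marginals (cf. (31)):
I(I; C) = \sum_{i, c} p(i, c)\, \mathrm{pmi}(i, c)
        = D_{\mathrm{KL}}\bigl(p(i, c) \parallel p(i)\, p(c)\bigr)
```

The last identity is the relation between the two information-theoretic notions that is used in what follows to link spreading activation to a divergence between a joint distribution and the corresponding independent one.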
Using the relation between the two information-theoretic notions, we can rewrite the last formula as follows: To generalize the last formula, let us think of cues as the context preceding the word w n (as is done in ACT-R, where cues represent the current cognitive context of the agent; see also Section 2.2). The spreading activation measures how different the joint distribution of the parsing step and the context is from treating the parsing step and the context as independent. A high divergence signals that the parsing step and the context are dependent, a divergence close to 0 signals that they are independent. Since the additive inverse of activation is used to calculate observable difficulties like increased retrieval times and increased chances of retrieval failure, it is predicted that the more the parsing step and the current cognitive context are independent of each other, the more observable processing difficulties there are.17 We thought of i as a parsing step and cues as a cognitive context, because this was the implementation of mutual information in this paper. However, this is not the only possible implementation. Generalizing to any structures gives us (33), where T are the structures generating ctxt and w n , while C are all the structures generating ctxt. The cue-based model of parsing is a particular implementation of (33). The point of difference between (33), the relative-entropy interpretation of cue-based model, and 29, the relative-entropy interpretation of surprisal, is that instead of measuring the divergence between two probabilities over structures, we measure the divergence between their joint distribution and their independence. While both interpretations of processing difficulties seem plausible, there is a difference between Surprisal theory and the cue-based model of parsing. The former is established to account for parsing effects. The latter, however, is built inside a cognitive architecture. Treating spreading activations according to (32) is motivated independently of parsing. The motivation comes from other linguistic studies (Lewis, Vasishth, & Van Dyke, 2006) and a wide range of data on human cognition (Anderson & Lebiere, 1998;Anderson & Reder, 1999;Anderson et al., 2004). Consequently, when the cue-based model of parsing has to be fit to behavioral measures, modelers do not have the freedom to fit parsing independently of other cases of retrieval: every recall is treated the same way. Another way to look at this is that the approach in this paper provides a single model (ACT-R retrieval) to explain processing difficulties caused by expectations given the constructed syntactic context and difficulties due to the recall of recently constructed dependents (such as wh-elements in relative clauses). Finally, embedding the model in a general cognitive architecture allows researchers to principally connect the theory to observable behavioral data. Indeed, to the extent the fit to behavioral data in Section 4.1 and Section 4.2 can be seen as success, we have evidence that the cue-based model of parsing is well positioned to not only predict processing difficulties, but also to model reading times and regressions using one and the same model for any type of retrieval. This is in contrast to previous traditions that commonly treat memory-based processing difficulties and difficulties due to expectations given the syntactic context as separate even in models that try to investigate their joint effect on reading (Boston et al., 2011;Demberg & Keller, 2008). 
Surprisal theory made several steps to connect its syntactic predictions to other cases of retrieval and cognition in general Smith & Levy, 2013) but I think it is fair to say that the strength of the theory lies in capturing expectation effects driven by the syntactic context. Consequently, it is not restricted by properties of retrieval outside of language when quantitatively fitting reading data and it is usually not used to capture processing difficulties due to the recall of dependents. This has changed recently in lossy-context surprisal (Futrell & Levy, 2017;Futrell, Gibson, & Levy, 2020), which shows that an account building on surprisal can provide one framework for both the recall of dependents and expectation-driven effects. This computational-level approach, in contrast to the algorithmic-level approach developed in this paper, expands suprisal theory with an extra component (noisy context) to capture memory-driven difficulties. This complements the current approach, which expands memorydriven analyses of parsing with an extra component (insights from transition-based parsing) to capture expectation-driven effects on parsing. The approaches in ACT-R can be divided into two groups depending on how they encode syntactic knowledge. Either they assume that syntactic knowledge is stored in the declarative memory of the agent (Reitter et al., 2011, this paper) or that syntactic knowledge is present in the procedural knowledge (Brasoveanu & Dotlačil, 2018;Brasoveanu & Dotlačil, 2020;Dubey et al., 2008;Lewis & Vasishth, 2005;Jones, 2019;Vogelzang et al., 2017). The difference drives the assumptions about how behavioral measures are captured. Since the procedural system does not operate with activations, the procedural approaches would need to consider other mechanisms. In fact, it falls out from these approaches that reading times should correlate with the number of rules/parsing steps assumed (see also Kaplan, 1972), since procedural knowledge applies serially in ACT-R, so a large sequence of rules should form a bottleneck. While there is some evidence that the number of parse steps is correlated with brain activation (Brennan & Pylkkänen, 2017;Hale, Dyer, Kuncoro, & Brennan, 2018), I am not aware of strong evidence showing that the number of parse steps is (linearly) related to reading time. It is likely for this reason that the procedural systems ignore this straightforward prediction and focus on other predictions present in their systems. Hale (2014) investigates to what extent reading times can be predicted by the likelihood that parsing steps should be compiled into a single rule (through production compilation). That work is directly compatible with the current proposal, in fact, it can be seen as an aspect that complements the current research. While the cue-based model of parsing investigates learning of parsing in declarative memory (through activation), production compilation represents learning in the procedural memory and if correct, it could explain why only a single retrieval per word could often be assumed (see discussion in Section 4.1). Lewis and Vasishth (2005), among others, study how the activation of partially built structures stored in declarative memory affects retrieval times of those structures. The prediction forms the core of the cue-based retrieval. It is used in Lewis and Vasishth (2005) and the following work to study the processing of dependencies. 
Since the current work also assumes that dependents are recalled from declarative memory, it shares this particular prediction with Lewis and Vasishth (2005) (see also Section 4.1, in which the recall of dependents and the recall of parsing steps are combined and tested in a single model). However, the current model goes significantly beyond Lewis and Vasishth (2005) by assuming that syntactic knowledge is also stored in declarative memory and as such, recalling parsing steps is susceptible to the same principles as the retrieval of partially built phrases (dependents). Lewis and Vasishth (2005) provide several conceptual arguments for storing syntactic knowledge in the procedural system. However, none of these arguments are evidence against the current approach, as far as I can see. First, they point out that there is experimental evidence, showing that syntactic knowledge should be kept separate from lexical knowledge. This is compatible with the current approach since syntactic and lexical knowledge are kept separate (as two independent types of chunks in declarative memory). Second, they point out that the lexicon and the grammar map into different parts of brain activations and the latter, unlike the former, activates brain areas that have been independently established in ACT-R as regions of procedural systems (Anderson, 2007). Again, this is compatible with the current approach. While the syntactic knowledge is stored in declarative memory, deploying it requires the application of procedural knowledge, for example, the procedural systems shown in Fig. 4 and Fig. 12. Reitter et al. (2011) provide evidence that the syntactic knowledge should be part of the declarative system. One advantage is that we can straightforwardly use the same model for comprehension and production. Second, the model can account for syntactic priming effects in production. The model in this paper is compatible with the positive results of Reitter et al. (2011). There is another dimension as to how ACT-R parsers differ from each other. Almost all existing ACT-R parsers are constructed by hand. The parser in this paper and Reitter et al. (2011) are the only parsers, as far as I know, that are data-driven. There is a clear and significant advantage to the data-driven approach. From the modeling perspective, it makes it impossible for modelers to sneak in a good fit by tweaking hand-coded parsing steps. Second, it allows one to investigate the model on a plethora of various data. Third, it provides a general link between the model and the data: we do not need to discuss the model case by case, since it is fully and explicitly described by the algorithm of the transition-based parser implemented in ACT-R, see Section 3 and Section 4. Finally, building a data-driven parser is the first necessary step in understanding the learnability of syntactic knowledge. It is impossible to even start addressing the question of how parsing is acquired if it is not data-driven. It should be clear that the model in this paper is closest to Reitter et al. (2011). However, there are also differences between the two approaches. First, Reitter et al. (2011) use Combinatory Categorial Grammar (Steedman, 2001 and grammar-based parsing, while this paper uses context-free grammar with gaps and transition-based parsing. This is useful since it allows us to study how parsing interacts with dependency resolutions. Second, Reitter et al. 
(2011) build the data-driven parsing for (a few) ditransitive sentences, that is, they do not strive to generalize their approach beyond those sentences to arbitrary structures.19 Third, Reitter et al. (2011) focus on production, whereas this paper studies comprehension. Finally, Reitter et al. (2011) develop a model to generate known qualitative effects in priming, while this paper, through the application of ACT-R models in a Bayesian framework, shows that the approach can model quantitative data patterns.
Transition-based parsers
This section briefly compares the cue-based model to other transition-based parsers. It justifies the choice of this type of parser and argues why the accuracy of the parser is sufficient (at this point). Transition-based parsers are a class of parsers that played an important role in computational linguistics, especially for dependency grammars (see Kübler, McDonald, & Nivre, 2009;Nivre et al., 2007;Zhang & Clark, 2008). One advantage of transition-based parsers over graph-based parsing and grammar-based parsing is that they are fast (under standard conditions, they have linear time complexity), they are incremental, and they allow for rich feature representations (see McDonald & Nivre, 2011;Nivre, 2004). Transition-based parsers have also been applied to phrase-structure parsing at least since Kalt (2004) and Sagae and Lavie (2005). Recent neural transition-based parsers for phrase-structure building reach an F1 value of around 95% on PTB Section 23 (Kitaev & Klein, 2018;Liu & Zhang, 2017). Most parsers ignore gap postulation and resolution, in contrast to ours, but there are transition-based parsers that do include gaps (Coavoux & Crabbé, 2017). Transition-based parsers have also been used in computational psycholinguistics to model EEG data (Recurrent neural network grammars; Dyer, Kuncoro, Ballesteros, & Smith, 2016;Hale et al., 2018) and reading data (Boston, Hale, Kliegl, Patil, & Vasishth, 2008;Rasmussen & Schuler, 2018).20 While the high accuracy of state-of-the-art transition-based parsing is encouraging, as it suggests that this line of parsing can eventually be used to build a much more accurate parser than the one used in this paper, it should also raise worries. Why does the parser in this paper have a much lower accuracy compared to the state of the art? There are several reasons. First, it has been found that one of the disadvantages of transition-based parsers when compared to another class of data-driven parsers, graph-based parsers, is that their accuracy degrades with increasing sentence length and dependency length (error propagation; McDonald & Nivre, 2011). Traditional transition-based parsers, including the parser in this paper, explore just one path. They have to greedily select which path they will follow and stick to it until the end of the sentence. Thus, early mistakes will propagate the error throughout the whole sentence. Better transition-based parsers mitigate this type of mistake through beam search or methods to recover from errors. While the adaptation of these methods could be investigated for psycholinguistics, we are not primarily interested in the best accuracy of the parser on complex Penn Treebank sentences, but in parsing that is human-like. Indeed, it is well known that the human processor also shows error propagation in parsing, as witnessed by the fact that readers struggle more to recover from a garden path the longer the wrong interpretation is held (e.g., Frazier & Rayner, 1982).
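The greedy, single-path character of traditional transition-based parsing can be illustrated with a stripped-down transition system. The sketch below is a toy, not the parser of this paper: it has only SHIFT and REDUCE actions, chooses each action with a hard-coded local scorer, and never revisits a decision, so an early attachment is carried through to the final tree.

```python
def greedy_score(action, stack, buffer):
    """Toy local scorer: prefer REDUCE as soon as two items sit on the stack.
    A real parser would learn these preferences; the point here is only that
    one action is chosen per step and the choice is never undone."""
    if action == "REDUCE" and len(stack) >= 2:
        return 1.0
    if action == "SHIFT" and buffer:
        return 0.5
    return -1.0

def parse(words):
    stack, buffer, trace = [], list(words), []
    while buffer or len(stack) > 1:
        # Greedy, single-path decision: take the best-scoring legal action now.
        action = max(("SHIFT", "REDUCE"), key=lambda a: greedy_score(a, stack, buffer))
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        else:
            right, left = stack.pop(), stack.pop()
            stack.append((left, right))        # early attachments are frozen here
        trace.append(action)
    return stack[0], trace

tree, trace = parse(["the", "cat", "sat", "on", "the", "mat"])
print(trace)
print(tree)   # strictly left-branching: the local preference propagates throughout
```

Because every attachment is frozen the moment it is made, an early wrong REDUCE persists to the end of the sentence unless beam search or an explicit repair mechanism is added.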
Thus, it is not a priori clear that error propagation should be avoided at all costs when we turn to psycholinguistics. For example, in the manual inspection of the parser accuracy results on PTB Section 23, it was found that coordinations were often misanalyzed by the parser. The parser always assumed local/smallest conjunction, an assumption avoided by more sophisticated parsers. This made the parser less accurate for PTB data but more in line with human parsing since it is known that the human processor prefers local attachment for coordinations (Frazier, 1987). Another reason why we see a low accuracy is that the parser assumes a very straightforward relation between memory instances and a parsing step. A parsing step is simply stored in declarative memory and is recalled using simple relations described in Section 2.2.21 This is in contrast to complex training methods commonly assumed in current neural parsers. Relatedly, current computational parsers assume a much richer feature system: they are enriched by vector space models representing lexical information; syntactic information is usually encapsulated in 200 or more features (see Chen & Manning, 2014 for discussion, cf. the cue-based model of parsing, which postulates around 10 features). The decision to have a simple feature model is driven by the fact that it is important to first establish that cue-based retrieval has a measurable impact on retrieval times during parsing and can be useful in predicting reading times. For that, it is preferable to keep the model as comprehensible and simple as possible; otherwise, it would not be clear whether the results reported in Section 4 are due to the cue-based retrieval model or some confound we are not interested in but is present in complex models (e.g., meaning similarity present in word vector spaces). Compare this to the case of other models of cue-based retrieval, which also started from probably an oversimplifying position of retrieval driven by a small set of binary features, rather than postulating from the start that retrieval is driven by high-dimensional vector-based lexical models. Finally, it is worth pointing out that even though the accuracy of the parser is not very high, the examples chosen in Section 4 show that it is sufficient to be usable in psycholinguistics, as the parser delivered correct parses for the relevant experimental sentences. Another point of improvement would be to consider transition-based parsers that do not build the structure bottom-up. There are known issues with bottom-up parsing: it accumulates elements on the stack in right-branching structures, suffers from disconnectedness, and has problems when tied to incremental interpretation (see Crocker, 1999;Resnik, 1992). For now, the choice was driven by the fact that transition-based parsing usually is combined with bottom-up parsing. It remains to be seen whether comparable or better results can be achieved with other types of parsers, notably, left-corner parsers (cf. Lewis & Vasishth, 2005;Resnik, 1992). Conclusion This paper presented a novel psycholinguistic parser, the cue-based model of parsing. It has been shown that the theory of cue-based memory systems can be combined with transitionbased parsing to produce a parser that can accurately construct phrase structures and, when combined with the cognitive architecture ACT-R, can to a large extent simulate correct reading times and regressions. Note 1 We ignore the number cue since it does not distinguish between the (a) and (b) cases of Fig. 
1. 2 We ignore the animacy cue since it does not distinguish between the (a) and (b) cases of Fig. 2. 3 The action postulate gap is normally ignored in transition-based parsers, so parsers only proceed by shifting and reducing (but see Coavoux & Crabbé 2017;Crabbé 2015 as an example of transition-based parsers that do consider gap resolution). Ignoring gaps is possible if the end result is a match between hand-annotated and computerannotated parses of pronounced terminals but it would not work if we want to move from parses to actual interpretations. Ignoring gaps and their resolutions would also make the parser less useful for psycholinguistics, which often studies the effect of gap resolution on processing. 4 Unlike, for example, Roark (2001), we keep empty categories since they will be modeled by the parser. 5 Using three chunks, rather than a single chunk, to select which action should be carried out, makes the parser less sensitive to outliers and more accurate in syntactic structure building. Adding more than three chunks does not improve the accuracy of the parser. Unfortunately, retrieving three chunks, rather than a single chunk, makes the model go against the standard ACT-R assumptions (in the architecture only a single element is retrieved). I believe that this is justifiable, given the improved accuracy and the fact that some of the stringent ACT-R restrictions, assumed for much less structured and much less complex psychology tasks compared, are hardly tenable when modeling language (see also Boston et al., 2011). 6 Label Precision is calculated as the number of correctly constructed constituents divided by the number of all constituents proposed by the parser. Label Recall is calculated as the number of correctly constructed constituents divided by the number of all constituents present in the gold standard. F1 is the harmonic mean of the two accuracy measures. For the calculation, only non-terminal constituents are used for accuracy (i.e., trivial constituents like a, DT are ignored so that the accuracy measures are not artificially inflated). 7 Kitaev and Klein (2018) achieve F1 of 95.13 on PTB with a pre-trained ELMo word representations. I say more about the accuracy and comparisons between this parser and the parsers in NLP in Section 5.3. 8 The code to replicate the models can be found on https://github.com/jakdot/parser_and_ memory_additionalfiles.git. 9 We focus only on the relative clause regions and we stop before word 9, which is the last word in the relative clause and shows a large slowdown, possibly due to wrap-up effects. Since nothing in the model attempts to simulate wrap-up effects, one could worry that the fit of the model to the data would be driven by factors that are orthogonal to the model if we continued beyond word 8. As a check, though, another model, which included regions 2-10, was tested. The findings for regions 3-8 were not affected. 10 This is sometimes called an end-to-end modeling in ACT-R (Anderson, 2007): we do not abstract away anything; rather, we try to model the whole process that participants have to carry out in the experiment, from retrievals to key presses. 11 One difference is that in Fig. 4, move visual attention was run sequentially, after retrieve parsing steps. However, this is a mere convenience. The visual attention had to wait for the key press and the key pressing was the bottleneck in the process, so it did not matter whether we let the visual attention move concurrently with syntax, as we do in Fig. 
12, or after the syntactic processing is finished. 12 The model becomes brittle if r is estimated. In particular, low values of r lead to word skipping and word skipping makes Bayesian modeling complex. In short, we would have to also model how likely word skipping is to occur, adding an extra dependent measure in the model and we would have to separately collect reading times and regressions for those instances in which no skipping took place. This significantly increases the complexity of the model and makes it less transparent how the cognitive model connects to the data. 13 The threshold estimate of t at 3.7 might seem high. However, for any criticism of that value in the model, it should be kept in mind that the absolute value of activations of parsing steps, on which t depends, is arbitrary since S, the parameter in spreading activation, is hand-selected simply to ensure that every case of spreading activation is positive. 14 Adding more random slopes led to convergence failures. 15 I am thankful to Cory Shain for providing these data. 16 In the standard ACT-R notation, used also in Section 2, p(c) is not used, instead, one writes W c and reads it as "weight" (a free parameter to be estimated). 17 In this discussion, we ignore the base activation, which measures just how accessible a chunk is independent of context. This part of activation has no counterpart in Surprisal theory. 18 The model of Lewis and Vasishth (2005) was expanded in Engelmann et al. (2013), Engelmann (2016), and Vasishth and Engelmann (to apear). 19 As far as I can see, their model relies on PTB data just to collect rule frequencies, not to train as a full-fledged data-driven parser that could be deployed to, for example, parse a corpus. 20 While these computational psycholinguistic analyses make use of transition-based parsing, they are not closely related to this work. In contrast to the current account, the cited approaches do not reconstruct the parsers inside a cognitive architecture. Their goal is different from developing a single account of cue-based dependency resolution and syntactic processing. 21 The parser could be subsumed under a case of memory-based parsing, see Daelemans, Van Den Bosch, and Zavrel (2004). However, unlike the past cases of memory-based learning, which were inspired by memory structures to deliver the best accuracy on data-driven parsing, the current approach is inspired by memory structures to connect parsing to online behavioral measures. Such a link is not possible (or even considered) in, for instance, the theory of Daelemans et al. (2004). Appendix C: Further details on Bayesian ACT-R models for reading data of Grodner and Gibson (2005) This section presents details of the Bayesian ACT-R models for the data from Grodner and Gibson (2005) presented in Section 4.1. Two issues are covered. We investigate prior predictive distribution of the model and its robustness, that is, whether the model can also be fitted to simulated values based on Grodner and Gibson (2005). In prior predictive, we simulate hypothetical data based solely on the priors of the parameters of the model, as specified in Section 4.1.4. The simulations are created for the three models presented in Section 4.1: the data-driven cue-based model of parsing (Model 1), the syntactic model without Active Filler Strategy (Model 2) and the syntax-free model (Model 3). The simulations were run for 1,500 iterations. They are graphically summarized in Fig. C.1. 
The prior checks for the three models are close to each other and the 95% credible intervals include mean RTs of all regions. This shows that the priors for the parameters do not a priori disadvantage one model over another when fitting to the data. The 95% credible intervals in the prior predictive checks cover mean RTs from roughly 150 ms to, in some cases, more than 2,000 ms. The upper limit might seem too benevolent and could be further restricted to match more closely the domain expertise (see, e.g., Schad, Betancourt, & Vasishth, 2019). However, it was decided not to restrict this upper limit further. This is because there are three parameters in ACT-R model that affect reading times: F, f , and W j . It is not clear a priori which of these three parameters should be more limited in its range. Fig. C.1 also shows that on some words, the 95% credible intervals are wider than on others. The wider intervals are observed on content words. These are regions in which every item is lexicalized differently and in which lexical frequencies can strongly differ between different items. Since the model is sensitive to frequency, it will show large variations in those regions. Next, we check the robustness of the model (see also Schad et al., 2019). We want to see whether it can also be fit to data that are simulated from Grodner and Gibson (2005) based on standard procedures. Ideally, we should observe that the fit to such simulated data should be comparably good as the fit of the model to the actual data from Grodner and Gibson (2005). We proceed as follows: (i) we fit a linear mixed model to the data from Grodner and Gibson (2005); the model includes intercept, word position (factor with six levels), and type of relative clause (subject vs. object) as fixed effects; it also includes subjects and items random factors; (ii) we extract all parameter estimates from the linear mixed model; (iii) we simulate new data based on these estimates; and (iv) we fit a Bayesian ACT-R model to the newly simulated data. We repeat this procedure for 10 different simulated data sets. It turns out that the Bayesian model is quite robust in the sense that it can be fit well to the values simulated according to the just given procedure. On average, the models include simulated mean RTs in their 95% credible intervals of posterior predictive distribution in 81% of cases. That is, on average, 10 out of 12 simulated mean RTs fall in the 95% credible intervals. Four selected examples of Bayesian models and their fit to simulated data sets are given in Fig. C.2. The posterior distributions for the five parameters of the ACT-R model after the fit to the simulated data are shown in Fig. C.3. The distributions are summarized also here: Fig. C.1. Prior predictive for the models 1-3 of Grodner and Gibson (2005). Recall that model 1 includes syntactic information, model 2 postpones trace resolution, and model 3 is syntax-free. The dots are predicted mean RTs. The bars provide the 95% credible intervals. The yellow triangles are observed mean RTs, taken from Grodner and Gibson (2005). Posterior predictive for model 1 against simulated data. The data were simulated according to the procedure described in the text. The dots are predicted mean RTs. The bars provide the 95% credible intervals. The yellow triangles are mean RTs of the generated data. Four examples are selected. 
The top two cases represent a good fit (all or all but one simulated data fall inside the 95% credible intervals), and the bottom two cases represent a worse fit (three or more simulated data do not fall inside the 95% credible intervals). In sum, we see that our Bayesian model is robust enough to generalize to new similar data generated from the estimates based on Grodner and Gibson (2005).
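The prior predictive and robustness checks described in this appendix follow a generic recipe that can be summarized in code. The sketch below is a schematic stand-in rather than the paper's ACT-R model: the latency function, the priors, and the observed region means are all placeholders, but the workflow of drawing parameters from priors, simulating mean RTs per region, and counting how many observed means fall inside the 95% credible intervals is the one used above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_draws = 6, 1500          # six regions, 1,500 iterations as above

# Placeholder for observed mean RTs per region (ms); the real values come from
# Grodner and Gibson (2005) and are not reproduced here.
observed = np.array([350., 370., 390., 420., 380., 360.])

def simulate_region_means(F, f, W):
    """Illustrative latency model only: RT = F * exp(-f * activation) + noise.
    The activation term is a stand-in, not the model's actual activations."""
    activation = W * np.linspace(1.0, 0.4, n_regions)
    return 1000 * F * np.exp(-f * activation) + rng.normal(0, 15, n_regions)

# Prior predictive: simulate data using draws from the priors alone.
draws = np.array([
    simulate_region_means(F=rng.gamma(2, 0.2), f=rng.gamma(2, 0.5), W=rng.gamma(2, 1.0))
    for _ in range(n_draws)
])
lower, upper = np.percentile(draws, [2.5, 97.5], axis=0)

coverage = np.mean((observed >= lower) & (observed <= upper))
print("share of region means inside the 95% interval:", coverage)
```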
STUDENTS’ QUESTIONING SKILLS IN ENVIRONMENTAL POLLUTION COURSE USING CASE STUDY METHODS

Questioning skills are very important for prospective teacher students to master in order to help students learn actively in class later. Students' questioning skills are empowered through case study learning in the environmental pollution subject. Because the subject discusses pollution in the environment, solutions need to be found. The purpose of this study was to determine the questioning skills of prospective teacher students in an environmental pollution course using the case study method. This qualitative descriptive research was conducted in the odd semester of the 2020/2021 academic year. The measurement instrument scored the students' questioning skill level based on the cognitive levels of the revised Bloom taxonomy. The research data were analyzed using the percentage technique. The results showed that questions asked by students at the low order thinking skills level amounted to 83.33% and at the high order thinking skills level to 26.66%. It is therefore necessary to habituate students to asking questions with more intensive assistance, and teachers can provide more examples of questions at the high order thinking skills level.

INTRODUCTION

Questioning skills are one of the basic teaching skills that prospective teachers need to master. Prospective teacher students who are skilled at asking questions will be able to make students respond actively to learning and become interested in learning. Cahyani et al. (2015) stated that questions from teachers to students can be useful for obtaining an overview of student knowledge, prompting students to think, avoiding boring classroom conditions due to the absence of discussion, providing interaction in the learning process, and involving students actively in learning. Teacher questions in class can encourage students to think critically; further, good questions can encourage good thinking (Yuliawati et al., 2016). Questioning skills arise from cognitive, intellectual, or thought processes. Thinking is the process of finding and knowing the relationships between objects, ideas, and concepts to gain understanding. Thus, questioning skills can support critical thinking skills (Zulkifli & Hashim, 2019). Critical thinking skills are skills that young generations need in the 21st century. Various studies have been conducted on questioning skills, both of teachers and of students. The research on teacher questioning skills conducted by Yuliawati et al. (2016) shows that teachers ask questions at the cognitive levels of knowledge, understanding/comprehension, and application, which are categorized as low order thinking skills (LOTS), and at the level of analysis. The cognitive level of analysis is categorized as a high order thinking skill (HOTS). Meanwhile, questions at the other HOTS levels, namely evaluation and create, did not appear. Research by Cahyani et al. (2015) shows that students generally ask questions at the comprehension and application levels. Questioning skills can be possessed by every student who is a teacher candidate. However, good questioning skills need to be trained (Santoso et al., 2017; Zulkifli & Hashim, 2019). One way to do so is through the case study learning method in the environmental pollution course.
Environmental pollution is one of the current crucial issues that needs to be presented to prospective teacher students in the biology education study program. This course equips students with knowledge about environmental pollution and its management so that students can care about the environment. The method used in learning is the case study. Case studies are conducted by asking students to look for problems about pollution in the world and in the environment around humid tropical rainforest areas through articles related to the lecture topic, then asking students to analyze the existing cases and study the aspects in them. Furthermore, students are asked to make decisions in the form of solutions that can be proposed regarding the cases they choose. Case studies are an active student learning method that asks students to apply analytical knowledge and skills to solve complex problems related to the material discussed (Giacalone, 2016). Roell (2019) adds that case studies are a learning method based on descriptions of real events or hypothetical situations that require solutions and action. Students are expected to be able to make a decision or propose possible solution steps to the problem. Therefore, case study learning is expected to encourage students to develop the skill of asking questions. The treatment instruments were lesson plans and worksheets. The measurement instrument, which scores the students' questioning skill level based on the cognitive levels of the revised Bloom taxonomy, was applied to the questions compiled by the students. A total of 20 students who took the environmental pollution course were asked to compile three questions each. The compiled questions were analyzed using the percentage technique according to cognitive level.

RESULT AND DISCUSSION

Environmental pollution learning was carried out using the case study method. The environmental pollution lectures cover the definition of environmental pollution and water, air, soil, and indoor pollution. In the discussions, students are asked to determine a solution to pollution problems that exist in the world and in the surrounding environment. In each learning activity, group and classical discussions are carried out to discuss the pollution cases or problems at hand. During the discussions, students were also given the opportunity to ask and answer questions. At the end of the semester, students were asked to compile three questions related to the environmental pollution lecture material that had been discussed. The results of the analysis of students' questioning skills are shown in Table 1. Based on the results of the analysis, students were able to ask questions from the knowledge level up to the evaluation level. No student asked questions at the create level. Most of the questions asked by students were at the comprehension level, with a total of 51.67%, followed by the application level at 28.33% and the analysis level at 13.33%. These results indicate that the questions posed by students are still dominated by questions at the low order thinking skills level (83.33%), namely the knowledge, comprehension, and application levels. However, students were also able to ask questions at the high order thinking skills level, at the analysis and evaluation levels, amounting to 26.66%.
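The percentage technique used for Table 1 amounts to a simple tabulation of question labels by Bloom level. A minimal sketch of that calculation is given below; the label counts are placeholders for illustration, not the study's raw data.

```python
from collections import Counter

# Hypothetical labels for illustration only; the study scored 60 questions (20 students x 3).
labels = (["knowledge"] * 2 + ["comprehension"] * 31 + ["application"] * 17 +
          ["analysis"] * 8 + ["evaluation"] * 2 + ["create"] * 0)

LOTS = {"knowledge", "comprehension", "application"}
HOTS = {"analysis", "evaluation", "create"}

counts = Counter(labels)
total = sum(counts.values())
for level in ["knowledge", "comprehension", "application", "analysis", "evaluation", "create"]:
    print(f"{level:13s} {100 * counts[level] / total:6.2f}%")
print("LOTS:", 100 * sum(counts[l] for l in LOTS) / total, "%")
print("HOTS:", 100 * sum(counts[l] for l in HOTS) / total, "%")
```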
Regarding the number of students who were able to ask questions, all students asked questions at the comprehension level. Fifteen students were able to compile questions at the application level and six students were able to compose questions at the analysis level. Two students were able to ask questions at the evaluation level and two others asked questions at the knowledge level. The results showed that more student questions were at the low order thinking skills level than at the high order thinking skills level, in line with the research of Cahyani et al. (2015) and Yuliawati et al. (2016). Questioning skills are related to learning experience, knowledge, and understanding. The higher the level of understanding a student has, the more complex the questions that will be asked. Understanding itself is influenced by the learning method, reinforcement from the teacher, and interaction with friends, so it can be said that students' questioning skills are also influenced by these three factors. In addition, the more often students ask, the more they get used to asking questions, which can make them more critical. As Prilanita and Sukirno (2017) state, students' skills in asking questions are not innate but are shaped and developed. Agustini and Sopandi (2017) add that the large number of questions at the LOTS level could be caused by students' limited initial abilities and unsuitable strategies. Questioning skills can be developed through teacher guidance and mentoring with certain methods (Santoso, 2017). One of the factors that supports the ability to ask questions at both the LOTS and HOTS levels is the learning method. The learning method applied here is the case study. This method also makes students face real cases, both simple and complex, which can make students feel familiar and close to the material being studied, so they feel interested in learning. Students are given the opportunity to choose various topics according to the cases being studied, from which their curiosity about cases related to the material can be raised. Case studies as a learning method involve a process of discussion and negotiation which often raises questions for students such as how and why (Minniti et al., 2017). Bonney (2015) adds that the activities of studying and discussing cases encourage students to think, which can raise curiosity and questions. Giacalone (2016) explains that case studies involve two stages, namely identifying key concepts and offering solutions to the cases at hand. These stages require students to analyze the situation through various questions related to the case and to think about how to solve it. Furthermore, when students are faced with real situations and asked to make decisions, they are encouraged to connect knowledge with decision-making skills. Students are required to think, and even to think deeply, about what happened, how it happened, and how decisions about solutions were taken, or to explore innovative ways of making decisions.
A follow-up related to the results of this study is to improve the steps used to teach questioning skills. Research by Agustini and Sopandi (2017) and Zulkifli and Hashim (2019) shows that, if taught regularly, students' questioning skills can improve. At first, students' questions were at the low order thinking skills level, but at every meeting the skill of asking questions at the high order thinking skills level increased. In addition, teachers can also provide more examples of questions at the high order thinking skills level so as to motivate students to develop better questioning skills and critical thinking skills. This is in line with Yuliawati et al. (2016), who state that educators who ask questions can promote students' thinking skills and questions at the
Table 1. The results of the analysis of student question levels based on the revised Bloom taxonomy
A theoretical and numerical study of gravity driven coating flow on cylinder and sphere: two-dimensional and axisymmetric capillary waves The theoretical and numerical models for gravity driven coating flow on upper cylinder and sphere are formulated. Using a perturbation method, the governing equations which depend on one Bond number $Bo$ are derived for a liquid film flow down the outside of a horizontal cylinder and a sphere. They can be simplified to one-dimensional form due to the symmetries. The general structure of the two-dimensional and axisymmetric capillary waves under high $Bo$ condition is focused on. An asymptotic theory is used to solve for the free-surface profiles in the outer and inner region, respectively. Even though the evolution in the outer region is essentially different, there are inherent similarities in the inner region because the capillary ridges are proved to degenerate into the one-parameter family in high $Bo$ limit. Using appropriate numerical techniques, some parametric studies are performed on the profiles of the capillary waves. The spreading of the front recorded from the outer solutions quantitatively accords with the scaling laws in a top region at early time, but deviates obviously at later time. The comparison between the composite solutions constructed using the asymptotic theory and direct solutions calculated from the evolution equations is highlighted via both the global and local features. Agreement between the composite solutions and the direct solutions is good for high $Bo$ cases. This asymptotic behavior is common on both of cylindrical and spherical surfaces and not affected by the partial wetting process. I. INTRODUCTION In physicochemical hydrodynamics, the coating flow is a fluid flow in which a large solid surface area is covered with one or more thin liquid layers 1 . Examples of scientific and technical importance range from small scales 2 (coating processes in manufacturing) to large scales 3 (lava flow on volcanoes). The fluid flow is always coupled with some surface physicochemical processes such as wetting and spreading 4 . Even though the aim is often to study the way in which an extra quantity of liquid moves on an already wet wall, many of the coating flow problems contain the evolution of a moving contact line. The macroscopic hydrodynamic model breaks down at the contact line as the traditional no-slip condition causes a stress singularity. The no-slip condition can be relaxed by allowing slip in the vicinity of the contact line 5,6 or introducing a thin precursor film 7 on the solid substrate. These two approaches for dynamic contact line give rise to many coating flow researches in last two decades 8 . Most of the researches focused on gravity 9,10 or thermally 11 driven flow of a thin liquid film spreading on a solid surface. The region near the moving contact line is dominated by the surface tension effect. A capillary ridge is typically observed in this region, and the spreading rate of the moving front is determined by the propagation speed of this nonlinear traveling wave. Moreover, the capillary wave is unstable to the transverse disturbances, and subsequent evolution may give rise into a fascinating phenomenon of a fingering instability 12 , which depends on the free-surface profile at the onset of the destabilization. The capillary wave can be considered as a base state for the subsequent fingering. 
For this significance, it is important to study the quantitative capillary wave profile for understanding many features of film spreading and wetting. For gravity driven coating flow, if the liquid film has high viscosity, the flow becomes low Reynolds number, and the inertia effect can be negligible where gravity, viscous force and surface tension dominate. A lubrication theory is widely used to model viscous flow and simplify the hydrodynamic equations 8 . Numerous theoretical, experimental and numerical studies have examined the dynamics of gravity driven viscous films down an inclined plane 13,14 . In these canonical problems, the profiles of capillary waves were well studied by analytical and high-resolution numerical methods in the framework of the lubrication theory. Subsequent researches were paying attention to film flow over more complex topography, such as flow down the outside of a horizontal 15,16 or vertical 17,18 cylinder. But most of these studies are based on numerical simulation, and the limited computational examples can not describe the commonality within these phenomena. Moreover, because of the multi-scale features in coating flow, developing an appropriate numerical method with small discretization errors is a challenging process. There are also some papers to give theoretical perspectives for coating flow on a general curved face [19][20][21] . These studies put more emphasis on the mathematical formulation of governing equations in a general orthogonal curvilinear coordinate system. The lack of solution on specific surfaces makes these theories rather abstract, they would benefit from specific applications. One of the motivations in our study is to use the ideas of mathematical modeling behind these papers to solve more specific cases and find some similarities behind the phenomena. If the fingering instability is not considered, one of the difficulties to theoretically describe the gravity driven capillary wave on curved surface is that the contact line may be a three-dimensional curve due to the non-symmetry of a general solid substrate, which is intractable compared to the straight contact line on inclined plane. Even so, there are typical symmetric geometries which are more common in nature. For example, cylindrical and spherical surfaces are the representatives with high symmetry and constant curvature. They can be considered as curved surfaces generated by translating or rotating a circle in three-dimensional space. These two similar curved surfaces can be considered as a class of problem and used as a starting point for more general study. The tangential component of the gravity varies as a sinusoidal function on both of the cylinder and sphere, due to the same base curve -a circle for these two geometries, which is much different from the constant gravity component on inclined plane. Moreover, the curvature on cylindrical or spherical surface may affect the expression of the surface tension, which may make the free-surface profile essentially different from the planar problem. Some experiments concerning the flow and stability of thin films down the outside of a cylindrical and a spherical substrate had been examined by Takagi and Huppert 22 recently. They also presented an analytical study of viscous current at the top of cylinder and sphere. But the theory is based on conditions that the film is constrained near the top where the streamwise gravity is approximated to increase linearly, and the surface tension is neglected everywhere for simplification. 
The main motivation in our study is to explain these phenomena from a more deductive perspective. We will derive the governing equations to model the leading order physics of the coating flow down the outside of a horizontal cylinder and a sphere. A regular perturbation method is used to expand the hydrodynamic equations and to obtain the leading order terms which is more tractable. If we assume that the initial film is located symmetrically at the top of the cylinder or sphere, the capillary wave formed later may become two-dimensional (for cylinder) or axisymmetric (for sphere) which makes the governing equation simplified further. It is well known that Rayleigh-Taylor instability may occur if the high-dense fluid is located above less-dense fluid. We do not consider moving contact line on the lower cylinder or hemisphere in this paper to avoid mixing two kinds of instabilities. The basic problems of coating flow may arise under two types of constraint conditions: constant volume or constant flux. This study focuses on the dynamics of coating flow which is formed by liquid of constant volume. We assume that the capillary wave is not to be destabilized by any disturbance, namely that, the fingering instability will never occur so that we can study long-term evolution of an unstable capillary wave, even though it may be difficult to realize in a laboratorial environment. We will focus on the situations that a thin film flows down the large sized curved substrate. In this situation there is a small parameter in the evolution equation which makes the mathematics become a singular perturbation problem. Like the planar problem, the capillary wave on cylindrical or spherical surface always consists of three regions, each with its own characteristic scaling. In the outer region where the free surface curvature is small, the body force that drives the liquid is resisted by the viscous force, and the surface tension plays a negligible role. In the inner region local to the moving front, where the free surface curvature is large, the surface tension is balanced with body and viscous force, and a capillary ridge always evolves. A third contact region exists at the advancing contact line, where the no-slip condition is no longer valid. In this paper, the general structures in both the outer and inner regions are studied. A method of matched asymptotic expansions can be used to match the outer profile and inner profile, which constitute a complete capillary wave. We will show that, the capillary waves on cylindrical and spherical surfaces are closely related to that on inclined plane, but still have their own characteristics. The outline of this paper is as follows. In II, the theoretical formulation is developed by two scalar Partial Differential Equations (PDE) that govern the coating flow of thin liquid films on cylindrical and spherical surfaces. Due to the symmetrically distributed initial conditions, the governing equation can be simplified to a two-dimensional form for cylindrical problem or an axisymmetric form for spherical problem. We use the precursor film con-dition with a compatible disjoining pressure model to simulate the partial wetting process near moving contact line. In III, an asymptotic theory is elaborated to deal with the singular perturbation problem of high Bond number flow. In IV, the solution techniques for numerically solving the outer and inner equations introduced by the asymptotic theory and the complete evolution equations are discussed. 
In V, the numerical solutions of the outer equations, inner equations and the evolution equations are presented. We discuss the comparisons between the asymptotic theory and direct numerical solutions. The individuality and commonality in the two-dimensional and axisymmetric capillary waves are highlighted. Section VI summarizes the conclusions of the present work. A. Governing equation on cylindrical surface An incompressible Newtonian liquid film of density ρ, viscosity µ and surface tension σ, flows under gravity on a stationary rigid cylinder of radius R, the axis of the cylinder is horizontal. The standard cylindrical coordinates of azimuthal angle θ and axial length z are established on the cylindrical surface. The normal distance n = r − R is defined as the third coordinate of the complete coordinate system, where r is radial distance. The diagram of standard cylindrical coordinates and liquid film are sketched in Fig. 1(a), and note that the azimuthal angle θ measured downward from the top of the cylinder. The following scales are used to nondimensionalize the hydrodynamic equations in cylindrical coordinate system where (u, v, w) are azimuthal, axial and radial components of the fluid velocity, g is gravity acceleration, t is time and p is pressure, the variables with asterisk represent dimensionless. L and H denote length and thickness scales of the liquid film, respectively, and ǫ = H/L is aspect ratio. The values of velocity scale U and pressure scale P are standard Stokes scales when gravity and pressure terms on the right side of hydrodynamic equations are supposed to be in equilibrium with the viscous term. A regular perturbation method is used to simplify the three-dimensional hydrodynamic equations when the aspect ratio, ǫ, and reduced Reynolds number, ǫ 2 Re(Re = ρUL/µ), are both sufficiently small. This "long wave" assumption is similar to a lubrication theory for coating flow on inclined plane, and the main difference is that the kind of solid substrate is extended to a curved surface. By substituting (1a), (1b) and (1c) into hydrodynamic equations, the simplified Stokes equations and continuity equation on cylindrical surface can be derived after some algebraic manipulations The terms of O(ǫ) and O(ǫ 2 Re) are perturbation terms in the framework of a perturbation method, and the leading order form can be obtained by neglecting all higher order terms. In this paper, only the terms explicitly expressed in the equations (2a)-(2d) are retained to study the leading order physics. These leading order terms are transformed back to the dimensional forms for deriving a dimensional governing equation. The boundary conditions to solve (2a)-(2d) are no slip and no permeation conditions on solid substrate the continuous conditions of normal and shear stress at the free surface of liquid film and the kinematic condition which holds the movement of the free surface where h(θ, z, t) is the local thickness of liquid film. The Young-Laplace pressure p s is expressed in (4a), and the mean curvature of free surface κ on the right side is approximated by the summation of the mean curvature of cylindrical surface and the first order terms including h and its derivatives introduced by additional film thickness 21 . Note that the length scale to measure the Young-Laplace pressure is different from length scale L in (1b). 
A so-called capillary length which is derived by assuming the surface tension to be in equilibrium with the dominant driven force is often used to estimate the order of Young-Laplace pressure. In next section, we will show that the leading order physics may become singular in a special region if the surface tension is neglected, and the Young-Laplace pressure serves to smooth out singularities in the leading-order solution. Thus they are considered having a leading-order effect in the regions where the capillary length is much less than the global length scale L (the long wave assumption holds because the capillary length is still greater than the global thickness scale H). The azimuthal velocity u and axial velocity v can be determined by integrating (2a) and (2b) across the film thickness subject to (3) and (4b), the difference of radial velocity w between solid substrate and free surface is obtained by integrating the continuity equation (2d) along the normal direction. The final equation governing h(θ, z, t) is derived by substituting w in (3) and (5) into the integration of (2d) The governing equation (6) can be nondimensionalized using the following scales Then, the dimensionless form is obtained as where the Bond number Bo is This dimensionless number is determined by density and surface tension of the liquid, gravity acceleration, radius of the cylinder and thickness scale of the liquid film. Note that the constant term in (4a) which represents the mean curvature of the cylindrical surface is neglected in the dimensionless equation (8b) because the derivatives of constant curvature on cylindrical surface are zero in (6) and (8a). B. Governing equation on spherical surface The theoretical formulation of coating flow on spherical surface is analogous to the problem on cylindrical surface, if the standard spherical coordinates are used instead of cylindrical coordinates. Consider the gravity driven flow of a Newtonian liquid film with the same properties on a spherical substrate of radius R. The polar angle θ, azimuthal angle φ and normal distance n = r − R are introduced near the spherical surface. The sketch of the spherical substrate and the liquid film are shown in Fig. 1(b). The same scales expressed in (1a) and (1b) are used to nondimensionalize the hydrodynamic equations in spherical coordinate system, and the simplified Stokes equations system can be extracted from the leading order The boundary conditions for (10a)-(10d) are identical to that in cylindrical problem, except the expression of the Young-Laplace pressure and the kinematic condition at the free surface around the sphere. Finally the governing equation for local film thickness on spherical surface is Using the scales described in (7a), the dimensionless form is The definition of Bond number Bo is identical to that expressed in (9). According to the expression of Bo, for the liquid films which have the same physical property, the high Bond number represents thinner film flow on larger sized solid surface, and conversely the low Bond number represents thicker film flow on smaller sized surface. C. Two-dimensional flow on cylinder and axisymmetric flow on sphere Now let us focus on a special case of coating flow on cylindrical surface. A condition of ∂h ∂z = 0 is assumed to describe uniform distribution along axial direction. This condition represents a two-dimensional flow on the cylinder and the governing equations (8a) and (8b) can degenerate into a two-dimensional form The asterisk signs are neglected hereinafter. 
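Before the leading-order equations are used, it is worth checking that the expansion parameters are indeed small for a representative experiment. The sketch below evaluates the aspect ratio ε = H/L and the reduced Reynolds number ε²Re, and compares the capillary length with L and H. The fluid properties are illustrative values, and the velocity scale is taken as the usual gravity-viscous balance U ~ ρgH²/μ up to an O(1) factor, which is an assumption since the exact Stokes scales are not reproduced here.

```python
import math

# Representative values (assumptions for illustration): a silicone-oil-like film.
rho   = 970.0      # density, kg/m^3
mu    = 1.0        # dynamic viscosity, Pa.s
sigma = 0.021      # surface tension, N/m
g     = 9.81       # gravity, m/s^2
R     = 0.05       # cylinder/sphere radius, m
H     = 1.0e-3     # film thickness scale, m
theta_i = 0.5      # angular half-width of the initial film, rad

L   = R * theta_i            # streamwise length scale L = R * theta_i
eps = H / L                  # aspect ratio, must be << 1
U   = rho * g * H**2 / mu    # gravity-viscous velocity scale (up to an O(1) factor)
Re  = rho * U * L / mu       # Reynolds number
l_c = math.sqrt(sigma / (rho * g))   # capillary length, for comparison with L and H

print(f"eps        = {eps:.3e}  (long-wave assumption requires eps << 1)")
print(f"eps^2 * Re = {eps**2 * Re:.3e}  (negligible inertia requires << 1)")
print(f"capillary length = {l_c*1e3:.2f} mm vs L = {L*1e3:.1f} mm, H = {H*1e3:.1f} mm")
```

For these values the capillary length lies between H and L, consistent with the regime described above in which the Young-Laplace pressure has a leading-order effect only in the region near the front.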
The two-dimensional case can be realized by setting an appropriate initial condition for (8a). Consider an initial liquid film of flat dimensionless thickness h i = 1 (the initial film thickness is used as the thickness scale H), is located at the top of the cylinder where the absolute value of azimuthal angle θ is less than a preset value θ i . Because the range of azimuthal angle θ is from 0 to 2π, to achieve more simplification, assume that the distribution of initial film is symmetrical about the vertical plane, so that we can only study a semicircle region of 0 ≤ θ ≤ π. A precursor film of dimensionless thickness b is assumed ahead of the uniform film to overcome the contact line singularity. The whole surface is prewetted by the precursor film and the precursor thickness is presented as a part of initial condition, mathematically, it is The perturbation method may be invalid at θ i because of the discontinuousness. A continuous initial profile can be used to approximate (16) where tanh(x) represents a hyperbolic tangent function, a is a coefficient for controlling the width of transition zone. The boundary conditions of (15) are in which the subscripts represent derivatives. It implies that the flux into the flow domain is zero and the fluid volume can keep constant. The length scale of initial film can be expressed as L = Rθ i , thus θ i has a lower limit due to the request of the perturbation method For coating flow on spherical surface, a similar constraint condition ∂h ∂φ = 0 can be used to simplify governing equation (14a) and (14b) to an axisymmetric form It represents an axisymmetric flow on the sphere. The initial conditions and boundary conditions of (20) are identical to (16), (17) and (18). The main difference is that the range of polar angle θ is natively from 0 to π, and the additional symmetric plane is not necessary for spherical problem. The evolution equations (15) and (20) are the focus of this paper. D. The wetting model The wetting process commonly exists in the coating flow problems. There are three cases in the region near the contact line: a prewetting case, a complete wetting case and a partial wetting case 4 . For prewetting cases, there is not a physically real contact line (the whole solid surface is prewetted by a macroscopic liquid layer), and the contact area can be defined using the region where the bulk liquid meets the prewetted layer. If the solid surface is sufficiently dry in the region where the coating liquid has not arrived, the contact angle at the contact line may be zero or finite (depend on the particular liquid-solid material system), corresponding to complete or partial wetting cases. The film thickness near contact line may be microscopic (at the nanoscale) in complete or partial wetting cases. To approximate the combined microscopic intermolecular forces, a disjoining pressure model introduced by Frumkin and Derjaguin 23 relates observed equilibrium contact angles to the intermolecular forces that become important for liquid films of submicroscopic dimensions in partial wetting cases. The energy density associated with the disjoining pressure Π is minimized when the film thickness assumes a very small value. A computationally convenient model function is where the exponents n and m are positive constants with n > m > 1. The constant B is positive and has the dimension of pressure. The first term in (21) represents liquid-solid repulsion while the second term is attractive, leading to a stable film thickness at h min . 
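To make the wetting model concrete, the sketch below evaluates a two-term disjoining pressure of the kind described around (21). Since (21) and (22) are not reproduced in the text, the commonly used form Π(h) = B[(h_min/h)^n − (h_min/h)^m] with n > m > 1 and a standard small-contact-angle estimate of B are assumed here as plausible stand-ins.

```python
import numpy as np

def disjoining_pressure(h, h_min, B, n=3, m=2):
    """Two-term disjoining pressure: repulsive (h_min/h)^n minus attractive (h_min/h)^m.
    Pi vanishes at h = h_min and decays rapidly for h >> h_min."""
    return B * ((h_min / h) ** n - (h_min / h) ** m)

def coefficient_B(sigma, theta_e, h_min, n=3, m=2):
    """A small-slope force-balance estimate of B from the equilibrium contact angle
    theta_e (an assumed form standing in for (22), not quoted from the paper)."""
    return (sigma * theta_e**2 / (2.0 * h_min)) * (n - 1) * (m - 1) / (n - m)

sigma, theta_e, h_min = 0.021, np.deg2rad(10.0), 50e-9   # illustrative values
B = coefficient_B(sigma, theta_e, h_min)

h = np.linspace(0.5 * h_min, 10 * h_min, 5)
print("B =", B, "Pa")
print("Pi(h_min) =", disjoining_pressure(np.array([h_min]), h_min, B))   # ~0 by construction
print("Pi(h)     =", disjoining_pressure(h, h_min, B))                   # >0 below h_min, <0 above
```

Because Π vanishes at h = h_min, is repulsive below it and attractive above it, a uniform layer of thickness h_min is the preferred microscopic state; this is the stable film referred to in the next paragraph.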
This stable film is assumed on the whole solid surface, which can be considered as a precursor film and is compatible with the precursor condition. In the lubrication limit where the equilibrium contact angles θ e is small, the force balance is used to evaluate the constant B 24 As a result, Π is characterized by h min , θ e , m, and n only. B has nonzero value only in partial wetting cases, because the equilibrium contact angles θ e is zero in complete wetting cases. Unlike Young-Laplace pressure, the value of disjoining pressure at any location depends only on h. Π is zero when h = h min and becomes vanishingly small for h ≫ h min . Gradients of Π drive liquid motion but the effect is only important in the immediate vicinity of apparent contact lines. Since the disjoining pressure is assumed to depend on the local interfacial separation only, with no slope and curvature contributions, the validity of expressions (21) and (22) also requires the small free-surface slope and substrate curvature approximation. The formula of disjoining pressure is independent on the type of curved substrate. In the present formulation, disjoining pressure is an additional interfacial effect that may be thought of as a modification to the Young-Laplace pressure where p t is the total interfacial pressure. The governing equations on cylindrical and spherical surface can be modified after replacing p s by p t in boundary condition (4a). The dimensionless form of disjoining pressure is where ǫ = H/R is the aspect ratio if the radius R is used as the length scale, b = h min /H is dimensionless thickness of the stable film which can be considered as dimensionless precursor thickness described previously. The evolution equations become for cylinder, and for sphere, respectively, where Π d is which can be derived from (24). III. ASYMPTOTIC THEORY FOR HIGH BOND NUMBER FLOW Because the Bond number Bo is a unique dimensionless number in two-dimensional equation (15) and axisymmetric equation (20), the free-surface profile may depend on Bo in different ways. The flow characteristics are distinguished by the value of Bond number. Under high Bond number condition, the evolution may become a singular perturbation problem. We use a method of matched asymptotic expansions to study the high Bond number limit of (15) and (20). This method is firstly discussed by Moriarty, Schwartz, and Tuck 25 for modeling unsteady spreading of a liquid film on a vertical plate with small surface tension. A. Outer equation on cylindrical surface If the Bond number is very large, i.e. Bo ≫ 1, then the terms including Bo −1 can be neglected in an outer region where h θθθ ≪ Bo, and (15) is simplified to an outer form The quasi-linear form of (28) is This first order PDE can be solved using the method of characteristics, but unfortunately the analytical solutions can not be expressed using elementary functions due to the existence of a first kind elliptic integral in solving process. Even so, in a specific region where θ ≪ 1, sin θ ≈ θ ≈ 0 and cos θ ≈ 1, (29) can be simplified further This equation is definitely valid in θ ≪ 1 region, moreover, in the region where θ < 1, (30) can be considered as a leading order equation if the trigonometric coefficients in (29) are expanded using Taylor series. The dimensional form of outer equation (28) was firstly analyzed in Takagi and Huppert 22 and they derived a long-term similarity solution in small θ region. 
Following their ideas, the general solution of the dimensionless equation (30) can be given directly using the method of characteristics as (31), where h_i(θ) is the initial profile. Using the uniform film with a precursor layer expressed in (16) as the initial condition, the solution is given by (32a) and (32b), where θ_F is the location of the moving front. Because the initial profile is independent of the azimuthal angle θ, according to (32a) the film evolves with uniform thickness. The profile of (32a) and (32b) is discontinuous at the moving front, which can be considered a shock wave owing to the hyperbolic type of the outer equation. The front location θ_F can be determined from the conservation of fluid volume. If V is the dimensionless volume of that portion of the initial film lying above the precursor layer, as in (33a), then θ_F can be calculated from (34). The front speed can be evaluated by differentiation of (34). Even though the above solutions are derived under the condition θ ≪ 1, the definitions of θ_F, h_F and b_F remain general on the whole upper cylinder if the solution of (28) is known. In Appendix A, we will use the method of characteristics to derive an implicit form involving an elliptic integral, which can be considered the exact outer solution on the whole upper cylinder.

B. Outer equation on spherical surface

The evolution equation in the outer region on the spherical surface can be simplified from (20) by neglecting the Bo−1 terms, giving (36), whose quasi-linear form is (37). The only difference between the outer equations on the cylinder and on the sphere is that the right-hand term is doubled. Analogously, in the region θ ≪ 1 the second term on the left side of (37) can be neglected, and the simplest form follows. With the same initial profile, a shock wave solution can also be obtained. At a given time, the solution profile on both sides of the front location θ_F is uniform, which is identical to the cylindrical problem. The same volume conservation method can be used to determine θ_F. Note that the formula for the net volume V on the spherical surface (eliminating the coefficient 2π) differs from that on the cylindrical surface, as in (40). Owing to the additional factor sin θ in the net volume formula, the front location θ_F varies like t^{1/4} at later times, when the effect of the initial condition can be neglected, which differs from the t^{1/2} behavior in (34). Coincidentally, however, the front speed is identical to that of the cylindrical problem. The front speed θ̇_F was obtained in the region θ ≪ 1, for both the cylindrical and the spherical surface. Even though we cannot give explicit solutions of (28) and (36) in the region θ ∼ 1, there is a general method to compute the front speed on the whole upper surface using the values of θ_F, h_F and b_F. For the cylindrical problem, according to (28), the net flux across the front is sin θ_F (h_F^3 − b_F^3), which must exactly balance the flux defined using the front speed, as expressed in (42). For the spherical problem, according to (36), the net flux is equivalent to that of the cylindrical problem. From a mathematical viewpoint, (42) is a Rankine-Hugoniot relation of a weak solution system, as pointed out by Howell 21 , which describes the relationship between the states on the two sides of a shock wave. This Rankine-Hugoniot relation is mathematically consistent with the outer equations (28) and (36) and the volume conservation formulae (33a) and (40).
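The front quantities just defined can be evaluated without the full elliptic-integral solution. The sketch below takes the uniform upstream thickness h_F, the precursor thickness b and the conserved volume V as inputs, recovers θ_F from the volume statements in the text (V = (h_F − b)·θ_F on the cylinder and V = (h_F − b)·(1 − cos θ_F) on the sphere, the 2π factor eliminated), and evaluates the front speed from the flux balance sin θ_F·(h_F^3 − b^3) = (dθ_F/dt)·(h_F − b). Any constant mobility factor absorbed by the nondimensionalization is omitted, so the numbers are illustrative only.

import numpy as np

def front_location(h_f, b, V, geometry="cylinder"):
    """Front position theta_F from conservation of the volume lying above the precursor."""
    if geometry == "cylinder":          # V = (h_F - b) * theta_F
        return V / (h_f - b)
    if geometry == "sphere":            # V = (h_F - b) * (1 - cos theta_F)
        return np.arccos(1.0 - V / (h_f - b))
    raise ValueError("geometry must be 'cylinder' or 'sphere'")

def front_speed(h_f, b, theta_f):
    """Jump (Rankine-Hugoniot) condition: the flux jump across the front balances its motion."""
    return np.sin(theta_f) * (h_f**3 - b**3) / (h_f - b)

h_f, b, V = 0.4, 0.001, 0.19            # hypothetical outer-state values
for geom in ("cylinder", "sphere"):
    th = front_location(h_f, b, V, geom)
    print(geom, float(th), float(front_speed(h_f, b, th)))
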
Inner equations on cylindrical and spherical surfaces In previous subsection, we found some similarities of outer solutions on cylindrical surface and spherical surface, but the long-term evolution of these two kinds of shock waves is essentially different (with different scaling law). In this subsection, we will simultaneously discuss the evolution equations in the inner region for both cylindrical and spherical problems. The outer solution is invalid near the moving front, because the profile of shock wave is singular at θ F . The steepening effects in the region near θ F involve rapid and localized increases in the free surface curvature, and will be vigorously resisted by surface tension. In this inner region, high order derivatives included in surface tension terms are important. We can evaluate the width of the inner region by assuming the highest order derivative in (15) and (20) For cylindrical problem, an inner coordinate system can be established using the trans- The inner coordinate ξ defines a stretched coordinate system moving with the front speed. Using (43), the two-dimensional equation (15) can be transformed into the following Compared to the condition Bo ≫ 1 in the outer region, a more strong condition Bo 1/3 ≫ 1 is assumed in the inner region. To keep the leading order and neglect all O(Bo −1/3 ) and For spherical problem, using the same transformation (43), the axisymmetric equation (20) in the inner region becomes an observer in the inner system moving with the front will not perceive any time evolution of the capillary wave profile. The profile extends infinitely far downstream and upstream, and appears essentially flat far away from the front. Because the leading order inner equations on cylindrical and spherical surface are equivalent, now (45) can be used as a starting point to study the common characteristics of the cylindrical and spherical problems. We can integrate (45) with respect to ξ The integration constant d can be given by matching the profile onto h → h F as ξ → −∞, then by substituting d, (42), (48a) and (48b) into (47), the equation become is determined by the outer solution can be considered as a relative precursor thickness, with the boundary conditions It can be written as a third order Ordinary Differential Equation (ODE) because time plays a parametric role as δ(t) in (49). This boundary value problem is classic, firstly analyzed by Tuck and Schwartz 26 elaborately, and subsequently cited by many previous literatures 27, 28 . These references focused on the general description of the inner structure for the draining or coating flow problems on planar surface, but according to (44) and (46), in the inner region the profiles on cylindrical and spherical surface can degenerate into the planar front when It is not a surprising result because the terms introduced by the curved substrate in (44) and (46) The complete relationship between the final inner coordinate ξ ′ and original outer coordinate θ can be obtained from (43) and (48b) Alternatively, if the width of inner region is expressed as R∆θ, (51) can be derived using the well known capillary length 27 on inclined plane to nondimensionalize R∆θ where h N = h F H is the dimensional front thickness in the outer region, Ca = µU/σ = ρg sin αh 2 N /3σ is capillary number, α is inclined angle. Thus, the following formula is obtained as This derivation gives a direct connection between cylindrical (or spherical) and planar problem. 
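Equation (49) itself is not reproduced above. Matching h' → 1 upstream and h' → δ downstream in the integrated traveling-wave balance leads one to expect the classic planar form first analyzed by Tuck and Schwartz, sketched below; the paper's scaling may introduce different constant factors, so this should be read as the expected structure of (49) rather than its exact statement.

\[
\frac{d^{3}h'}{d\xi'^{3}}
 = \frac{(1+\delta+\delta^{2})\,h' - h'^{3} - \delta(1+\delta)}{h'^{3}}
 = -\,\frac{(h'-1)\,(h'-\delta)\,(h'+1+\delta)}{h'^{3}},
\qquad h'(-\infty)=1,\quad h'(+\infty)=\delta .
\]
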
The high Bond number condition implies the capillary length is much smaller than the radius of cylinder or sphere, i.e., l ≪ R. Since the width of capillary ridge ∆ξ ′ can be obtained for a given δ(t) in (49), we can estimate the width of inner region via According to (53), the width of capillary ridge is proportional to the cubic root of front thickness h F , and inversely proportional to the cubic root of Bo and sine of front location θ F . Thus we obtain a 1/3-power law for the width of capillary ridge which is an analytic expression derived from asymptotic theory. The asymptotic theory can be validated using the width ratio between the inner region and the outer region This ratio decreases with time because of increasing θ F and decreasing h F . For a given Bo, if the front location θ F is small (mostly arise at initial time), the ratio may approach or even be greater than 1, then the asymptotic theory may be invalid. But as time increases the asymptotic theory may become leading order valid. On the other hand, if the asymptotic theory is expected to be valid from the initial time, i.e., ∆θ/θ i ≪ 1 ⇒ ∆ξ ′ θ i sin 1/3 θ i Bo 1/3 ≪ 1, under the condition θ i ≪ 1 the initial polar angle θ i should satisfy Compared to condition (19), this is an additional constraint condition for initial film extent. D. Composite capillary wave in the inner region is obtained for each choice of the one-parameter δ(t). This inner solution can be rewritten in outer coordinates using the transformation (51), so that In a typical matched asymptotic expansions method, if we have obtained both the outer and inner solutions in leading order, the leading order composite solution over the entire flow domain can be constructed using a multiplicative composite expansion 29 h inter is the intermediate solution that is common to both of the outer and inner regions. Upstream of the moving front, where θ ≤ θ F , h inter = h F , and h outer is the solution of (28) or (36). Downstream of the front, where θ > θ F , the common value between the inner and outer solutions is the precursor layer thickness, so that h inter is equal to b. The composite solution is then The continuity of composite solution can be guaranteed using this constructed method. Note that (49) is translational invariance in the ξ ′ direction, because ξ ′ is not included explicitly on the right side of (49) and the boundaries are located infinitely far. is also a solution for this boundary value problem. The free translational parameter ∆ξ ′ can be determined by volume conservation if the profile of composite solution is given for sphere. This constructed method can be used on both cylindrical and spherical surfaces to obtain a composite capillary wave. E. Addition of disjoining pressure The asymptotic theory discussed previously should be modified intuitively if the disjoining pressure term is added in evolution equations. Starting from (25) and (26), because in the outer region the film thickness h ≫ b, like the surface tension terms, the disjoining pressure can be neglected in the outer region, and this term does not appear in the outer equations on cylindrical and spherical surface. 
Now pay attention to the inner coordinate system described in (43), for example, transform (25) in this inner coordinates Note that if we assume that Bo 1/3 ǫ ∼ 1, this term can be retained in the leading order inner equation Using (48a) and (48b), the final form of inner equation is where the dimensionless contact angle parameter is Only the rightmost term which represents disjoining pressure is added compared to (49). The same leading order inner equation on spherical surface can be obtained from the transformation of (26). Equation (60) is identical to the two-dimensional steady-state ODE which is derived by Eres, Schwartz, and Roy 30 for a gravity-driven draining film on a vertical plate, in which a similar disjoining pressure model is used. Even though in partial wetting cases, the inner equation on cylindrical and spherical surfaces can degenerate into a common form which includes the disjoining pressure as additional terms. As discussed in I, besides the outer and inner region, there is a third region existing at the advancing contact line, in which the characteristic scaling is much smaller than the inner region. The disjoining pressure is only operative in this contact region. Strictly speaking, the asymptotic theory described above only makes the inner solution matching to the outer solution, and an additional matched asymptotic expansion 26 can be implemented between the inner region and the contact region. But in present study, the inner region and the contact region are merged into a complete inner region and modeled using a unified inner equation. A. Numerical techniques for outer and inner equations Even though there is a characteristics method to solve the outer equation (28) or (36) analytically, the explicit expressions can not be written in elementary functions due to the existence of elliptic integrals as discussed in III A and III B. Nevertheless, an implicit form can be derived to calculate exact solutions of (28) or (36). The outer profile at a given time can be modeled using some nonlinear implicit expressions which includes the first kind (for cylindrical problem) or second kind (for spherical problem) incomplete elliptic integral. These expressions can be considered as a system of independent nonlinear algebraic equations and the Newton-Kantorovich's method can be used to obtain numerical solutions. The outer profile constructed using this numerical technique is exact because only local errors are introduced by the iterative method and the high precision numerical solutions can be guaranteed by controlling the residuals. A classic shooting scheme 26 is used to solve (49) (or (60) in partial wetting cases) and construct the inner solution. Numerical integration is performed using an adaptive fourthorder Runge-Kutta solver. Initial conditions are generated from an asymptotic equation valid far upstream where the uniform layer is only slightly perturbed, then the numerical profile in the inner system is constructed. The detailed numerical methods for outer and inner equations can be found in Appendix A. B. Numerical techniques for two-dimensional and axisymmetric evolution equations It is useful to develop a numerical approach to solve the complete two-dimensional and axisymmetric evolution equations (15) and (20) (or (25) and (26) A. Outer and inner solutions on cylindrical and spherical surfaces In this subsection we focus on the prewetting and complete wetting cases in which the disjoining pressure term in (60) is negligible (degenerate into (49)). 
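Before turning to the computed examples, the multiplicative composite construction described earlier can be written compactly. The sketch below assumes the outer and inner profiles have already been evaluated on a common θ grid and that the front location θ_F, the front thickness h_F and the precursor thickness b are known; the intermediate solution equals h_F upstream of the front and b downstream, exactly as stated in the text. This is an illustration of the construction, not the code used for the figures.

import numpy as np

def composite_profile(theta, h_outer, h_inner, theta_f, h_f, b):
    """Leading-order multiplicative composite: h_comp = h_outer * h_inner / h_inter."""
    # common (intermediate) solution: front thickness upstream, precursor thickness downstream
    h_inter = np.where(theta <= theta_f, h_f, b)
    return h_outer * h_inner / h_inter

Any translation Δξ' of the inner profile would be fixed beforehand by volume conservation, as described above.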
The solutions of outer equations (28) and (36) are obtained via the method of characteristics. (16) is used as the initial profile for the following computational examples, the initial front location θ i is set to be equal to π/16. This initial condition will not be changed unless otherwise stated. Corresponding to the outer solution at a certain time, the quasi-steady solutions of (49) in the inner region are integrated using the shooting method. Fig. 2(b). The relative precursor thickness δ is calculated using δ = b/h F , where b = 0.001, h F is obtained using the exact value in outer profiles at corresponding time as shown in Fig. 2(a). A capillary ridge has been evolved near the moving front. The surface curvature is large at the capillary ridge, which represents the effect of the surface tension. Because δ is increasing with the decrease of front thickness as time increases, we can see the peak of the capillary ridge and the maximum slope at the moving front decreases with time. The inset in Fig. 2(b) shows the refined profiles near apparent contact line, a primary minimum is typically observed in this contact region and which can be seen clearly in Fig. 3(a). The peak of the capillary ridge and maximum slope at the moving front in the inner profile are the decreasing functions of b, which is presented obviously in Fig. 2(b). profile in θ ∼ 1 region is monotonically increasing). But the front speed on spherical surface is observably slower than that on cylindrical surface (spent nearly sixfold time to arrive at the same angular position). Because the inner equation of spherical problem is identical to cylindrical problem (see (49)), compared to the inner profiles shown in Fig. 2(b), the inner solutions in Fig. 4(b) are just another parametric family with different values of parameter δ, which also belong to the one-parameter solution set of (49). B. The spreading of the moving front In high Bond number flow, the spreading speed of the moving front (front speed) can be approximated by the propagation speed of the shock wave in the outer solution. The numerical relationship between the shock wave location and time is constructed by recording the front location which is calculated using the exact outer solution at a given time. Figure 5 when the front location θ F arrives near π/2, the front thickness h F is about 27% greater than the film thickness at θ = 0, which is more than that calculated in cylindrical problem (18%). The composite solutions on cylindrical surface can be constructed by merging the outer profiles shown in Fig. 2(a) and inner profiles shown in Fig. 2(b) for a given Bond number. Fig. 7(a), and the basic assumption (54) for the asymptotic theory can be satisfied. Figure 7 (15), and the initial condition is set using (17). The range of time are identical to that shown in Fig. 7(a). The capillary ridge has been calculated directly from the evolution equation. It is apparent that the outer and inner region in these complete profiles can be distinguished. The outer region is far away from the capillary ridge where the free-surface derivative is small. The inner region can be defined at the capillary ridge where the curvature is observable. The width ratio between the inner and outer region is relatively large at early time, and decreasing as the time increases, which had been proved in (54). The inset in Fig. 7(b) shows the refined free-surface profile near apparent contact line at t = 6, which benefits from the adaptive method for directly solving the evolution equation. 
A primary minimum displayed is similar to that in inner profiles as shown in Fig. 2(b). To study the effect of the Bond number on the composite profiles, the composite solutions are constructed using different Bond number ranging from logBo = 5.0 to logBo = 8.0. Figure 8(a) shows the profiles of the composite solutions depending on Bo at t = 6. According to (51), the smaller Bond number corresponds to wider inner region. It is clear in Fig. 8(a) that the width of the capillary ridge (inner region) decrease with Bo, but the maximum slope near the apparent contact line is an increasing function of Bo. As comparative references, the typical profiles of complete capillary wave in the same Bond number range at the same time as Fig. 8(a) are shown in Fig. 8(b). In these direct solutions, the width ratio between the inner and outer region is decreasing obviously as Bo increases, which convincingly verifies the asymptotic condition 54. One interesting question is that what is the difference between the free-surface profile integrated from inner equation (49) and the profile calculated directly using evolution equation (15). This discrepancy as a function of Bo may represent an asymptotic behavior of the capillary ridge, and may be important to study the effect of curved substrate on the complete capillary wave. To describe the global features of the capillary ridge in the inner region, we should define two important parameters -the width and the peak of the capillary wave. We can use these two parameters to quantitatively describe both the inner solutions of (49) and the direct solutions of (15). The peak of capillary wave can be defined using the primary maximum value in the inner solution or direct solution. The definition of the width is more complicated, we can use a kind of angular difference to define the width of capillary wave in the direct solution where θ pmin is the angular position of primary minimum which is described in Fig. 7(b), and θ smin is the angular position of secondary minimum which is always located behind the wave peak. We can also use a difference of inner coordinates to define the width of capillary wave in the inner solution where ξ ′ pmin and ξ ′ smin are the inner coordinates of the primary and secondary minimum respectively, as shown in Fig. 3(b) . Figure 9 shows the distribution of data points which represent the width of capillary wave under conditions of different Bond number and time. Note that the ordinate is the width captured from the direct solutions. The abscissa is where h F and θ F are calculated from the corresponding outer solution, and ∆ξ ′ is the width captured from the inner solution. The reference lines are calculated according to (53) and the slope of them is Bo −1/3 . We can see that the CW (capillary wave) width deviates from the reference line observably when logBo = 5.0 at early time (with greater h F and smaller θ F ). These deviations are reduced at later time as the front thickness h F decreases (front location θ F increases). When logBo is greater than 6.0, the deviation from the reference line is slight at all stage as shown in Fig. 9(a). The 1/3-power law (53) is accurately recovered when logBo = 8.0. The distribution of the peak of capillary waves for different Bond number and time is shown in Fig. 9(b). Note that the ordinate is the peak captured from the direct solutions, and the two factors in abscissa h F h ′ max are calculated according to (48a) from the corresponding outer and inner solution, respectively. 
The solid line y = x is generated for reference. The CW peak deviate from the reference line observably when logBo = 5.0, and present a non-trivial feature (distribute at both side of the reference line) as the front thickness h F decreases. But as the Bond number increases, the asymptotic behavior for the peak fits to the y = x line. When logBo = 8.0, the CW peak recorded from direct solutions are almost identical to the corresponding value h F h ′ max calculated using the asymptotic theory at all stages, as shown in Fig. 9(b). Fig. 11(a) and 11(b). Similar to the cylindrical problem, the discrete data points calculated from (20) finally fit to the reference lines as the Bond number increases, which indicate the clear asymptotic behavior. The main difference between Fig. 11 and Fig. 9 is that for spherical problem the width and peak of the capillary waves deviate more observably from the asymptotic theory when features between the composite and direct solutions is also obtained. D. The capillary waves in partial wetting cases Now we focus on the partial wetting cases in which the disjoining pressure is operative. The evolution equations (25) and (26) wetting case and partial wetting case. First, the peak and maximum slope of the capillary ridge in partial wetting case is greater than that in complete wetting case. It belongs to the macroscopic effect of the disjoining pressure. Second, the primary minimum near apparent contact line in δ = 0.0035d cases is slightly less than that in δ = 0.0035 cases, as shown in the inset of Fig. 13(a), which is the microscopic effect of the disjoining pressure on the refined structure in contact region. The corresponding direct solutions solved from (25) under the conditions logBo = 6.0, t = 6 and b = 0.01 or b = 0.001 are shown in Fig. 13(b). The similar differences like the inner profiles are observed in the inner region, except that the location of the primary minimum in complete wetting case moves forward than that in partial wetting case, clearly seen from the inset. Figure 14(a) and 14(b) show the asymptotic behavior of the width and the peak of the capillary waves on cylindrical surface in partial wetting cases. The parametric conditions are identical to that in Fig. 9 and the disjoining pressure is set to the same value in Fig. 13. Compared to Fig. 9, the CW width and peak are slightly greater than that in complete wetting case. Except for this main difference compared to the complete wetting case, according to the distribution in Fig. 14 The disjoining pressure does not affect local asymptotic behavior on cylindrical surface. On spherical surface, the comparisons of the CW width and CW peak between the direct numerical results and asymptotic lines are shown in Fig. 16(a) and 16(b) respectively. Compared to the complete wetting cases as shown in Fig. 11, the differences of the CW width and peak induced by disjoining pressure on spherical surface are more observable than that in cylindrical problem (compare Fig. 14 with Fig. 9), which may be attributed to the common greater value of K calculated in spherical problems. Figure 17 illustrates the asymptotic tendency of the composite profiles under the parametric conditions t = 10 and b = 0.001 on spherical surface, logBo is ranging from 5.0 to 8.0 too. According to the plots shown in Fig. 16 and Fig. 17, both of the global and local features of the capillary waves indicate that the asymptotic theory in partial wetting cases is clearly validated for the spherical problem. 
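The width and peak definitions used in these comparisons are straightforward to automate. A minimal sketch is given below: it locates the primary maximum of a discrete profile (assumed to be the ridge peak), the primary minimum ahead of it near the apparent contact line, and the secondary minimum behind it, and returns the peak value together with the angular width θ_pmin − θ_smin. The profile is assumed to be sampled on a θ grid in increasing order; this illustrates the definitions and is not the extraction code used for the figures.

import numpy as np
from scipy.signal import argrelextrema

def capillary_wave_metrics(theta, h):
    """Peak value and width (theta_pmin - theta_smin) of the capillary ridge."""
    i_peak = int(np.argmax(h))                      # primary maximum (assumed ridge peak)
    minima = argrelextrema(h, np.less)[0]           # indices of all local minima
    ahead = minima[minima > i_peak]                 # candidates near the contact line
    behind = minima[minima < i_peak]                # candidates behind the peak
    i_pmin = ahead[0]                               # primary minimum: first one ahead of the peak
    i_smin = behind[-1]                             # secondary minimum: last one behind the peak
    return h[i_peak], theta[i_pmin] - theta[i_smin]
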
The film thickness is homogeneous in the small-angle region, but increases monotonically as the angle increases. The long-term spreading on the cylinder and on the sphere can be described using different scaling laws, which we will address in a future paper.

Appendix A: Exact solutions for outer and inner equations

The exact solutions of the outer equation (28) can be constructed with the method of characteristics. Upon a characteristic curve, (28) degenerates into an ordinary differential equation. Equations (A1) and (A2) are solved subject to the initial conditions θ(0) = θ_0 and h(θ, 0) = h_i(θ), which leads to the implicit form of the solution (A3), in which the first expression at θ = 0 is identical to the general solution (31) in the small-θ region. The time dependence of the azimuthal angle is given implicitly as the solution of (A3b), which involves the first kind incomplete elliptic integral of Legendre's form with first argument (A3d). This form is identical to a large Bond number case shown in Reisfeld and Bankoff 32 , in which an implicit solution for a liquid film flow on a horizontal cylinder is presented via almost the same expression of the first kind incomplete elliptic integral, even though the initial film considered there is assumed to be uniformly distributed on both the upper and lower cylinder. For a given time t and a given location θ, the parameter θ_0 can be solved numerically from the nonlinear algebraic equation (A3b) using Newton's method, and the film thickness h is then calculated using (A3a). To construct a complete outer profile at a certain time, copies of (A3b) with different values of θ and the same value of t constitute a system of independent algebraic equations for the film thickness at the different locations. Note that Newton's method has no difficulty in solving (A3b), because the value of the incomplete elliptic integral can be evaluated using a standard special-function subroutine and the derivative is analytical. If the outer equation (28) is solved subject to the initial condition (16), the parameter θ_0 calculated from (A3b) may become multi-valued. This implies that the characteristic lines may intersect due to the discontinuity in the initial condition and that a shock wave may form. The location of the shock wave (the front location) θ_F can be determined by volume conservation, in which the profiles h_1 and h_b are constructed by separately solving two systems of equations (A3) with the initial functions h_i(θ_0) = 1 and h_i(θ_0) = b, respectively. After the front location is calculated, the front thickness h_F is obtained as h_F = h_1(θ_F, t). For the spherical problem, a similar procedure of the method of characteristics can be applied; the corresponding implicit form involves the second kind incomplete elliptic integral of Legendre's form, whose first argument is identical to (A3d), together with an elementary function. Newton's method for the nonlinear algebraic equation (A7b) and the numerical technique to construct the outer profile are analogous to those of the cylindrical problem, except that the volume formula for calculating the front location θ_F is different, the profiles h_1 and h_b again being constructed by setting the initial function h_i(θ_0) = 1 and h_i(θ_0) = b, respectively. The boundary value problem in the inner region can be solved using a classic shooting method for a given δ 26 . The third order ODE (49) can be rewritten as three first order ODEs, and the initial values should be set at a certain point.
Because the boundary value of (49) is located at infinitely far, an asymptotic equation valid far upstream should be used to set the initial conditions. When ξ ′ → −∞, the uniform layer is only slightly perturbed h ′ → 1 + g ′ The linear ODE for perturbation g ′ is derived from the degeneration of (49) The characteristic equation of (A9) is Thus the upstream limiting solution of (49) is where a is an arbitrary parameter, and b and c are the real and imaginary parts of a conjugate complex root q of the characteristic equation (A10). The initial values of the first order ODEs h ′ , h ′ ξ ′ , and h ′ ξ ′ ξ ′ can be calculated using limiting solution (A11) at a sufficiently far location upstream. Then this initial value problem is solved using a fourth-order Runge-Kutta method. The parameter a is a free parameter in the shooting process and can vary until the downstream condition h ′ → δ is satisfied to high accuracy. Finally the inner profiles can be numerically integrated. The integration step of the Runge-Kutta method should be adaptive because the profile in the inner region and contact region varies sharply, especially for the small δ case. The integration step can be adapted according to the local values of the first order and second order derivatives of the numerical profile. An alternative adaptive function can be expressed as Where ∆ξ ′ 0 is the uniform integration step, α and β are two adjusting factors. The ODE (60) which includes the disjoining pressure terms can be solved using the similar shooting method described to solve ODE (49). But the characteristic equation of the linear ODE which is valid at far upstream is Except that, the shooting steps are identical to that for solving (49). The second order derivative of h in (B1c) is calculated by the typical central difference scheme shown in (B2c). For a given Bond number Bo, a system of quartic equations for the film thickness h k+1 i at the next time step is constructed by substituting (B2b) and (B2c) into (B2a). We use an iterative Newton-Kantorovich's method to solve the nonlinear equations. For axisymmetric evolution equation (20), the conservation form is A similar Crank-Nicolson scheme is used for (B3a), (B3b) and (B3c), the difference is that the sinusoidal coefficient in (B3a) and the additional first order derivative of h in (B3c) which can be discretized using the central difference scheme. For evolution equations (25) and (26) including the disjoining pressure terms, because the derivatives of h do not appear in the disjoining pressure equation (27), the discrete form can be written directly where i is the grid index, N is the total number of the nodal points. The factor α should be calculated using the first grid spacing at a starting point (i = 1), and this spacing is set sufficiently small to capture the refined structure near apparent contact line. The [0, π] domain is split into two segments with a common starting point. The location of the starting point should be specified to implement the distribute function (B5) for each segment. It can be selected using the point which has the maximum rate of change in the whole free-surface profile. 
A function like the denominator on the right side of (A12) can be used as the criterion for the rate of change at a given location. The time step ∆t can also be adapted, according to the formula ∆t = ∆h_max / max|Q_θ| (B7), where ∆h_max is a preset maximum permissible change of the film thickness between time steps and Q_θ is the net flow flux calculated from an explicit scheme that uses the film thickness at the current time step.
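To make the shooting procedure of Appendix A concrete, the following sketch integrates the inner equation in the classic planar form quoted earlier (assuming (49) reduces to h''' = −(h − 1)(h − δ)(h + 1 + δ)/h^3; the constants in the paper's scaled version may differ). The start values come from the linearized upstream solution 1 + a·e^{bξ}cos(cξ) built from the complex characteristic root with positive real part, and the amplitude a is adjusted by bisection until the profile descends onto the precursor rather than running away. The thresholds, the value of δ and the integration window are illustrative choices, not the settings used in the paper.

import numpy as np
from scipy.integrate import solve_ivp

DELTA = 0.05                                   # relative precursor thickness (delta in (49))
KAPPA = (1.0 - DELTA) * (2.0 + DELTA)          # linearization constant about h = 1

def rhs(xi, y):
    h, hp, hpp = y
    # assumed planar form of the inner equation (49)
    return [hp, hpp, -(h - 1.0) * (h - DELTA) * (h + 1.0 + DELTA) / h**3]

def integrate(a, xi_end=120.0):
    """Start from the linearized upstream solution h = 1 + a*exp(b*xi)*cos(c*xi) at xi = 0."""
    q = KAPPA ** (1.0 / 3.0) * (0.5 + 0.5j * np.sqrt(3.0))   # characteristic root with Re(q) > 0
    b_, c_ = q.real, q.imag
    y0 = [1.0 + a, a * b_, a * (b_**2 - c_**2)]              # h, h', h'' from the limiting solution
    low = lambda xi, y: y[0] - 0.5 * DELTA                   # plunged below the precursor
    high = lambda xi, y: y[0] - 3.0                          # ran away upward
    low.terminal = high.terminal = True
    return solve_ivp(rhs, (0.0, xi_end), y0, events=[low, high],
                     max_step=0.05, rtol=1e-9, atol=1e-12)

def outcome(a):
    sol = integrate(a)
    if sol.t_events[0].size: return -1
    if sol.t_events[1].size: return +1
    return +1 if sol.y[0, -1] > 1.0 else -1

# scan the amplitude for a sign change of the outcome, then bisect in log space
grid = np.logspace(-9.0, -2.0, 40)
flags = [outcome(a) for a in grid]
k = next(i for i in range(len(flags) - 1) if flags[i] != flags[i + 1])
lo, hi = grid[k], grid[k + 1]
for _ in range(40):
    mid = np.sqrt(lo * hi)
    if outcome(mid) == flags[k]:
        lo = mid
    else:
        hi = mid
profile = integrate(np.sqrt(lo * hi))
print("capillary ridge peak h'_max ~", profile.y[0].max())
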
2017-07-31T12:32:55.000Z
2017-07-31T00:00:00.000
{ "year": 2017, "sha1": "17dc2198366d0dcb97663f7af3def2923d4d7971", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "17dc2198366d0dcb97663f7af3def2923d4d7971", "s2fieldsofstudy": [ "Engineering", "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
253008961
pes2o/s2orc
v3-fos-license
Study on the Effect of Flocculation and Sedimentation on the Mechanical Properties of Iron Tailing Sand

Tailings sand is the dam-building material of the tailings dam accumulation sub-dam, and its mechanical properties directly affect the safety and stability of the whole tailings dam. With the improvement of mineral sorting technology, the content of fine and ultrafine particles in the tailings gradually increases, which will affect the mechanical strength of the tailings accumulation sub-dam. To improve the settling effect of tailings accumulation in the Chengchao iron ore mine (China), the effect of iron tailings flocculation and settling behavior on the mechanical properties of tailings sand was studied. Polymerized iron sulfate (PFS) is an inorganic polymer flocculant that, compared with traditional flocculants, has a stronger flocculation and adsorption capacity and is widely used in drinking water, industrial wastewater, and domestic sewage treatment. Anionic polyacrylamide (APAM-800) is a polymer compound flocculant (polyacrylamides are divided into nonionic, cationic, and anionic types) and is a flocculant commonly used in mines at home and abroad. Through compound flocculation and settling experiments with different flocculants, PFS, and APAM-800, the effect of the flocculant-dosing scheme on the settling rate and the turbidity of the clarified liquid was determined, and the change in the consolidated shear strength of the tailings under the best dosing scheme was measured. The experimental results showed that the settling rate of the iron tailings was fastest when PFS was used in combination with APAM-800 at dosages of 1.2 g·L−1 and 0.20 g·L−1, respectively. The direct shear test results showed that the internal friction angle of the tailings sand increased by 2.18°, the cohesion increased by 5.814 kPa, and the shear resistance of the flocculated and settled tailings increased, which can effectively improve the safety factor of the tailings sand accumulation sub-dam.

Introduction

Fine grinding in the mineral sorting process can improve the beneficiation of minerals, but it can also cause several problems, such as producing tailings of finer sizes as well as a slow settling rate of these tailings after beneficiation. Tailings are a solid waste discharged after grinding and finishing. As the dam material of tailings dam accumulation, the mechanical characteristics of tailings directly affect the stability of the whole tailings dam. The geotechnical properties of the tailings sand depend on the permeability and fines content of the sand, which in turn affect the undrained strength and drainage rate [1,2] of the tailings sand. With the increase of fine grain content in the sand, the shear strength of the tailings sand will decrease, which will affect the accumulation strength [3,4] of the tailings dam. Fine-grained tailings have low permeability and are difficult to use directly for dam construction because they are not permeable enough to consolidate and obtain the necessary geotechnical stability in a short period. Pretreatment of fine-grained tailings with flocculants and pumping them into tubes or geotextile dewatering bags is a promising technique [5,6] to solve this problem. Enterprises such as the Dahongshan mine, with micro-fine-grained iron tailings [7], and an iron ore mine in Magang, with comprehensive tailings [8], have studied flocculation and sedimentation tests and have achieved better sedimentation results. Xiao et al. [9], Qian et al. [10], and Chen et al.
[11] studied the settling effect of different flocculants on fine-grained copper oxide leaching slurry, ultrafine-grained manganese ore leaching slurry, and wet zinc refining slurry, respectively. Xiao et al. [9] used a composite flocculant to settle a fine-grained copper oxide leaching slurry; the influence of flocculant type and dosage and of slurry concentration on the sedimentation properties was investigated. Qian et al. [10] used different flocculants to carry out flocculation settlement tests on the leaching pulp of electrolytic manganese ore and investigated the influence of the electrical properties, ionic degree, molecular weight, and dosage of the flocculants on the flocculation settlement effect of the pulp. Chen et al. [11] conducted flocculation settlement tests on various kinds of pulp in the wet zinc smelting process of neutral leaching-weak acid leaching-reductive leaching-copper removal-hematite iron removal. According to the properties of the pulp, different kinds of flocculants were selected to optimize the flocculation settlement effect of the pulp, so as to achieve a good liquid-solid separation effect and economic benefits. To improve the flocculation effect, Xiao et al. [12] studied the settling characteristics of iron tailings by compounding inorganic and organic flocculants. Inorganic and organic flocculant combinations are synergistic, which can significantly improve the settlement effect of tailings and provide a reference for the settlement of tailings, especially fine-grained tailings. Gao et al. [13] studied the flocculation and precipitation behavior of filled tailings in detail and improved the thickening technology and paste-filling process. The results show that when the concentration of the tailings slurry is 15%, the dosage of XT9020 is 26.67 g·t−1, the concentration of the XT9020 solution is 0.22%, and the maximum treatment capacity per unit area is 3.26 t·(h·m2)−1. The experimental value is in good agreement with the predicted value. The tailings are the solids discharged as tailing slurry from the magnetic separation process after precipitation, and the composition of the Chengchao iron ore tailings used in the experiment is shown in Table 1. The tailings of the Chengchao iron ore mine belong to the low calcium, magnesium, aluminum, and silicon type, and the SiO2 content of the tailings is 37.73%, which is very different from the SiO2 content of the yellow sand commonly used in building materials. However, quartz is harder than the other mineral components, which further shows that a high fine grain content is not conducive to improving the strength of the tailings. In this study, to investigate the effect of iron tailings flocculation and sedimentation behavior on the mechanical properties of tailings sand, samples were taken at the site of the Chengchao iron ore tailings pond. Flocculation and sedimentation experiments with different flocculants were carried out, and consolidation direct shear tests were conducted on the tailings sand after flocculation and sedimentation to study its mechanical properties, which in turn guided the construction of tailings dam accumulation sub-dams.

Experimental Materials

The experimental tailings were a mixed slurry of fine and coarse tailings from the Chengchao iron ore mine.
Referring to the Geotechnical Test Method Standard (GB/T 50123-2019) [14], a particle size analysis was performed on the tailing sand; because of the small particle size, fractions below 0.075 mm were measured with a laser particle size analyzer and fractions above 0.075 mm by the sieving method. The coarse tailings amounted to 655 t·d−1 with a concentration of 11.52% and a particle size of 0-0.5 mm. Meanwhile, the fine tailings amounted to 2695.98 t·d−1 with a concentration of 27.69%, with particles finer than 0.074 mm accounting for 60%. The concentration of the mixed slurry was 21.73% and the specific gravity of the tailings was 2.65 t/m3. The composition is shown in Table 2. From the results in Table 2, it can be seen that the tailings are fine in size, with a yield of 66.18% below 0.076 mm particle size; the fine tailings have poor permeability and a long drainage consolidation time, which is not conducive to the stability of the tailings dam.

Experimental Agents

Six kinds of flocculants were used to flocculate and settle the tailing slurry. Experimental screening showed that when PFS, PAM, PAC, PPFS, zinc polysilicate, and APAM-800 are used alone, the flocculation effect on the iron ore tailing pulp is only moderate. When polyacrylamide was used to flocculate and settle the tailings, the flocs produced were large and the settling speed was fast; however, the turbidity of the clarified layer was higher [15]. When an iron polymer flocculant was used, the turbidity of the clarified liquid was low, but the settling speed was slightly lower than that of polyacrylamide [16]. Therefore, the flocculants APAM-800 and PFS were chosen for the tailings settling experiment to achieve a better settling effect. The molecular formula of PFS is [Fe2(OH)n(SO4)(3−n/2)]m. PFS has good flocculation performance and wide applicability and has been widely used in urban sewage purification in recent years. The molecular formula of APAM-800 is [CH2CH(CONH2)]m[CH2CH(COONa)]n. It is formed by the polymerization of anionic monomers with acrylamide. At present, the commonly used anionic monomers are sulfonic acid and carboxylic acid monomers. These monomers have high activity, give a high yield after polymerization, and have good thermal stability.

Experimental Methods

PFS was prepared as 0.6, 0.9, 1.2, and 1.5 g·L−1 solutions, to which 0.10, 0.15, 0.20, and 0.25 g·L−1 APAM-800 solutions were added, giving a total of 16 groups. Each time, the same amount of tailing slurry (200 mL) was put into a 500 mL beaker and the same amount (10 mL) of flocculant was added successively with a pipette. For each beaker of tailing slurry, a coagulation test mixer (Shanghai Shimadzu ZR4-6) was used while the flocculant was added, with fast stirring first and then slow stirring. After stirring, the tailing slurry was immediately poured into a measuring cylinder to settle. The volume of the clarified layer was recorded at different settling times; this volume reflects the settling rate of the tailings after the flocculant is added. After settling, the supernatant was taken to determine the turbidity. The analytical instrument was a HACH 2100Q turbidity meter produced by the HACH Company. The instrument uses a tungsten filament lamp as the light source, the range is 0-1000 NTU, and the resolution is 0.01 NTU, in line with the instrument parameters required by the standard.
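The mixed-slurry concentration quoted above follows directly from a mass balance of the two tailings streams, and the settling rate is the slope of the recorded clarified-layer volume against time. A small sketch of both calculations follows; the stream data are taken from the text, while the volume-time readings are hypothetical placeholders.

# mass balance of the coarse and fine tailings streams (values from the text)
coarse_solids, coarse_conc = 655.0, 0.1152        # t/d of solids, mass concentration
fine_solids, fine_conc = 2695.98, 0.2769

coarse_slurry = coarse_solids / coarse_conc       # total slurry mass of each stream, t/d
fine_slurry = fine_solids / fine_conc
mixed_conc = (coarse_solids + fine_solids) / (coarse_slurry + fine_slurry)
print(f"mixed slurry concentration: {mixed_conc:.2%}")        # about 21.7 %, as reported

# settling rate from clarified-layer readings (hypothetical volumes in mL, read every 2 min)
times = [0, 2, 4, 6, 8, 10]
volumes = [0, 24, 49, 71, 90, 104]
rate = (volumes[-1] - volumes[0]) / (times[-1] - times[0])
print(f"average settling rate: {rate:.2f} mL/min")
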
The turbidity readings are taken automatically as a series, so no additional manual measurement or estimation is required [17]. In order to study the shear strength of the tailing sand under different stress conditions, a ZJ strain-controlled direct shear apparatus and a TSZ-3 strain-controlled triaxial apparatus were used for the shear tests. According to the requirements of the Geotechnical Test Code, axial loads of 100, 200, 300, and 400 kPa were applied in the direct shear test, and shearing was carried out at a rate of 0.8 mm·min−1 until failure. Strength indexes such as the cohesion c and the internal friction angle φ were calculated. The flocculants were screened and the best dosing scheme was determined. The flocculants were then dosed according to the best scheme, the settled tailings were drained and solidified, and direct shear experiments were conducted to analyze the effect of flocculation and settlement on the stability of fine-grained tailings ponds.

Tailings Flocculation and Sedimentation Experiment

When no flocculant is added, the tailings slurry settles naturally. The flocs observed under the microscope are shown in Figure 1: the overall color is light, the flocs are mixed with mud and water and are blurred, and almost no floc particles are formed. When the PFS flocculant is added alone, as shown in Figure 2, the alum flowers are generally dense. When APAM-800 is added alone, as shown in Figure 3, the alum flowers are like small snowflakes, and the flocs are larger and cluster with each other. In order to achieve a better flocculation settling effect, the two flocculants PFS and APAM-800 were used together in the experiment. The dosage of PFS was fixed at 0.6, 0.9, 1.2, and 1.5 g·L−1, and 0.10, 0.15, 0.20, and 0.25 g·L−1 of APAM-800 were added, respectively. The volume of the clarified layer and the turbidity of the clarified liquid were measured. The settling curves of the tailings for the various combinations of the two flocculants are shown in Figures 4-7, and the turbidity of the clarified layer is shown in Table 3. The experimental results show that PFS and APAM-800 used together give a greater settling rate and a lower turbidity of the clarified layer than when either is used alone, bringing their respective advantages into play. As can be seen from Table 3, the main reason is that PFS forms various complex ions through ionization and hydrolysis, and these ions can reach the solid-liquid interface around the colloidal particles and fine particles and neutralize the potential-determining ions, so that the total potential decreases. Therefore, the ζ potential decreases, and thus the electrostatic repulsion between the colloidal particles decreases, causing the colloidal particles to destabilize and coagulate. However, if the dosage continues to increase, the surface electrical properties of the colloidal particles will be reversed, and the opposite charge will give the colloidal particles electrostatic repulsion again and restabilize the dispersion [18]. The settling rate of the tailings increases significantly when the amount of APAM-800 is increased from 0.10 g·L−1 to 0.20 g·L−1. The settling rate is highest when the amount of APAM-800 is 0.2 g·L−1, with a maximum speed of 8.87 mL·min−1. The settling rate decreases when the amount of APAM-800 exceeds 0.2 g·L−1, with a minimum speed of 6.89 mL·min−1. APAM-800 is a chain polymer with large linearity and long molecular chains.
The molecular formula is [CH2CH(CONH2)]m[CH2CH(COONa)]n [19]. The molecular structure of APAM-800 is shown in Figure 8. Through the action of electrostatic attraction, van der Waals forces, and hydrogen bonding in its active parts, it exerts a bridging effect on colloidal particles and fine particles: when one end adsorbs a certain colloid and the other end adsorbs another, the colloids gradually coalesce and grow, forming coarse flocs that settle fast. When the dosage is too large, molecular chains may surround the colloidal particles and fine particles and make them stable again, resulting in fine flocs, a slow settling speed, and high turbidity. According to the tailings flocculation sedimentation experiments and the turbidity measurements, with a PFS dosage of 1.2 g·L−1 and an APAM-800 dosage of 0.20 g·L−1 the clarified-layer volume was 157 mL, the settling rate was the highest, and the turbidity was 17.8 NTU; with a PFS dosage of 0.9 g·L−1 and an APAM-800 dosage of 0.20 g·L−1 the clarified-layer volume was 142 mL and the clarified-layer turbidity was the lowest, 15.05 NTU. The complex ions generated by the PFS agglomerate the fine particles into larger particles through charge neutralization, and the bridging effect of the APAM-800 polymer chains then causes the particles to form large flocs quickly, which accelerates the settling of the fine particles and reduces the turbidity of the clarified layer, thus improving the settling effect of the tailings. In addition, the results show that the volume of the clarified water layer after flocculation settlement is much higher than after natural settlement of the tailings. The accumulated supernatant is about 33 mL after natural settlement, whereas after dosing with the PFS and APAM-800 combination about 140-150 mL can be obtained, that is, about 30% clarified liquid is obtained within 18 min. In practice, extending the settlement time will yield more clarified liquid, so the reuse rate of the clarified liquid can be greatly improved.

Tailings Flocculation Settling Shear Strength Experiment

The slow settling and difficult dewatering of the fine particles in the tailings have a great impact on the stability of the tailings pond. By adding flocculants, the fine particles can be coalesced into large flocs to accelerate the settling of the tailings and the discharge of the supernatant, which is an effective way to alleviate the stability problems of the tailings. The shear strength experiments were carried out with the best flocculant-dosing scheme, namely PFS at 1.2 g·L−1 and APAM-800 at 0.20 g·L−1. After settling, the tailings were consolidated under low stress by discharging the clarified liquid; the specimens were then prepared, and their mass and density were measured, as shown in Table 3. Four specimens were subjected to direct shear experiments at different normal stresses (100 kPa, 200 kPa, 300 kPa, and 400 kPa) at a shear rate of 0.8 mm·min−1 until the specimens failed. The specimens before and after the shear strength experiments are shown in Figure 9. The same method was used for direct shear experiments on the original tailings to measure the shear stress τ_f at different normal stresses σ. Based on the Mohr-Coulomb failure criterion, the data are plotted with σ and τ_f as the horizontal and vertical coordinates, from which the friction angle φ and the cohesion c can be determined, two parameters that are important for the design and simulation of tailings dams.
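The Mohr-Coulomb parameters are obtained from a straight-line fit of the measured shear strengths against the applied normal stresses, τ_f = c + σ·tan φ. The sketch below shows the fitting step; the stress pairs are hypothetical placeholders, the measured values being those reported in Table 4.

import numpy as np

sigma = np.array([100.0, 200.0, 300.0, 400.0])   # applied normal stresses, kPa
tau_f = np.array([92.0, 148.0, 201.0, 259.0])    # measured peak shear stresses, kPa (hypothetical)

slope, c = np.polyfit(sigma, tau_f, 1)           # tau_f = tan(phi) * sigma + c
phi = np.degrees(np.arctan(slope))               # internal friction angle, degrees
print(f"cohesion c = {c:.1f} kPa, friction angle phi = {phi:.1f} deg")

Running the same fit on the tests with and without flocculant gives the increases in φ and c discussed below.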
The shear strength results with and without flocculant are compared in Figure 10, and the friction angles φ and cohesions c are given in Table 4. From the experimental results, it can be seen that the density of the tailings is significantly reduced after adding the flocculant, which indicates that the porosity increases; it can be concluded from the results of the direct shear experiments that the shear resistance of the tailings is greatly improved after adding the flocculant. After adding the flocculant, the friction angle of the tailings increases by about 2.18°, which indicates that flocculation can improve the frictional properties of the tailings and thus improve the stability of the tailings pond. The cohesive force increased by 5.814 kPa, indicating that the flocculant increased the attraction between the mineral sand particles, made the cementation of compounds in the mineral sand more obvious, and enhanced its ultimate resistance to shear failure. These results show the effect of flocculant addition on the tailings slurry parameters: it improves not only the dewatering of the tailings slurry but also its shear resistance, which is of great significance to the stability of tailings ponds.

Conclusion

Iron tailings with a fine particle size, a slow natural settling speed, and a high turbidity of the clarified layer require the addition of flocculants to accelerate the settling rate and reduce the turbidity of the clarified layer. Through the experimental screening of six conventional flocculants (PFS, PAM, PAC, PPFS, zinc polysilicate, and APAM-800), the effects of PFS and APAM-800 were found to be significant. In the experiments, the best scheme for flocculation and settlement of the iron tailings was determined, the changes in the consolidated shear strength of the flocculated and settled tailings were measured, and the effect of flocculation settling on the dewatering and consolidation performance and the geotechnical strength of fine-grained tailings was studied. (1) Organic polymer flocculants can significantly improve the settling rate of iron tailings, and inorganic flocculants can significantly reduce the turbidity of the clarified layer. PFS at 1.2 g·L−1 and APAM-800 at 0.20 g·L−1 have a synergistic effect in improving the settling rate and reducing the turbidity of the clarified layer, which is better than using either flocculant alone. (2) Compared with the case without flocculant, the internal friction angle of the tailings after flocculant treatment increases by 2.18° and the cohesive force increases by 5.814 kPa, which improves the frictional resistance, so that the tailings sand and the tailings dam can obtain better geotechnical performance. In addition, after the tailing slurry is flocculated and settled, the clarified layer has a low turbidity, and the settled tailing sand can also be used for mine filling after concentration, which can effectively address the two major safety hazards of metal mine tailings dams and mining areas.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study received funding support from the Special Fund Project for Production Safety of Hubei Province.
2022-10-20T15:46:40.896Z
2022-10-18T00:00:00.000
{ "year": 2022, "sha1": "b4ed4cf0ee870063e87a77d5e61e2db6c8b111e1", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ace/2022/6045436.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "32c42bbecdb0b04bbb73152f2edebad7cd902915", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
260234854
pes2o/s2orc
v3-fos-license
Healthy Eating for Elective Major Orthopedic Surgery: Quality, Quantity, and Timing Abstract Improvements to enhanced recovery pathways in orthopedic surgery are reducing the time that patients spend in the hospital, giving an increasingly vital role to prehabilitation and/or rehabilitation after surgery. Nutritional support is an important tenant of perioperative medicine, with the aim to integrate the patient’s diet with food components that are needed in greater amounts to support surgical fitness. Regardless of the time available between the time of contemplation of surgery and the day of admission, a patient who eats healthy is reasonably more suitable for surgery than a patient who does not meet the daily requirements for energy and nutrients. Moreover, a successful education for healthy food choices is one possible way to sustain the exercise therapy, improve recovery, and thus contribute to the patient’s long-term health. The expected benefits presuppose that the patient follows a healthy diet, but it is unclear which advice is needed to improve dietary choices. We present the principles of healthy eating for patients undergoing major orthopedic surgery to lay the foundations of rational and valuable perioperative nutritional support programs. We discuss the concepts of nutritional use of food, requirements, portion size, dietary target, food variety, time variables of feeding, and the practical indications on what the last meal to be consumed six hours before the induction of anesthesia may be together with what is meant by clear fluids to be consumed until two hours before. Surgery may act as a vital “touch point” for some patients with the health service and is therefore a valuable opportunity for members of the perioperative team to promote optimal lifestyle choices, such as the notion and importance of healthy eating not just for surgery but also for long-term health benefit. Introduction Perioperative medicine deals with the surgical patient from the time of contemplation of surgery to full recovery. In the context of modern-day elective surgery involving the hip, knee, and spine, hospitalization is a short stay focusing on the appropriateness of surgical, anesthetic, nursing, and rehabilitation practices. Therefore, the period before hospitalization arouses the most interest for what concerns patient education on how to best prepare for surgery. This material can be reported in information booklets, delivered during preoperative education sessions, or communicated via online videos. Within these mediums, information is given regarding a range of topics, such as the background to osteoarthritis or low back pain, the technical details of the operation, when to stop routine medications, and what to expect from the hospital stay. 1 However, apart from fasting guidelines before admission, it is the opinion of the authors that very poor technical information on healthy eating choices is provided. This is at a point when patients may be more responsive and motivated than at other times to optimize lifestyle factors, such as diet. The prevalence of orthopedic patients reporting sub-optimal intakes ranges from 50 to 91.5% in the perioperative period. [2][3][4] As we become aware of the critical role of improving lifestyle choices for everyday health but also for recovery from surgery, [5][6][7] it is relevant both to patient experience and care quality to integrate the journey towards surgery with the dietary notions that help to fulfill the needs of energy, nutrients, and fluids. 
A healthy diet is, in fact, a prerequisite for the effective application of nutritional optimization methodologies that should be applied only to patients who already meet their metabolic requirements. To inform patients, and to remind healthcare professionals of what is meant by healthy eating, the technical concepts relating to food and nutrition must be translated into terms of common use avoiding simplification of knowledge and ambiguity. 8 The educational component should promote patient self-management, 9 using practical examples to help clear the general notions. In view of sustaining the nutritional requirements before surgery and during rehabilitation, our goal is to bring together the principles of healthy eating into the perioperative practices for patients undergoing elective major orthopedic surgery as a starting point for designing successful nutritional optimization strategies. In order to do this, educating health professionals is the first step, and our intention with this work is to describe a practical and usable categorization of foods based on the nutritional intent, to which the general concepts of quantity, quality, food timing, and preoperative eating can be integrated with effectiveness. The Principles of Healthy Eating Nutritional Use Food can be categorized according to different properties, such as the origin (animal or plant-based), processing (whole or refined), acid-base load, glycemic index, or antioxidant capacity. 10 The classification based on the main nutritional value lets us distinguish five groups of foods: starchy carbohydrates, the five sources of proteins, fats (oils and spreads), micronutrient-rich fruits, and fiber-rich greens and vegetables. It is essential to inform patients about these distinctions to transcend local misconceptions or culinary customs that might in the long run unbalance the diet. 11,12 Although tubers are vegetables, they are to be considered as an alternative to cereals and derivatives and therefore as a primary energy source (Table 1). Although green beans are legumes, they are not to be considered a source of protein but as low-protein vegetables, having an edible pod and high-water seeds. Essentially, each meal must be based on cereals, derivatives, or tubers together with a source of proteins, fats, and fiber. It may be useful to suggest patients introduce certain food categories only at certain moments of the day to aid equal distribution of energy and macromolecules. Fresh fruit, nuts, low-salt low-sugar baked goods, or yogurt are ideal snacks in mid-morning and mid-afternoon. 13 Although the guidelines or suggestions regarding healthy eating habits have long been around as a means of disease prevention and quality of life improvement, they are not consistently followed by patients or applied by institutions. 14 Moreover, it is not uncommon that some patients often wish to turn to plant-based products or exotic foods assumed to be ethically superior or healthier. 15 Most of these foodstuffs hardly find a place in the daily diet because they deviate from the nutritional criteria that recognize the five food groups, thus misleading even the most attentive patient. Examples are the plant-based alternatives of milk (eg, oat milk), the flours from legumes (eg, chickpea flour), oily fresh fruits (eg, avocado), soy products (eg, tofu, tempeh), or sea vegetables (eg, algae). 
Although these foods might not be a wrong health choice, patients should be advised that it is first of all important to know how to integrate the five distinct food groups into the usual diet.

Food Quantity

The concept of food quantity encompasses the daily, weekly, and seasonal requirements together with the portion size. Meeting daily needs is essential for the maintenance of a good nutritional status. 16 Different quantitative dietary reference values (DRV) are used by the European Food Safety Authority to issue dietary recommendations, such as the average requirement (AR) of healthy individuals, the population reference intake (PRI) for most healthy people, the adequate intake (AI) when AR is not available, and the reference intake (RI) for macronutrients. 17 The healthy eating instructions for preoperative education should feasibly adhere to these dietary standards as well as consider that distinct daily values are recommended in older and surgical populations. 18 In the context of orthopedic surgery, we can consider an appropriate requirement of 27-30 kcal of energy and 1.2-1.5 grams of proteins per kg of body weight (for a 70 kg patient, for example, roughly 1,900-2,100 kcal and 84-105 g of protein per day). 19,20 The abovementioned categorization of foods based on the primary nutritional function allows the definition of quantitative standardized portions, since similar amounts of macronutrients are provided by food products of the same group, with water content being the main source of variation. Regularly, certain foods must be consumed more than once a day or a week to fulfill the requirement of certain nutrients. For example, it is recommended to consume at least five portions of a variety of fresh fruit, greens, and vegetables every day. Moreover, the seven lunches and seven dinners per week are suggested to comprise a portion of legumes 3-4 times, seafood 3-4 times, eggs 2 times, dairy products 2 times, and meat 2-3 times. 21,22 Requirements depend also on body-environment interactions, and it may be necessary to adapt intakes to unfavorable conditions (eg, increased vitamin D in the less sunny seasons). If there is not enough time prior to surgery, nutrient deficits may be corrected through dietary supplementation after discharge. 23,24

Food Quality

Despite being inherently based on quantitative attributes, food quality is the aspect that bestows the "healthy" connotation on a diet. 25,26 It combines the avoidance of unhealthy food components with the pursuit of nutrient balance. A high-quality diet adheres to the suggested dietary target (SDT) and does not exceed the tolerable upper intake level (UL). The SDT for certain food components, like sugars, aims at preventing chronic degenerative diseases, 27 while the UL refers to the maximum intake unlikely to pose a risk to health, which is useful in case of use of dietary supplements. 28 Nutrient balance is ensured if the patient meets the requirements daily, varies the food choices each week, and follows food seasonality. Patients should be taught to read food labels to choose less processed food and educated in conservation, handling, and cooking techniques. In Table 1 we summarize these concepts. It is important to consider that the nutritional profile of a meal is highly dependent on the way the raw food was stored and cooked, especially as regards the vitamin content. For example, prolonged boiling in unsalted water causes mineral depletion of greens and vegetables. 29 Following seasonality and weekly variety is also a means to sustain a healthy diet.
Each day and week, patients should alternate animal with plant-based protein sources 30 as well as vary the consumption of fruits, greens, and vegetables. Practical advice is always the easiest to grasp, such as changing the color of the fruit, greens, and vegetables at every meal. Each season, the patient should prefer local and seasonal products. It is, in fact, important to recall that food quality depends not only on the geographical location and the season of growth and harvest but also varies according to the subspecies of plants, the cultivar and ecotype, the chemotype, soil and nourishment, environmental impact during plant growth, the weather and climate changes, and the agricultural practices. 31,32

Food Timing

Food is a potent "zeitgeber" (time cue), being capable of acting as a circadian time trigger. 33 The time at which a meal is eaten, the order of nutrients ingested in the same meal, and the distribution of intakes are to be considered the time variables of feeding. Regardless of the composition and number of meals, patients should be advised to eat at fixed times to avoid circadian misalignments. 34 Moreover, meals should not be consumed too close to physical activity or bedtime. 35 The decrease in appetite and food intake with aging (ie, geriatric anorexia) may expose older patients to inadequate amounts of food. Greens and vegetables boost meal volume, stomach stretching, gastric mechanoreceptor activation, vagus nerve signaling, and early satiety. Older adults might therefore be instructed to prefer single dishes that combine carbohydrates with proteins, avoiding eating vegetables at the beginning of the meal. Energy intake should be high in the morning to sustain the waking hours, and progressively reduced during the day. 36 Proteins should be evenly allocated over the main meals to provide a steady stream of blood amino acids, 37 a concern that is most significant for patients with sarcopenia. The right timing of both energy and protein sources is in fact one of the factors that can positively influence the maintenance of the nitrogen balance. 38 Finally, biochemistry reminds us that the bioavailability of some nutrients varies in the presence of other food components. For example, it is wise to inform patients that the iron in plants is poorly absorbed unless it is consumed with L-ascorbic acid (eg, freshly squeezed lemon juice should be added). 39 These tips can make a difference in patients with age-related malabsorption. 40 The timing of food is often conditioned by prescribed medicines, which are to be taken at certain times on a full or empty stomach or at some distance from meals, and by surgical, anesthetic, and nursing practices. In any case, it is important that any dietary advice, albeit generalist, always comprises detailed instructions for what concerns the last dinner before surgery and the eating behaviors on the day of admission.

Healthy Eating Prior to Orthopedic Surgery

Dietetics is the science that studies the elaboration of diets with common or special foods, the administration of food through tubes connected to gastrointestinal districts, and the supply of nutritional factors via parenteral routes. Nutrition is the science that focuses on the biological processes that allow or condition the growth, development, conservation, and reintegration of physical and energy losses.
Considering these disciplines in the context of major orthopedic surgery, eating through diet is the determining condition of a balanced nutritional status, which is a set of measurable parameters related to health, performance, and consequently the suitability for surgery. It is not uncommon for the patient to ask what the most appropriate food is to eat in preparation for surgery, when the last meal should be, or whether any dietary supplements may be of use for a better recovery. In building a healthy eating strategy, it is primarily important to convey the concepts of quantity, quality, and timing of food, possibly through a standard dietary plan contingent on the technical skills of the perioperative team, in order to simplify the integration of theoretical concepts into the patient's everyday routine. Subsequently, it is vital to give specific indications for the day of surgery for what concerns eating and drinking practices. Current enhanced recovery after surgery (ERAS) guidelines do not specify what is meant by a "6-hour fast for solid food" or what "clear fluids" are allowed until 2 hours before the induction of anesthesia. 41,42 The last meal could be the dinner of the day before if the surgery is early in the morning, breakfast if the surgery is late in the morning, or lunch if surgery is in the late afternoon. We argue that the last meal should be light, generally lighter than usual, conceivably consisting of a source of complex carbohydrates and proteins, such as an egg with bread or yogurt with cereals and fruit. It is also reasonable to recommend that a glass of water be preferred, although it is also possible to drink a glass of fresh juice extracted (pulp-free) from a maximum of 50 g of fruit or vegetables. 43 Another aspect worth discussing with the patient in the context of surgical preparation is the use of dietary supplements, for which there is limited evidence associating their use with reduced length of stay and accelerated return to functional mobility. 44 Although not pertinent to our argument, we can reasonably assume that most patients do not require supplements if they follow a healthy and balanced diet before and after surgery. Their use must in fact be limited to malnourished patients or to cases when there is not enough time to correct the deficits through dietary strategies. In addition, some food components found in dietary supplements are known to interact with medications, hence affecting therapeutic efficacy. 45 Therefore, we suggest abstention rather than improper use or use with no purpose. On the day of surgery, it is reasonable to expect that at least one meal might be skipped and to plan the time for oral refeeding accordingly, to ensure a functional stream of energy and protein. On the one hand, it is imperative to resume oral feeding as soon as possible; on the other hand, it is not prudent to provide meals that require long digestion times. In the absence of postoperative diets planned by the dietetic unit, water balance and high caloric density meals in small volumes should be favored. If no complications arise, it may be possible to resume normal feeding 4 hours after surgery. 46 We suggest that patients continue to follow healthy eating indications after the operation in order to balance the nutritional needs during the rehabilitation process and, most likely, for good health in later life.
In fact, it is not uncommon for patients to experience a loss of weight or lean mass after orthopedic surgery, 47 which might accelerate the natural physical decline observed early in subjects with frailty. 4,35 In Figure 1 and Supplementary Figure 1, we have reported two exemplary healthy eating plans, for a woman and a man, respectively, to guide health professionals toward the applicability of the basic food principles. One way to summarize the patients' needs, answer their questions, help the transition from hospital to home, and anticipate the barriers of the first days after discharge is the use of an information booklet (Supplementary Material), whose acceptability and effectiveness should be evaluated with quality-improvement initiatives. Of note, dietary counseling combined with exercise therapy in the context of prehabilitation 48 and postoperative rehabilitation 49 of orthopedic patients could reveal the most satisfactory outcomes.

Conclusion

In summary, the hospitalization of patients undergoing elective hip, knee, and spine surgery is short, with perioperative processes focusing on the principles of enhanced recovery after surgery (ERAS). There is a growing interest in prehabilitation techniques, and nutrition is one of the three pillars of this concept. There is a clinical acknowledgement that the promotion of healthy eating should be championed by all members of the multidisciplinary team, along with the fact that patients who follow a healthy diet are more likely to have an enhanced recovery than patients who eat poorly. Whilst not unreasonable, there is currently no evidence to sustain this argument. Nevertheless, we should take advantage of the teachable moment between the contemplation of surgery and admission. Scheduling a patient for major elective surgery can take place weeks or months in advance, and healthy eating education should be part of the preparation. The sooner patients start to eat healthily in view of surgery the better, and following the same indications after discharge should have no deadline. In teaching patients about healthy eating, dietitians are able to easily fill this educational role, providing expert advice that adheres to the dietary reference values and is based on our proposed key principles of healthy eating for the orthopedic patient. These include eating a variety of food from all healthy food groups (Table 1), choosing whole, unprocessed products whenever possible, and drinking plenty of water (Supplementary Figure 2). In this article, the eating plans have been designed to ensure optimal amounts of proteins and calories, mainly referring to the foods used in the cuisines of Italy and England. Nevertheless, different countries and ethnicities could mean different cultural heritages. Should the healthy eating indications integrated into the perioperative medicine practices of major elective orthopedic surgery be one for all? Probably not; they will likely need to be adapted to local contexts and food cultures. Healthy eating is not to be confused with the nutritional optimization strategies advocating optimal bodily reservoirs through intakes greater than the requirements but beneficial to the patient's health in view of or under particular circumstances. Nutritional optimization should, in fact, be applied to manage perioperative malnutrition, 50 which refers to any balance deviation, including excess (eg, obesity) and insufficiency (eg, anemia) factors, associated with poor outcomes. 51
Diet therapy as part of the preoperative conditioning process is a valid add-on for specific patients. 52,53 Conversely, by following our simple guidelines on healthy eating, the entire surgical population is likely to benefit, especially in terms of getting fit for surgery, long-term patient satisfaction, and a positive impact on both quality-adjusted life years and economic indicators in health care. [54][55][56][57][58] The accomplishment of future preoperative education might need pioneering dissemination methods, 59 thus exploiting different approaches, as in fact occurs for what concerns the management of anxiety 60 or the physical activity to be practiced. 61
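As an aside for readers who want to turn the per-kilogram targets cited in the Food Quantity subsection into absolute daily amounts, the following is a minimal illustrative sketch, not part of the article's methods: the 70 kg body weight is a hypothetical value chosen only for the example, while the 27-30 kcal/kg/day and 1.2-1.5 g protein/kg/day figures are taken from the text above.

def daily_targets(weight_kg):
    """Illustrative daily energy and protein ranges scaled to body weight."""
    energy_kcal = (round(27 * weight_kg), round(30 * weight_kg))        # 27-30 kcal/kg/day
    protein_g = (round(1.2 * weight_kg, 1), round(1.5 * weight_kg, 1))  # 1.2-1.5 g/kg/day
    return energy_kcal, protein_g

energy, protein = daily_targets(70.0)  # hypothetical 70 kg patient
print(f"Energy: {energy[0]}-{energy[1]} kcal/day")    # Energy: 1890-2100 kcal/day
print(f"Protein: {protein[0]}-{protein[1]} g/day")    # Protein: 84.0-105.0 g/day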
2023-07-28T15:38:19.489Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "be11d5bd81bee4f18f54bc56ff187ae2eac4bf78", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=91428", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ad5603511b3b324b6a6575a538c2a7364bccf49b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
144548602
pes2o/s2orc
v3-fos-license
Library-Press Collaborations: A Study Taken on Behalf of the University of Arizona

BACKGROUND
The University of Arizona Press moved under the University of Arizona Library both physically and administratively a few years ago, echoing a trend amongst university presses: 20 AAUP members now are under the administration of university libraries. To understand the new evolving relationships in scholarly communication, a review of university press and library collaborations was undertaken by the University of Arizona Press and the University of Arizona Library through the Association of Research Libraries Career Enhancement Program (ARL CEP).

LITERATURE REVIEW
There has been much written throughout the years on both the acrimonious and collaborative relationships between university presses and academic libraries. Much of the literature includes either editorials or case studies, with one or two major reviews of scholarly communications and the state of publishing.

DESCRIPTION OF PROJECT
During the course of nine weeks, the ARL CEP Fellow reviewed existing literature, interviewed staff at the University of Arizona Press and Library, and conducted 27 informal interviews with library deans, press directors, and scholarly communications leaders. The interviews addressed the partnership history, structure, motivations, goals and needs, administrative support and budget decisions, key stakeholders, and thoughts on the future of their relationships as well as scholarly communications. Then University of Arizona Library and Press staff were interviewed regarding their perceptions of their roles and each other's roles.

NEXT STEPS
This research report includes findings from the literature review and interviews as well as specific recommendations for the University of Arizona that will be implemented to improve and build relationships going forward.

The University of Arizona (UA) Press was founded in 1959 through the Department of Anthropology and has retained its reputation in the field and a strong regional focus, though it publishes authors from around the world. It produces about 55 books a year with a staff of 13 people. The UA Press wins awards every year and prides itself on a high quality of scholarship and publishing. A few years ago, it was precipitously moved by university administration into the University of Arizona Libraries, both physically and administratively. This situation is not unique to Arizona. The UA Press and Libraries are part of a recent trend in scholarly communications: As of 2013 there were 20 members of the Association of American University Presses (AAUP) that reported through their university libraries and 58 institutions participating in the Library Publishing Coalition (LPC). The relationship between the UA Press and the UA Libraries previous to the move was amicable but distant. The Special Collections Department in the Libraries and the UA Press collaborated on events with success, as both are focused on Southwestern subjects and have good partnerships with local agencies both in Tucson and in the wider region. However, the relationship between the Libraries and the Press was not clearly defined both before and after the move. There was a lack of clarity of roles and communication between the two organizations, which remained functionally independent. This led to uncertainty and anxiety, particularly on the Press side, which was not unfounded considering the unusually high number of university press closings over the last decade.
There were friendly conversations about possible projects through the Library's Scholarly Publishing and Data Management team, but there was some trepidation in planning for the future since Dean Carla Stoffle was due to retire. In order to better understand the trend of new evolving relationships, a review of university press and library collaborations was undertaken by the University of Arizona Press and the University of Arizona Library through the Association of Research Libraries Career Enhancement Program (ARL CEP). The goal was to gain a better understanding of past and present collaborative relationships in order to inform the future relationship and collaborations between the UA Press and Libraries.

METHODOLOGY
During the course of nine weeks, the ARL CEP Fellow reviewed existing literature, interviewed staff at the University of Arizona Press and Library, and conducted 27 informal interviews with library deans, press directors, and scholarly communications leaders. The interview questions were developed by exploring common themes in academic literature regarding library-press collaborations in the past and took the bulk of their inspiration from the Ithaka report "University Publishing in a Digital Age" (Brown, Griffiths, & Rascoff, 2007). The questions also relied heavily upon the input and interests of Kathryn Conrad, the Director of the UA Press; Jeremy Frumkin, the Assistant Dean for Technology Strategy; and Dan Lee, the Director of the Office of Copyright Management & Scholarly Communication. The interviews addressed the partnership history, structure, motivations, goals and needs, administrative support and budget decisions, key stakeholders, and thoughts on the future of their relationships as well as scholarly communications. Not all questions applied to all the respondents, as each library had a different relationship with each press. The questions asked at the University of Arizona did include similar elements but were different for the obvious reason that there was not yet ongoing collaboration between the Press and the Libraries. Rather than examining programs, the interviews were an attempt to determine current perceptions of the library and the press and ask the staff to consider current challenges and future steps. Both sets of questions are available in Appendix A.

LITERATURE REVIEW
In 1995, Colin Day, then Director of the University of Michigan Press, advocated for change in his article "The Need for Library and University Press Collaboration." His essay asked readers to look beyond budgetary issues and see the interdependence of libraries and presses as part of a system. He asked for a higher-level solution, writing that "[it] is hard not to wonder if things are organized sensibly when two entities owned by the same institution - the university - are each pursuing policies that make life more difficult for the other." Unfortunately, the environmental factors that have made lives difficult for libraries and presses have only intensified in the past two decades. The change factors and ensuing tensions behind these new structures and roles are well documented. High journal prices for electronic formats, consequent low sales for print monographs, a poor economy, advances in technology, and the evolving habits of scholars have all led to slashed budgets for both libraries and presses.
The word "crisis" has been used so often that it has lost its meaning, and there is resentment and defensiveness amongst publishers and libraries, particularly when there is lack of dialogue between specific parties. In his article, "Library and University Press Integration: A New Vision for University Publishing," Richard Clement (2011) of Utah State University detailed the strategies that presses have employed in order to adjust to shrinking monograph sales, and concluded that it is not enough, that presses are still in jeopardy because they stand at the margin rather than the core. He advocated for library and press partnerships, stating bluntly that, University libraries, unlike most presses, stand at the core of essential programs and at the center of the university's mission. While library budgets have been cut, and librarians periodically contemplate potential marginalization, there is very little risk of libraries being eliminated. From a press perspective, libraries look to be good partners that can protect them politically and financially and help move them to the center. From the library's perspective, a press offers an obvious expertise in editing and publishing, and, in particular, the production of a peer-reviewed product with an established reputation, an imprimatur. But most significantly, a press brings new pathways for interaction with faculty and engagement with the creation of scholarly content. (p. 12) This concern for closer engagement with the faculty and alignment with the "center" of an institution (i.e., an institution's strengths and priorities) is perhaps the most common theme amongst advocates for library and press partnerships (Brown, Griffiths, & Rascoff, 2007). It speaks to the distance publishers have traveled from their original founding purpose of publishing the scholarly output of their universities, although there are many presses that have retained their regional strengths. Collaborative projects with libraries have therefore allowed presses an opportunity to better maintain the balance between supporting local efforts and being seen as vanity publishers that are biased towards their institutions. Purdue University Press (Watkinson, Murray-Rust, Nesdill, & Mower, 2011), Penn State University Press (Eaton, MacEwan, & Potter, 2004), the University California Press (Greenstein, 2010), Georgetown University Press (Alexander, McCoy, Salisbury, & Brown, 2011), and the University of Michigan (Courant, 2010) are just a few examples of press and library efforts that address this alignment need by either creating new works that are of value to their institutions or making available previously published works to the greater public. Programmatic collaboration has always been a part of library partnerships with other entities on campus, including publishers. However, the same environmental factors that have put pressure on university presses have also put pressure on libraries. The development of institutional repositories, digital archives and curation (Choudhury, Furlough, & Ray, 2009), and now faculty journal hosting and data management services has led to a shift in focus, from traditional collection development and access to library distribution and a more active role in the research process (Armstrong, 2011) as the scholarly landscape itself changes (Smith, 2009). 
New forms of dissemination and scholarship itself have brought libraries, presses, scholars, and administrators to rethink the future of scholarly communication (Brown, Griffiths, & Rascoff, 2007). As libraries experiment with new forms of scholarly material and output (Mullins et al., 2012), the university press is an obvious resource for publishing expertise as well as legitimacy (Butler, 2013). This is particularly relevant as institutions look toward open education resources (Withey et al., 2011) and open access publishing as part of their mission (Anderson-Wilk & Kunda, 2012).

INTERVIEWS WITH SCHOLARLY COMMUNICATION LEADERS: REASONS FOR COLLABORATION
The interviews conducted over the course of nine weeks for the most part reflected the published literature. Of the 27 scholarly communications leaders interviewed, 13 were library professionals and 14 were publishing professionals. Though many of the presses reported administratively through their libraries, this did not preclude or require active collaboration. On the other side, there were several presses that did not report through their libraries but had a working cooperative relationship. Many of the people interviewed mentioned the need for alignment with the core strengths of the parent institution as a reason for working together. The scholarly mission of making academic work available to the public is seen as a common goal of both the library and the press, as both help scholars as producers and consumers of content. According to one library dean, press and library collaborations "bring two very important players in the scholarly communication ecology together both physically and organizationally." Another publisher commented:

"I certainly see every reason to be open to the interaction. I would recommend understanding and respect of…the particular parts of the mission. The recognition that ultimately we're in the same business and that in the process there are mutual gains to be had."

This sense of belonging to a common scholarly ecosystem was a common theme for many press directors and several library deans, who expressed as a priority the audience beyond the academic community of their institution. The status of many universities as land grant institutions, and subsequently their directive to serve the public at large, is taken seriously by libraries and presses. This is most clearly shown by the many institutions that participate in open access publishing related to regional issues and history: Northwestern University Press, Penn State University Press, and Purdue University Press are just a few of the publishers that cited their land grant status as a reason for making their work available online for the public. Some of these digital offerings resulted in print sales, but that was generally not the main goal for a digitized backlist or online journal. Most publishers saw the digitization of their backlist as a way to keep important scholarly works alive, the proof of which is evidenced by the number of downloads. This makes authors happy, commented one publisher, who went on to describe the surprising number of local and international downloads that have gained a new audience for works that would otherwise have disappeared. Another reason for cooperative digital publishing is to have the means to experiment with emerging forms of scholarly publishing.
As previously mentioned, many presses have cut costs to the point where they simply do not have the staff and funding to experiment beyond their traditional roles without the support of the library. One publisher explained that their collaboration has been a positive one because "there are things that may or may not be critical in the future, presses have to think about what they spend and hope they get it back someday," and the library is a partner that can experiment and serve as a digital lab. Many libraries provide support in both staff time and technology infrastructure in order to digitize, produce, and host projects in which they already have interest. One publisher noted that their "Press has a focused mission, and are a little more conservative…. [There's] a little anxiety about the future, and we haven't had resources. Now as a part of the library, we are in a position to take more risk." Admittedly, quite a few libraries have engaged in publishing projects independent of their university presses, usually through repository-based journals. Often library publishing efforts are less formal (gray literature, conference proceedings, data sets, and the like) and done independently rather than under a joint imprint or active sponsorship of a press. However, both university presses and academic libraries expressed the desire to engage with scholarship as it evolves, to experiment and create new working sustainable models of publication and access. One interviewee noted that her organization was "imagining a day when [we] can make content more digitally readable" beyond text and pictures to a more interactive experience. In fact, this sort of exploration was seen by many as a necessary action in order to be active players in the scholarly communications landscape. Digital offerings are not the only partnerships, and in fact, one publisher commented on concerns regarding the digital divide and how a move toward online-only access would impact the public. For presses that are structured administratively through their libraries, digital project-based partnerships are an extension of the office overhead and IT support that is funded through the library budget. Several presses have benefited from a library-based new reporting structure because their human resources and IT functions are now handled by the library. In one instance, a publisher recounted how the change in reporting structure resulted in an upgrade of their offices (located in a historic house) to the current century with new wiring, plumbing, and technology. The library dean in this case pushed for the funds from the university administration and hosted the press staff in the library during construction. Other examples include development and fundraising: One library created an endowed internship for the press, and another library has partnered with the press and an academic department on campus to apply for a grant together. According to one interviewee, a guarantee (or strong consideration) of publication can be a determining factor for grant acceptance, particularly for international grants. For those presses now under library administration, the subsequent sharing of resources such as IT and HR services has made both the library and press more efficient and opened possibilities for experimentation. However, these benefits also speak to the lack of support and advocacy for the university press within its parent institution.
The press staff's creative editorial and marketing expertise represents a valuable skill set, yet these assets are not fully understood. Most presses are minimally supported by their parent institutions but are for the most part financially self-sustaining, operating at maximum scholarly benefit for minimum dollar cost. Unfortunately, this academic output in relation to fiscal conservatism is not always recognized by institutions. In one case, a university made moves to do away with its press even though the press published strongly in regional materials and demonstrated solid profit throughout recession times. This lack of understanding of the value of the university press is perhaps due to the fact that historically the press has existed outside the bureaucratic structure of the university, as the press is not an academic department. One publisher explained that the move had been a positive one because "what the Press really needed was financial support and a strong visible advocate on campus, and both [the current] and previous director have been vocal in supporting the Press." In fact, many presses reported that one of the benefits of moving under the library reporting structure has been a seat at the table. Some of the interviewees reported that under the library, presses are considered part of the library strategic plan and therefore included in conversations with the university. Press directors are included in upper-level meetings and serve as members of the library board, and press staff serve on library committees. Of course, a formal reporting structure is not absolutely necessary for collaborative involvement, nor does it preclude a positive relationship. Several presses that did not report through their libraries cited library advocacy on behalf of the press as one of the most important and helpful results of a positive relationship. However, one library dean with administrative oversight of a press commented, "libraries and presses are coming together but there's still some tension there. I think librarians are still naïve when it comes to what it takes to publish and presses are narrow in their definition." Like the University of Arizona, there were a few people who indicated that even though their press reported through the library, they had minimal interaction with the library. One common theme was that there are still major differences that are unlikely to be overcome, since they are rooted in different business models and therefore different practices and philosophies.

INTERVIEWS WITH THE UNIVERSITY OF ARIZONA
At the University of Arizona (UA), librarians and press staff were interviewed on the nature of publishing and the relationship between the library and the press. The question, "What does publishing mean to you?" produced some measured responses from the larger scholarly communications community (many felt it was a loaded question). In contrast, the responses at Arizona on both the library and press sides were surprisingly similar. Both librarians and press staff agreed that scholarly publishing was peer reviewed and went through a process of editing, developing, and marketing that added value and authenticity. Librarians did have a broader view of what could be considered publishing and felt that open access was important, but were also very aware that costs were involved and wanted to know more about the business models that enabled the UA Press to be profitable.
The UA Press and UA Library demonstrated little knowledge about each other despite being housed in the same building. It was recognized by librarians that the UA Press publishes in tandem with the strengths of the university, and therefore acts as a leader in the region and its related fields (such as border studies, anthropology, and planetary sciences), but the perception from the Library is that the Press is small in scope and size. The UA Library is recognized for its service to the campus and community and its strong national reputation as an ARL library, but the Press has little familiarity with the Library's actual responsibilities and ongoing projects. This lack of understanding has led to a lack of trust and collaboration despite the desire to experiment and do more in scholarly communications. While this could be attributed to the generally hostile climate between libraries and presses, only Kathryn Conrad, the Press Director, was aware of this hostility, and the librarians were quick to say they respected the Press. The lack of trust instead seemed rooted in the fact that the Press was not consulted in its move to the Library, and there had been no discussion as to how that move impacted it. The status of the Press within the Library was not clear. This uncertainty, coupled with a busy publishing schedule, meant that conversations on the relationship between the Library and the Press had only taken place recently. One of the things that emerged from interviews with the Library and Press staff was the lack of infrastructure and staff on both sides. The Press staff felt that they would like to innovate and do more, but simply did not have the time, as they were already so busy with their regular duties, having lost several full-time staff in the past. The Library had also made cuts in staff that were not replaced, and did not have the time and technology knowledge in-house to move forward beyond current institutional repository and basic journal hosting services. In fact, only recently, in the fall of 2014, did the University of Arizona Libraries advertise for positions that had long been needed. Generally, the three big needs identified by Library staff were time, technology capability, and outreach to faculty.

ASSESSMENT OF FINDINGS
The importance of relationship-building cannot be overstated. The influence of personalities and a positive relationship between the library and press was the most commonly cited reason and recommendation for a successful collaboration. More than one publisher commented that they had a very good relationship with their library for now, but that could change in the future; an element of caution pervaded all interviews due to past animosities. Though the library and the press may share mission goals of high quality academic research and output, there are large cultural and structural differences between the two that need to be bridged. For example, while there is often administrative and resource sharing between libraries and presses, for the most part the budgets remain separate. This is because, in the words of Patrick Alexander at Penn State University Press, libraries are given a bucket of money to spend, while presses are given a bucket with a little bit of money and told to fill the rest. This long-established difference in business models was the most cited reason for cultural differences between libraries and presses.
These economic and idealistic cultural differences have perhaps expressed themselves most loudly around the issues of pricing and open access. In fact, the question of open access met with the most variation in response. One press director called open access a tool amongst others, and Patrick Alexander bluntly stated, "Open access is not a business model. It's a philosophy. The reason open access works in the sciences is the sciences have money and the humanities don't." In contrast, Bryn Geffert of Amherst College maintains that open access publishing is the solution for how to connect needed material to readers that traditional models of publishing cannot reach. Despite these opposing views, there is not a simple library-versus-press divide, as it is clear from both case examples and conversations that press directors are not opposed to open access. One press director noted that working with academic librarians was actually easier than working with other partners because "libraries understand that digital costs something," and several directors expressed that they would like to have their publications available online through either open or hybrid access. The issue is again a cultural one, as presses are concerned about filling the metaphorical bucket with money, particularly since for many, sales are a marker of value. However, there are other ways of measuring value, such as downloads and citations. The concerns for both publishers and librarians are more practical than philosophical, namely 1) how open access would be funded and 2) how quality would be maintained. This issue of sustainability was one that came up often, and while the published literature on library-press collaboration features many successful projects, conversations revealed that some of these successes are one-time projects and some lack the funding and infrastructure for sustainable expansion. In the words of Kathryn Conrad of the University of Arizona Press, "Open access is almost a red herring. The goal is to provide as much scholarship at the highest quality possible in sustainable way…We have to be open to new models and new business models, but we have to stand up for what we believe standards should be." This emphasis on standards for publication was most present when people were asked the question, "What does publishing mean to you?" The response was inevitably a measured one, with the words "continuum" or "spectrum" used to describe everything from blog posts to traditional peer-reviewed monographs. There was often a distinction made between traditional scholarly publishing and more casual forms of what many called "dissemination," whether to establish traditional publishing as "real" publishing or to make the argument that in this era all forms of public dissemination could be called publishing. This semantic debate is likely to continue as new forms of scholarly communication advance their efforts to establish standards of quality control.
RECOMMENDATIONS FOR THE UNIVERSITY OF ARIZONA
While academic presses and libraries have a fundamental difference in business model, taken all together it was clear from the interviews that there were quite a few similarities in mission, in their positions within the institution, and in value, as shown by the table below.

Academic Libraries and University Presses both…

Mission:
- Believe in high quality academic research and output
- Make academic work available to the public
- Serve consumers and producers (scholars and researchers)

Institution:
- Come out of and serve academic institutions
- Are feeling pressure from institutions to prove their value
- Have day-to-day operations that are not clearly understood by the institution

Value:
- Are positioned to offer unique expertise
- Are positioned to offer and act on big-picture thinking
- Are positioned to be leaders in scholarly communications

There are several areas in which the UA Library and Press can work together for mutual benefit. The first is to share expertise with each other. The Press would like to know more about metadata and discoverability, and the Library would like to know more about business models. This is knowledge that can be shared with each other through workshops or meetings. The second is to use their shared expertise in order to market their value to the campus. For example, the Press can connect faculty to librarians as valuable resources. The Press and Library together can investigate campus interest in different forms of publishing in order to take initiative and establish themselves as expert resources. Programmatic partnerships, like a publishing panel for graduate students or author speakers during Open Access Week, are also ways in which the Library and Press can provide value, work together, and market themselves. Thirdly, the Press and Library can advocate for more infrastructure in staff and technology. At the time of this project, the UA Library was going through a time of change, as they had recently hired a new dean. While tumultuous, such change is an excellent time to determine areas of mutual need and request those resources that can be shared. For example, due to the strong regional focus of the Press, digitizing the backlist for the Library's institutional repository and possible print-on-demand sales would provide value to the community both on campus and throughout Arizona. Lastly, the Press and Library need to build relationships and determine their identity in relation to each other. Since the Press has an identity that is grounded in campus strengths and high quality, the Library should be careful not to dilute that brand. Instead, the brand can be leveraged in order to initiate new opportunities such as grant-based or collections-based publishing projects. The positive attitude of librarians toward the Press indicates that the Press should see the Library as a resource, an advocate, and an opportunity to be more involved in the life of the campus.

CONCLUSIONS
In February 2014, the Association of American University Presses (AAUP) and the Association of Research Libraries (ARL) released a joint study on the very topic of library and press collaborations. Many of the conclusions echoed those of this study: more presses were under library administration, most remained functionally separate, and libraries are publishing more but differently. The report revealed some tension regarding the quality of library-published efforts. One press director commented that, "Libraries are well-suited to create and preserve free, online materials.
They are rarely suited to engage in commerce, or in editing, design, and printing" (AAUP Library Relations Committee, 2014, p. 20). However, for the most part the tone was conciliatory, as more institutions move towards partnerships. One of the largest motivating factors for collaboration that is missing from both the published literature and from the interviews conducted through this study is scholar habits. Monograph publication is often necessary to the tenure track process, but this does not address attitudes toward publication from the author side. According to Ithaka's 2012 Faculty Survey, "less than one in five respondents across disciplines strongly agreed that their ability to share work directly with peers has made scholarly publishers less important, with almost half of respondents strongly disagreeing; this brings into question the rhetoric of decline in publishing" (Housewright, Schonfeld, & Wulfson, 2013). It is clear that the peer review and editorial process is still highly valued by scholars. Publishers should be more vocal about their role in this process, both as providers and as advisors to their libraries, particularly if there is a partnership for an online imprint or journal. Also of note is that published conference proceedings rank above scholarly monographs in how scholarly research is shared, indicating that they should be a target area of growth for publishing institutions. Another important finding, though not a new one, is that scholars publish most frequently in the scholarly communication formats that they themselves read. While influence varies by subject area, the internet era has been democratizing for the dissemination of information in that anyone now has the power to read and make public their thoughts without going through a library or a publisher. Many young scholars now operate in different modes of information-gathering and discussion. Rebecca Kennison of Columbia noted that the scholarly communication process "used to be a tricycle: creator, library, publisher." This has changed because technology allows the creator to publish and disseminate on their own, since a "unicycle is totally fine at the end of the day. Not very stable but simple… The creator of the work is really the important one. Lots of people really like bicycles. If we can really sort out how we can be that other wheel and what the creator wants." This shift has already made itself felt in sometimes awkward ways, 1 and both publishers and libraries need to think about what this means for their roles in the shifting landscape. A more casual means of scholarship that lives outside the traditional ecosystem is valuable, but also brings up again the question of standards, a topic that should be explored further by the scholarly community. For example, the MLA has standards for how to cite a tweet - does this mean that publishers and librarians should have standards on the veracity of said tweet? Does this include peer review? High quality open access journals have shown that traditional peer review standards of verification and authenticity are not limited to traditional means of publishing, just as there are poor quality subscription journals that prove the same. Business models aside, it seems that libraries and presses share similar values when it comes to integrity in scholarship and similar hurdles when it comes to unconventional means of scholarship.
Their shared challenges and values, along with the mission of supporting scholarship in their institutions and at large, are a common ground on which libraries and presses can build relationships and plan for the new future of scholarly communication. Strategic planning and partnerships are key in establishing and marketing value in an increasingly loud and crowded information marketplace. In fact, recently at the first LPC meeting, the tone toward presses was decidedly friendly from the libraries, and many of the speakers talked about collaboration rather than crisis. Judging from published examples and interviews, the shape of these partnerships will be different depending on each context, but relationship-building and resource-sharing have incredible value as the landscape of scholarship itself changes.
2019-05-05T13:04:41.400Z
2014-12-11T00:00:00.000
{ "year": 2014, "sha1": "859943ae675207b8e6f64e1b65b274be4555f1df", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7710/2162-3309.1102", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "eaca8bcd4cd89f99b787853a32baacd941e88a73", "s2fieldsofstudy": [ "Education", "Political Science" ], "extfieldsofstudy": [ "Sociology" ] }
237443878
pes2o/s2orc
v3-fos-license
Introduction: The Grassroots Right in Latin America: Patterns, Causes, and Consequences

After a decade of leftist governments, the Latin American right is resurgent. While rightist and center-rightist politicians and parties have come to power in a number of countries, the shift is most significant at the grassroots. This special section of Latin American Politics and Society is dedicated to understanding the "grassroots right": the diverse citizens, civil society associations, and religious groups engaged in activism to support right-wing issues. Their causes range from restricting abortion, affirmative action, and LGBTQ+ rights to expanding gun rights and violently repressing crime to supporting free markets and opposing redistribution. We know too little about the grassroots right in contemporary Latin America. Recent work on civil society in the region has focused mostly on left-leaning groups that mobilize for social and economic rights. For instance, a rich literature has developed to study feminist movements (Baldez 2002; Blofield 2008; Daby and Moseley 2021; Htun 2003; Thayer 2009), LGBTQ+ rights movements (Díez 2015; Encarnación 2016), environmentalists (Herrera and Mayka 2020; Hochstetler and Keck 2007), Black and Indigenous rights movements (Lucero 2008; Paschel 2016; Yashar 2005), and rights-based health movements (Gibson 2019; Mayka 2019; Niedzwiecki and Anria 2019; Rich 2019). Meanwhile, most existing studies of the right have focused on electoral participation and party organizations. These studies have analyzed the public opinion foundations behind voters' support of right-wing parties and candidates (Samuels and Zucco 2018; Wiesehomeier and Doyle 2014; Layton et al. 2021) and changes in the linkage strategies and targeted appeals used by parties on the right to woo voters.

This introductory essay poses four sets of questions on the nature, origins, and impacts of the grassroots right in Latin America. First, what is the grassroots right? We identify the range of issues, identities, and claims embraced by the grassroots right in contemporary Latin America. What unites them, we argue, is an affirmation of traditional social hierarchies, whether patriarchal, heteronormative, cisgender, economic, religious, or ethnic and racial, often defined in reaction against progressive social actors seeking to level them. After conceptualizing and providing a brief overview of the grassroots right, we proceed to several subsequent questions. Second, how does the grassroots right mobilize? What organizational forms, repertoires, and frames does it employ? We show that the grassroots right adopts a range of mobilizing structures, from formal organizations to loose networks. Furthermore, it has adapted many of the strategies, tactics, and frames historically used by left-wing movements and repurposed them to new aims. Third, what has caused the recent rise in grassroots right mobilization? We sketch out several sets of hypotheses explaining the recent growth of the grassroots right: secular societal change and democratization; new grievances and perceived threats, especially in countries governed by the left during the Pink Tide; an expanded infrastructure for mobilizing via Evangelical churches and social media; and international diffusion. Fourth, what is the impact of the grassroots right on policy and democracy? We highlight the movement's mixed record in achieving its objectives and develop hypotheses to explain variation in its impact.
In addressing these questions, we draw on a growing body of existing scholarship on rightist mobilization in Latin America and globally. We also reflect on the four excellent essays in this special section, which represent cutting-edge perspectives on the grassroots right in the region. Together, these articles shine new light on key questions about the roots of right-wing mobilization, policy change, and the relationship between the right and democracy in Latin America.

WHAT IS THE GRASSROOTS RIGHT?

Scholars generally take one of two approaches to defining this object of study, emphasizing either the actors (the who) or their issues (the what). In the former category, for instance, is Timothy Power's 2000 volume on The Political Right in Postauthoritarian Brazil, which defines the right as an "exceedingly large and diverse" set of actors, including "the armed forces, large- and medium-sized landowners, and elements of the industrial bourgeoisie, as well as smaller segments of the Catholic hierarchy, the middle classes, and the media," plus Brazil's authoritarian successor parties (36-37). Following Power, as well as Edward Gibson (1996) and Kevin Middlebrook (2000), James Bowen identifies the right by its "core constituencies": "the upper economic and social strata of society," termed "elites" (2011). Yet in a study of the grassroots, such a definition is unsatisfying; it suggests that nonelites cannot mobilize behind rightist politics. To be sure, Bowen (2011, 5-6) acknowledges that populist politicians can mobilize ordinary citizens to support the right, yet this approach implies an assumption of false consciousness within the "grassroots right" that we do not believe is justified ex ante. Instead, we focus on the what: the issues and ideologies that rightists champion. This approach avoids oversimplifying the interests of entire social groups or classes, instead inquiring into what political actors themselves want. However, it poses a different challenge: to find the thread tying together the extraordinarily diverse issues that rightists champion, which include everything from opposing citizen access to abortion to supporting citizen access to guns. What principle unifies those demands? In one attempt to answer that question, Luna and Rovira Kaltwasser (2014a, 4) put forth a clear, flexible, and expansive definition of the right as "a political position distinguished by the belief that the main inequalities between people are natural and outside the purview of the state." The strength of this definition is its capacity to describe neatly the right's stances on wide-ranging economic and racial issues, areas where the left historically has championed governmental action to redress inequalities. Yet this definition is less apt for describing rightist versus leftist stances on issues such as gay rights and abortion, for which contemporary rightists may actually advocate government policy to enforce inequality or to prevent access to services that could be provided via markets, such as abortion or prostitution. Given the centrality of sexuality politics for Latin America's "neoconservatives" or "New Right," we need a definition that adequately incorporates such stances (Côrrea et al. 2008; Cowan 2014; Lacerda 2019; Vaggione and Machado 2020). Therefore, we define the right as a diverse set of individuals and organizations aiming to maintain societal hierarchies that are perceived as traditional or natural.
This definition differs from that of Luna and Rovira Kaltwasser (2014a) in that it centers rightists' proactive defense of hierarchies rather than their opposition to government action. Such hierarchies might include, for instance, patriarchy, the economic dominance of large businesses or landowners, or the subordination of LGBTQ+ individuals or Black and Indigenous Latin Americans. At the same time, as Eaton (2014) notes, Latin America's right is no longer invested simply in championing discrete issues. Instead, the right has begun "engaging in deeper and ongoing projects of identity formation," deploying "discursive and rhetorical practices that seek to transform political identities" (2014, 87). Although Eaton focuses on sectoral and territorial identities, by 2021 the rightist project of identity formation has deepened and politicized identities ranging from religious to nationalist to antipartisan. Our definition of the right highlights its potential to advance a transformative agenda to entrench inequalities, which conceives of rightists as the true defenders of the traditional moral order. When discussing the grassroots right, we refer to the citizens and civil society groups that engage in activism to support rightist issues and identities. 1 Scholarship on the Latin American right has historically centered political elites, including politicians and governmental actors such as the military (Luna and Rovira Kaltwasser 2014a; Power 2000). However, Latin America's rightward turn in the 2010s was particularly striking among citizens. While rightists of various stripes captured the presidency in a number of countries, from Brazil's far-rightist Jair Bolsonaro to Chile's technocratic center-rightist Sebastián Piñera, nearly every country in the region witnessed prominent movements advocating for conservative positions on issues such as sexuality politics. The emergence of these movements signals that grassroots mobilization can come from a range of socioeconomic backgrounds, not just the subaltern groups traditionally associated with leftist social movements. This is not to say that current rightist movements are entirely novel in Latin America. Prominent right-leaning social movements of the twentieth century included conservative women's movements (Baldez 2002) and militarized groups, such as armed landowners in Brazil (González and Kampwirth 2001; Payne 2000). Still, the grassroots right has flourished since the turn of the millennium, in both volume of mobilization and societal prominence. We distinguish Latin America's grassroots right from two related concepts. First, it is not simply a mass public following elite direction; rightist citizens are independent actors, whose views and behavior are not reducible to their relationship with politicians. Thus, the grassroots right extends beyond, but interacts with, rightist party organizations and electoral activism. Likewise, the grassroots right cannot be reduced to a top-down, "AstroTurf" dynamic, with participation simply manufactured by elites. Second, Latin America's grassroots right often takes the form of social movements, yet it may also defy traditional understandings of movements.
Its organizational forms range from formal or informal social movements to religious congregations and denominations, or sporadic mobilization in contentious action among people who do not know each other and are linked only through social media, as emphasized in the article in this volume by Dias, von Bülow, and Gobbi, as well as the article by Gold and Peña. In practice, what does Latin America's grassroots right look like? Perhaps the single set of issues distinguishing the right today from that in previous periods is the centrality of sexuality politics; for instance, opposition to abortion, LGBTQ+ rights, and school-based sexual education (Biroli and Caminotti 2020;Pérez Betancur and Rocha-Carpiuc 2020;Vaggione and Machado 2020;Zaremberg 2020). In this section, for instance, Corredor and Reuterswärd describe movements opposing "gender equality" in Colombia and abortion access in Mexico, respectively. As Corredor (2019; this issue) argues, opposition to what rightists call "gender ideology" has emerged as a powerful counterframe that enables Latin America's grassroots right to contest the sexual empowerment of lower-status groups. This counterframe derailed public school sexual education initiatives in countries such as Brazil and Peru (Payne and de Souza Santos 2020; Rousseau 2020) and has even been harnessed to attack seemingly unrelated projects, such as Colombia's peace process (Corredor this issue). One reason sexuality politics has become a magnetic core for the right is that these issues have unique power to mobilize religious conservatives-both Catholics and the growing body of conservative Evangelicals (Smith 2019a;Smith and Boas 2020). While no topic has aroused more attention than sexuality politics, the grassroots right mobilizes around many other issues. Most classically, it may oppose social policies benefiting marginalized groups, such as conditional cash transfers and affirmative action in Brazil (e.g., Côrrea 2015;Feres Júnior and Toste Daflon 2015). Relatedly, the regional autonomy movement in Santa Cruz, Bolivia arose in reaction against the Evo Morales administration's efforts to empower Indigenous Bolivians and redistribute resources across regions (Bowen 2014;Eaton 2007). Similarly, Argentina's agricultural producers revolted against an export tax hike to fund social programs during the administration of Cristina Fernández de Kirchner (Fairfield 2011). In all these cases, rightist civil society actors mobilized to defend in-group economic interests, opposing redistributive policies under leftist administrations. Another area of rightist mobilization relates to security and crime, when citizens organize to support mano dura or "tough on crime" policies. Examples include mobilization to demand repressive policing of marginalized groups in São Paulo (González and Mayka n.d.), movements for gun rights in Brazil (Bob 2012, chap. 6), and organized opposition to police reform in Argentina (Eaton 2008, 22-25). Mobilization also has taken a distinctively international dimension as rightists oppose the specter of creeping communism and Chavismo, with Cuba and Venezuela serving as effective bogeymen in arousing opposition throughout the region. Because the right's objectives are socially constructed and contextual, its salient issues and identities vary across space and time. 
For instance, the European right tends to target immigrants to a greater extent than the Latin American right, which instead focuses on dark-skinned and lower-class internal enemies, such as bandidos in slums. Thus the constellation of issues and identities that characterizes the grassroots right in twenty-first-century Latin America is not definitional to the right. Instead, rightist activists can be identified via their general opposition to state intervention to dismantle existing inequalities and their support for a "traditional" social order. It is important to distinguish the rise of the grassroots right from two other recent trends in Latin America: autocratization and populism. In some cases, these three trends go together: most notably in the Brazilian movement leading to the downfall of Dilma Rousseff and the rise of Jair Bolsonaro (Dias et al. this volume; Gold and Peña this volume; Cohen et al. Forthcoming;Layton et al. 2021). Grassroots right mobilization also coincided with authoritarianism in the 2019 movement overturning Evo Morales's disputed electoral victory, which led to the oneyear right-wing rule of Jeanine Áñez in Bolivia. Nevertheless, support for authoritarianism and populism are uncorrelated with rightism; in recent years, right-wing movements have mobilized to support liberal democracy in Venezuela and elsewhere. 2 The grassroots right might even sometimes be a stabilizing, democratizing force, giving rightist actors a stake in the democratic system, as well as connecting them to partisan elites (Boas and Smith 2019; Boas 2021; Smith 2019a). HOW DOES THE GRASSROOTS RIGHT MOBILIZE? The grassroots right adopts various organizational forms, ranging from hierarchical institutional structures to loose, informal networks. For instance, the largest civil society organization in the world, the Catholic Church, has deployed its formidable networks and ideological resources to advocate rightist causes, including opposition to abortion and LGBTQ+ rights, as Reuterswärd explains in this issue. 3 Likewise, rapidly growing Evangelical churches have served as key nodes for right-wing activism. The grassroots right also may take the form of economic associations (Fairfield 2011); social movements, such as the anti-sex education movement Con Mis Hijos No Te Metas (Don't Mess with My Children) in Peru (Boas n.d.;Rousseau 2020); or loose coalitions organized among regional movements in Bolivia (Eaton 2007). Moreover, in this issue, both the articles by Dias, von Bülow, and Gobbi and by Gold and Peña describe how social media-based (even anonymous) networks framed issues and drove large cycles of protest, despite minimal formal organization. Right-wing individuals have also mobilized through participatory institutions, instead of forming organizations or movements, to promote police repression of marginalized groups in Brazil (González and Mayka n.d.). Often, activists coordinate through what Jessica Rich (2020) calls "federative coalitions," uniting diverse groups into a hierarchical structure for targeted campaigns while maintaining the flexibility of networks. Strategies and Tactics As a consequence of its organizational diversity, the grassroots right deploys a range of mobilizational tactics, including ones traditionally associated with the left in Latin America. 
Historically, the right leveraged its superior resources and networks to exert influence through institutional channels, from the military to bureaucratic politics to the partisan arena (Bowen 2014;Loxton 2014;Power 2000;Schneider 2004). Moreover, right-wing activists often turned against democracy and toward armed violence when they saw democratic politics as inadequate for interest representation (Payne 2000). Democracy was most stable in places where strong rightist parties came to support democratic institutions, while the defection of the right destabilized democratic regimes in other countries (Gibson 1996). In the twenty-first century, however, these channels narrowed, as democratization led to an increasing distance between military and political actors, and as leftist governments came to power. The right has added new forms of contentious mobilization to its repertoire, including blockades and street protests. In 2008, for instance, Argentine rural agricultural producers made international headlines by blockading highways to stop tax increases, imitating a tactic developed a few years earlier by the piqueteros (Fairfield 2011;Richardson 2009, 250-51). In Brazil, a cycle of protest that began in 2013 (Alonso and Mische 2017) eventually culminated in massive protests to impeach Dilma Rousseff (Dias et al. this issue; Gold and Peña this issue). Meanwhile, partic-ipation in large-scale street demonstrations, such as the annual, Evangelical-dominated March for Jesus in several countries, has helped to construct the politicized identities necessary for subsequent activism. This shift is seen in public opinion data from the AmericasBarometer. Across the region as a whole, participation in protests remained slightly more common among citizens on the left than on the right, even in 2018. However, the Pearson correlation coefficient between protest participation and ideology dropped from .13 in 2006 to .06 in 2018, indicating a growing comfort with protest on the right. The new right's repertoire of contention still includes traditional insider strategies. For instance, Mexico's Catholic pro-life movement relied on backroom deals and campaign contributions to pass anti-abortion amendments in some state constitutions (Reuterswärd this issue; see also Zaremberg 2020). In Peru, groups linked to Catholic and Evangelical churches have drafted legislation replacing legislative references to gender with the words women and men, defined as distinct and complementary (Rousseau 2020, 30-31). Other recent instances of lobbying include efforts to promote mano dura policies in Bogotá and Argentina (Eaton 2008;Mayka 2021) and to oppose disarmament in Brazil (Bob 2012, 163-65). The right also deploys legal strategies, turning to the courts to challenge school curricula that recognize gender equity and sexual diversity in Peru (Rousseau 2020, 29) and to contest gun control restrictions in Brazil (Bob 2012, 163). The grassroots right has inserted itself into elite-level political battles; for instance, over impeachment, peace negotiations, and legislation. Dias et al. (this issue) demonstrate how right-wing groups in Brazil used social media platforms to unify ideologically diverse individuals behind the political project to impeach Dilma Rousseff and to mobilize these individuals into street protest. Gold and Peña (this issue) show that recent cycles of protest in Argentina and Brazil produced diffuse and horizontal mass-elite linkages, mediated through social media. 
Similarly, Smith (2019a) argues that Evangelical clergy constitute a distinct type of interlocutor for elected politicians in Brazil; pastors can serve simultaneously as opinion leaders orienting both politicians and masses, and as brokers facilitating mass-elite linkages. As Corredor's article in this issue shows, clergy-politician relationships helped organize Colombian Evangelicals' movement to defeat a referendum on the 2016 peace deal. Just as the grassroots right carries elite-level policy conflicts into the streets, it also mobilizes the streets to influence who occupies the halls of power. Sometimes this happens through financial support for candidates, given the deep pockets of some individuals on the right. In this issue, for instance, Reuterswärd argues that campaign donations help explain successful efforts to pass anti-abortion amendments to some state constitutions in Mexico. The grassroots right is also a fertile soil for cultivating future elites. Gold and Peña (this issue), for instance, reveal that the networks organizing protests via social media in Argentina and Brazil incubated the next generation of young, social media-savvy rightist politicians. Similarly, church leadership has proven an important platform for running for office on the right in Latin America (Boas 2014(Boas , 2021Smith 2019a). Frames As part of Latin America's pluralistic social movement field, activists on the right dispute the frames of leftist movements, counterframing the advances of marginalized groups as threatening the moral foundations of society (González and Mayka n.d.;Payne and de Souza Santos 2020, 33-34). For instance, Peru's Con Mis Hijos No Te Metas frames programs to promote gender equity as assaults on the family and an intrusion into the sacred space of the home (Rousseau 2020). Despite their religious influence, these movements often rely on what Vaggione (2005) calls "strategic secularism": reframing a Christian moral agenda in universalizing terms to build broad support (see also Reuterswärd this issue). The grassroots right also emphasizes valence issues, such as security, opposition to corruption, or patriotism. In Brazil, diverse movements-organized around issues ranging from free market economics to opposition to women's and LGBTQ+ rights to frustration with rising crime-united under an anticorruption frame to demand the impeachment of Dilma Rousseff (Dias et al. this volume). Unifying frames also link right-wing issues to patriotism and nationalism, depicting feminist and LGBTQ+ rights movements as neocolonial intrusions that threaten sovereignty (Biroli and Caminotti 2020, 3;Bob 2012, 162). Perhaps ironically, such nationalist frames are shared across transnational networks of right-wing movements (Bob 2012;Vaggione and Machado 2020, 9-10). In addition, the grassroots right harnesses and repurposes the language of rights that has been historically favored by the left. In the twentieth century, Latin America's religious conservatives developed an alternative scholarship and narrative of human rights to contest abortion rights, one that drew on figures such as the US civil rights hero Rosa Parks to justify civil disobedience (Morgan 2014). More recently, right-wing movements frame expansions of LGBTQ+ rights as endangering children's rights under the United Nations Convention on the Rights of the Child (Biroli and Caminotti 2020; Corredor 2019), or threatening religious liberties (Payne and de Souza Santos 2020, 36). 
They also argue against "gender ideology" and feminism on women's rights grounds, claiming that women's advancement requires acknowledging the fundamental, complementary differences between men and women (Corredor this issue). In the field of public security, grassroots right activists contend that mano dura policing is essential to protect "deserving" citizens' rights to safety and freedom of movement (González and Mayka n.d.). Embracing rights frames yields considerable advantages, presenting rightists' demands as righteous and nonnegotiable and depicting their opponents as depraved human rights violators (Bob 2019, 14;Mayka 2021). Moreover, rights frames open up new venues for contention, given institutional obligations to protect human rights (Bob 2019, 8-9;Mayka 2021;Merry 2005;Vaggione and Machado 2020, 8-9). Yet another effort at counterframing involves repackaging opposition to sexuality politics in scientific terms. Religious anti-abortion activists have long sought partnerships with secular groups on the basis of scientific arguments about fetal development (Morgan 2014). More recently, Santos (2019) demonstrates that Catholic theology of environmental protection has been adapted to justify conservative positions on gender based on supposedly biological distinctions between men and women. Even the notion of gender ideology-a term invented by rightist activists who attribute it to leftists-implies that progressives have adopted false, politicized understandings of gender in contravention of nature and science (Corredor 2019). Reuterswärd (this issue) demonstrates that appealing to biological arguments can play an important role in building support from the nondevout. Future research is needed in these areas. First, how do the strategies, tactics, and frames of right-wing movements shift once the right comes to power? A number of studies have focused on the grassroots right's role in opposing left-wing governments and in mobilizing support for ascendant politicians on the right. As Gold and Peña (this issue) show, having right-wing allies in power can open up new opportunities for influence (as seen in Brazil under Bolsonaro) but might also demobilize and co-opt these movements (as occurred in Argentina following the election of President Mauricio Macri). Further research is needed to understand the conditions that enable sustained mobilization and influence. We also know little about how the grassroots right interacts with institutional channels for citizen participation-especially participatory institutions on issues of concern to the right, such as child welfare policy and security (Rich et al. 2019, 13). Existing studies of participatory institutions have largely ignored these policy sectors, signaling the need for research that explores whether right-wing groups might seek to occupy these participatory institutions typically associated with the left. 4 Furthermore, the crucial role of social media in the mobilization of the grassroots right raises important questions about how social media platforms have connected right-wing activists throughout the world, transformed transnational advocacy networks, and enabled the diffusion of strategic repertoires and frames across borders. WHAT CAUSES THE GRASSROOTS RIGHT? What has led to the present surge of grassroots mobilization on the right? We can immediately dismiss one possibility: the hypothesis that Latin American citizens are becoming more conservative. 
Analyzing AmericasBarometer data on citizens' selfplacement on a 1-10 ideological scale, where 1 indicates far left and 10 indicates far right, we see little movement in ideological self-placement in the aggregate between 2006 and 2019, across the region as a whole. In most waves, a bit over 10 percent of citizens place themselves at 10 on the scale (indicating association with the far right), and a bit more than one-third choose a position from 6 to 10. 5 Similarly, Abreu Maia et al. (2020) find no evidence that either progressive policies or conservative social movement backlash has led to growing conservatism on sexuality politics issues among Latin Americans, on net. What, then, might drive increasing mobilization? In the paragraphs that follow, we sketch a series of speculative hypotheses. First, scholars point to long-term changes in the political opportunity structure that encourage grassroots mobilization on the right. Democratization, growing state capacity, and changing international norms have made recourse to military intervention and armed rebellion increasingly unattractive for societal elites dissatisfied with democratically elected leaders (Linz and Stepan 1996;Loxton 2014;Power 2000). Moreover, the secular decline of clientelism and patronage politics has restricted another channel for upper-class influence on politics (Eaton 2014, 76;Weitz-Shapiro 2014). Even behind-the-scenes bureaucratic influence may be increasingly foreclosed to rightist societal elites, as centrist and leftist administrations have bolstered authority and autonomy within the bureaucracy (Abers 2019; Rich et al. 2019, 4-5;Rich 2019). Thus, the rise of the grassroots right might indicate the maturation of Latin American democracies, in which rightist groups must compete for policy influence on relatively even footing with other civil society actors. From this perspective, it is notable that even movements with close ties to elites-such as the Mexican prochoice movements analyzed by Reuterswärd in this issue-operate through protest and lobbying instead of just relying on secret back channels. A second hypothesis highlights the rise of Evangelical Christianity in most Latin American countries over the past several decades (Boas and Smith 2015;Chesnut 2003;Pew Research Center 2014). Both Evangelicalism and Catholicism are highly pluralistic traditions, encompassing actors ranging from the far right to the far left; neither has been associated with uniformly rightist voting and activism in Latin America (Boas and Smith 2015). Nevertheless, Evangelical politics may lean to the right of Catholic politics in the current era. Despite substantial moderation since the 1980s, Catholic progressive groups with roots in liberation theology organizing of the 1960s and 1970s remain active even today. By contrast, such movements were always weaker in Evangelicalism (Boas n.d.;Mainwaring and Wilde 1989). In addition, although Latin America's lay Evangelicals are centrists or even slightly left of Catholics on issues such as the environment, crime, and economics, they are distinctly conservative on sexuality politics-precisely the bundle of issues that has become most salient for the contemporary grassroots right (Smith and Boas 2020;Smith and Veldman 2020). The rise of Evangelical Christianity has bolstered new mobilizing structures on the right. 
Evangelical converts attend church more frequently than they did as Catholics; one recent study conservatively estimated that Evangelicals and Pentecostals spend at least twice as many hours in church per year as do Catholics (Smith 2019b). Members of Evangelical and Pentecostal churches may be particularly prone to adopt shared political views, due to regular socialization, shared identities, and the salience of moralistic frames (Smith 2019a). Hence, Latin America's changing religious landscape is likely to be associated with increased exposure to religious messages with political import. Evangelical churches offer a promising mobilizing structure to channel these new grievances into political participation, as seen in Corredor's article in this issue on right-wing activism against the 2016 Colombian peace deal. A third set of hypotheses points to the emergence of new grievances. It is not a coincidence that the grassroots right arose in the wake of Latin America's Pink Tide of the first decade of the 2000s. As left-wing administrations engaged in ambitious programs for redistribution and reducing economic and ethnic and racial inequality, privileged sectors had new reasons to feel threatened (Eaton 2007; Fairfield 2011). Moreover, the tragically poor performance of Venezuelan Chavismo and a more mundane frustration with corruption and economic stagnation in countries such as Argentina and Brazil triggered backlash that crossed class lines. Consequently, grassroots right protests attracted a wide range of participants, united by opposition to the policies and records of leftist administrations (Alonso and Mische 2017, 6-7; Eaton 2007, 85; Tatagiba and Galvão 2019, 79-85). Although protests initially targeted isolated grievances, they coalesced into larger movements that generated antileftist identities (Alonso and Mische 2017; Samuels and Zucco 2018; Dias et al. this issue). These new antileft identities appear to be particularly powerful among popular sector voters who had, ironically, risen out of poverty under left-leaning governments (Junge 2019; Naím 2015; Pinheiro-Machado and Mury Scalco 2020). One particular set of grievances bears special mention: anxiety over changes in sexuality politics. Although the latter half of the twentieth century brought major changes in women's rights and family structures, the first decades of the twenty-first century ushered in previously unimaginable policy changes on abortion and same-sex marriage, as well as dramatic shifts in understandings of gender and gender norms (Abreu Maia et al. 2020; Côrrea et al. 2008; Corredor 2019). Although there is little evidence that these changes triggered a conservative backlash in public opinion in society as a whole (Abreu Maia et al. 2020), they did raise the salience of such issues for social conservatives. In a new paper, Smith and Boas (2020) show that when issues such as same-sex marriage and abortion rise in the news, religious conservatives, including Catholics but especially Evangelicals, increasingly translate their conservative views into political behavior. As Reuterswärd (this issue) shows, in the Mexican state of Yucatán, anxieties about the advances made by LGBTQ+ movements triggered right-wing mobilization to expand restrictions on sexuality more broadly, including a state constitutional ban on abortion.
While rightwing movements have organized in response to shifts in sexuality, they are not advocating a return to the status quo, but instead a transformed gender order that imposes new regulations on sex and sexuality (Corredor this issue). Fourth, the rise of social media has served as a valuable resource that enables flexible and adaptable organizational forms (Castells 2012;Rich 2020, 432) and can translate grievances into collective action through framing (Dias et al. this issue). Social media may be particularly suited for rightist mobilization. As Gold and Peña argue in this issue, Latin America's leftist parties were historically more effective than rightist parties in cultivating organizational linkages with civil society. However, social media may invert that scenario, advantaging groups with deep pockets or wealthy friends who can support costs that range from "boosting" posts to funding technological access and know-how (Schradie 2019). Indeed, Gold and Peña (this issue) show that right-wing parties have formed novel forms of linkages with digital activist groups. Moreover, Dias, von Bülow, and Gobbi argue in this issue that the "reductionism" and "antagonism" of right-wing populist messages makes them particularly effective in social media. Fifth, the twenty-first century has brought new patterns of diffusion across borders. Growing ties among the global right wing have contributed to diffusion processes through transnational networks. One understudied topic is the mobilization of opposition to Chavismo as an organizing tool across borders, acting in parallel to the transnational coordination among leaders of the Bolivarian left. Moreover, we have also seen the emergence of transnational advocacy networks on rightist issues, including gun rights and opposition to abortion and LGBTQ+ rights (Bob 2012). Additional studies are needed to examine empirically the relative importance of these explanations for the rise of the grassroots right as a whole or in particular cases. Moreover, it is worth noting that the literature implicitly posits that the grassroots right must be composed of different segments of society, including at a minimum economic elites, religious conservatives, and members of the popular sectors. Future research should investigate how these very different social segments have constructed networks and engaged in collective action and how they navigate the challenges of sustaining mobilization with such a heterogenous coalition. Yet another line of inquiry that so far remains underexplored, with the exception of important work by Clifford Bob (2012), relates to the international right-leaning networks-both "North-South" and within Latin America-that link the grassroots right across the region. From a more theoretical perspective, further questions remain. One relates to the circumstances under which progressive or left-wing organizing thrives in both churches and social media. While there does appear to be an elective affinity between these mobilizing platforms and rightist issues and identities, a long history of leftist organizing in both churches and social media leaves no doubt that these platforms are far from hegemonic for the right. Theoretically informed explanations of the multivalent nature of these organizing platforms in Latin America could provide an important contribution to the literature. WHAT IS THE IMPACT OF THE GRASSROOTS RIGHT? 
The grassroots right has had varying impacts on a range of outcomes related to public policy and democracy, including legislation, social citizenship rights, political parties and party linkages, and polarization. The grassroots right's track record on legislation related to public policy has been mixed so far, often disappointing to newly mobilized rightist citizens and civil society groups. 6 For instance, following the legalization of abortion in Mexico City, anti-abortion movements were able in short order to get anti-abortion constitutional amendments passed in 19 (out of 32) state constitutions in Mexico-but failed to do so in other states (Reuterswärd this issue; Zaremberg 2020). In Colombia, right-wing activists mobilized fears over gender ideology to defeat a popular referendum to approve a peace deal with the FARC guerrilla group. Yet this movement failed to stop the deal in Congress or even to significantly change the gender-sensitive language in the peace plan (Corredor this issue). Most recently, pro-choice activists outmaneuvered the anti-abortion movement in Argentina when abortion at up to 14 weeks of pregnancy was legalized. It is impossible to explain the success or failure of the grassroots right without also considering the strategies, resources, and frames deployed by progressive movements. In the case of the Argentine abortion legalization, for instance, the failure of the grassroots right was the flip side of the success of feminist activists. Feminist movements' use of frames related to public health and social justice for poor women proved to be more effective than pro-life narratives about the human rights of the unborn (Daby and Moseley 2021;Morgan 2014). Just as the policy successes of leftwing governments may restrict the policy alternatives of subsequent rightist leaders (Niedzwiecki and Pribble 2017), the actions taken by left-wing movements may constrain the strategies and impact of movements on the right. We need more empirically careful and theoretically informed case studies to begin to draw general conclusions about the relative strengths of the grassroots right and progressive groups in such battles and how the actions of each side change the political calculus of the other. One promising case study may be Chile's constitutional rewriting process, which will certainly draw both rightist and leftist mobilizing. Additional work is needed to understand how allies inside the state shape the influence of right-wing groups. As the 2019 special issue of this journal on the "politics of participation in Latin America" ) emphasizes, allies within the bureaucracy and judiciary might amplify the policy impact of civil society groups. These allies may support the aims of the grassroots right, as seen in Corredor's analysis of the peace process in Colombia. Likewise, actors such as medical professionals, who are tasked with implementing policies such as abortion, can coordinate with rightwing movements to block progressive changes (Morgan 2014;Pérez Betancur and Rocha-Carpiuc 2020). Yet progressive bureaucrats can also block the social change envisioned by the grassroots right (Abers 2019). In other words, even if the grassroots right succeeds in electing its allies, state actors may resist conservative retrenchment. Further theoretical and empirical work remains for scholars to disentangle the complex interactions within the state that shape the impact of the grassroots right in both advocating legislative change and influencing policy implementation. 
Turning from discrete policy changes to an overall view, how has the grassroots right impacted Latin American democracy? We would argue that the grassroots right has enhanced democratic quality in some instances by providing representation and opportunities for participation to groups that historically had been excluded, such as Evangelicals (Boas 2021;Boas and Smith 2019;Smith 2019a). This position is not dissimilar to an older argument that rightist participation in democratic politics forestalled military coups and armed right-wing subversive activities by giving those on the right a stake in the democratic system (Loxton 2014;Payne 2000;Gibson 1996). In addition, in countries led by the "radical" or Bolivarian left, right-wing activists have, at times, mobilized to support liberal democratic institutions and checks and balances. Yet the grassroots right also has nonetheless potentially pernicious effects on democracy, particularly when it seeks to restrict the rights of marginalized groups. Indeed, Pentecostal and Evangelical groups have been the most prominent opponents of LGBTQ+ rights and inclusion in Latin America (Corrales 2017). Right-wing activists have demanded repressive policing that violates the civil liberties and procedural rights of marginalized communities, even rejecting the very notion that these groups should have citizenship rights (González and Mayka n.d.). In some countries of the region, grassroots right mobilization has accompanied societal trends toward an increasingly Manichean polarization. Yet leftists have also abetted polarization in Latin America, which is why we argue that both populism and authoritarianism are orthogonal to left-right divides in Latin America. Overall, we see these trends toward increasing polarization, populism, and illiberalism as worrisome for the long-term stability of Latin American democracy. These tensions pose an urgent research agenda for future scholars. What circumstances encourage the representation of diverse ideological perspectives without exacerbating populism, polarization, and illiberalism? Under what circumstances and in what contexts can grassroots right mobilization be a force for stability and inclusion without sacrificing the rights of other groups? One line of investigation that may be promising focuses on the potential of "rights" frames for bridging divides. In the US context, for instance, Lewis (2017) shows that Evangelicals' strategic adoption of rights-based framing in anti-abortion mobilization has had positive downstream consequences for democracy. Such frames have warmed Evangelicals' attitudes toward rights and democracy themselves, and can encourage US Evangelicals to become more tolerant of groups they oppose (Lewis 2017). Future work should investigate the roles of frames and political context in inclusion and moderation of the grassroots right in Latin America. OVERVIEW OF ARTICLES IN THIS SPECIAL SECTION The four articles review recent developments in mobilization by the grassroots right, examining a range of countries (Argentina, Brazil, Colombia, Mexico) and issues (LGBTQ+ rights, abortion, peace processes, corruption, removing elected officials from office). They highlight innovations in strategies, tactics, and frames and generate new hypotheses about the causes and consequences of the grassroots right in Latin America. The article by Camilla Reuterswärd explores the impact of right-wing movements through a comparison of struggles over abortion in two Mexican states. 
In Yucatán, pro-life movements succeeded in getting a restrictive anti-abortion law passed, while a comparable initiative failed in Hidalgo. Both pro-life movements framed their demands in biological and bioethics terms while cloaking the role of the Catholic Church. To explain their varied success, Reuterswärd points to movement resources, leveraged through tight networks between religious organizations, economic elites, and politicians. The hegemonic Yucatán Catholic Church mobilized moral and financial resources to pressure politicians to pursue anti-abortion reform. Conversely, the church in Hidalgo lacked such ties, creating an opening for feminist groups' demands. Reuterswärd signals the roles of networks, material resources, and frames in explaining the impact of the grassroots right. The article by Elizabeth Corredor also examines the policy impact of the grassroots right, this time in opposition to Colombia's 2016 peace agreement. Opponents of "gender ideology" objected to an intersectional approach used in the peace deal that recognized diverse sexual identities and promoted gender equality. Antigender activists creatively framed their opposition using human rights discourses, portraying a gender-sensitive approach as an assault on freedom of religion, the right to life, the right to marriage and a family, and the right to dignity. Using careful content analysis, however, Corredor demonstrates that anti-gender activists ultimately failed to strike references to gender sensitivity from the final document; nor could they stop its legislative passage, despite close ties to prominent politicians. One implication is that backlash against progressive causes does not necessarily succeed but can spark new dynamics of contention as movements and countermovements respond to each other. Tayrine Dias, Marisa von Bülow, and Danniel Gobbi illustrate framing processes via an impressive content analysis of thousands of Facebook posts made by five right-wing groups during the 2017 campaign to impeach Brazilian president Dilma Rousseff. They show that despite marked ideological differences, these five groups coordinated messages that cast blame for endemic corruption on the ruling Workers' Party and framed impeachment as the necessary corrective. This article highlights the crucial role of social media in creating a shared interpretation of grievances among right-wing individuals who may share few offline network ties and hold diverse political views. In their study, Tomás Gold and Alejandro Peña further probe the power of social media in right-wing activism, comparing protest cycles in Argentina (2012-13) and Brazil (2013-16). Gold and Peña argue that right-wing parties have developed novel forms of linkages with voters by aligning with digital activist groups, leading those parties to embrace protest and other forms of contentious politics that were historically the purview of leftist and other labor-based parties. Although these linkage strategies enabled right-wing parties to rise to power in both countries, digital activists maintained an important role under rightist rule in Brazil while being displaced in Argentina. This divergence raises important questions about how digital linkage strategies endure over time and their potential to yield institutional influence for the grassroots right. Together, the four essays shed new light on the role of the grassroots right in politics and raise important questions for further study. 
Right-wing activists have emerged as central figures in contemporary struggles over public policy, human rights, and partisan politics. We hope this special section serves as a call for future research that helps make sense of the crucial role of the grassroots right in Latin American democracies. NOTES 1. In emphasizing mobilized support for rightist issues and identities, our definition excludes militarized local right-leaning organizations whose primary purpose is criminal or extralegal territorial control, such as the milícias that dominate many urban neighborhoods in Brazil (Manso 2020). However, it does include right-wing groups that employ violent strategies. 2. During the early years of the Hugo Chávez presidency, Venezuela's opposition undertook a number of antidemocratic moves (Gamboa 2017). As the Venezuelan regime hardened, right-wing opposition groups shifted their approach to support the checks and balances involved in democracy. However, authoritarian strands persist among some in the opposition. 3. The Catholic Church's issue priorities are nonetheless diverse and conflictual, and defy easy categorization on the left-right spectrum (Hagopian 2008;Hale 2019;Warner 2000). 5. This analysis is of the combined 2008 to 2018-2019 AmericasBarometer file. For data access, see www.americasbarometer.org. These estimates include people who choose nonresponse to the ideology question, given evidence that understanding of and ability to locate oneself on the ideological scale is uneven and nonrandomly distributed in Latin America (Ames and Smith 2010;Batista Pereira 2020;Zechmeister and Corral 2013). The conclusion is roughly similar if we examine only citizens choosing a position from 7 to 10. 6. Although we will not address this topic here for the sake of space, it is worth noting that the grassroots right has not had overwhelming success in electoral politics, either. The 2018 election of Jair Bolsonaro in Brazil is the new right's most prominent electoral success, but the 2018 second-round loss of the Evangelical, anti-gender-ideology pastor Fabricio Alvorado in Costa Rica may more accurately reflect the general fate of the grassroots right at the ballot box in Latin America.
2021-09-09T13:18:07.715Z
2021-08-01T00:00:00.000
{ "year": 2021, "sha1": "b7547ff57cd06975d8e9cccb5ef7131246a4aa1a", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/F631FC3F628FA1A71186680D545DBC67/S1531426X21000200a.pdf/div-class-title-introduction-the-grassroots-right-in-latin-america-patterns-causes-and-consequences-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "b7547ff57cd06975d8e9cccb5ef7131246a4aa1a", "s2fieldsofstudy": [ "Political Science", "Sociology" ], "extfieldsofstudy": [] }
56384735
pes2o/s2orc
v3-fos-license
Genetic Analysis for Yield and Some Yield Traits in Spring Wheat

A 5x5 diallel cross involving five wheat varieties/lines (Kohistan-97, Chakwal-86, 6529-11, 6544-6, and 7086-1) was conducted. Twenty hybrids along with the five parents were planted in a randomized complete block design with three replications in order to determine the gene action controlling vital polygenic yield-related attributes: plant height, spike length, peduncle length, number of tillers per plant, and grain yield per plant. Highly significant differences among genotypes were observed for all traits; plant height showed only significant differences. The graphical presentation demonstrated that number of tillers per plant was governed by partial dominance with an additive type of gene action, while over-dominance was observed for plant height, spike length, peduncle length, and grain yield per plant. This indicates the potential for transgressive segregates in later filial generations. The prevalence of a partial dominance type of gene action for number of tillers per plant indicates that this trait can be gradually improved by selection.

Materials and Methods

This study was conducted during the year 2010. The material comprised five varieties/lines of spring wheat, viz. Kohistan-97, Chakwal-86, 6529-11, 6544-6, and 7086-1. This material was planted in the field on 18th November 2009 in twin rows of 5 m length. The crosses were attempted in a diallel fashion including reciprocals. The hybrid seeds, including reciprocals, and the parents were sown in an RCBD with three replications. All the entries (20 crosses and 5 parent lines) were randomly assigned to 25 plots in each replication. Each plot consisted of a single row of 5 m length. The plant-to-plant and row-to-row distances were 15 and 30 cm, respectively. Two seeds per hole were sown, and after germination these were thinned to a single seedling per site to ensure a good plant stand. At maturity, ten guarded plants from each line were taken at random and data were recorded on plant height, number of tillers per plant, peduncle length, spike length, and grain yield per plant. The data were subjected to analysis of variance, and gene action was worked out for the characters showing significant differences among the genotypes.

Results and Discussion

Analysis of Variance

Analysis of variance (Steel et al. [23]) showed highly significant differences among genotypes for all the traits except plant height, for which only significant differences were noted (Table 1).

Plant Height

Plant height is an important trait of wheat. Wheat cultivars genetically vary in plant height from short stature to medium and tall. Tall cultivars are more vulnerable to lodging than medium or short stature cultivars, so in breeding, wheat plant height should also be considered an important trait. Phenotypic expression of any trait is the outcome of the genotype × environment interaction. The five parents included in this study varied from medium to tall stature. Analysis of variance indicated significant differences among the genotypes. It is evident from the graphical analysis that plant height is controlled by an over-dominance type of gene action, as the regression line intercepted the Wr-axis below the point of origin (see Fig. 1). Non-allelic interactions were absent, as the regression line did not deviate from the unit slope.
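The Wr/Vr reading used here (and for the other traits below) follows Hayman's graphical diallel approach: for each parental array, Vr is the variance of that array and Wr is its covariance with the parental values; Wr is then regressed on Vr, with a slope near unity indicating no epistasis, an intercept above the origin suggesting partial dominance, and an intercept below it suggesting over-dominance. The minimal sketch below illustrates that computation; it is not the authors' code, and the 5x5 table of family means is invented purely for illustration.

```python
# Minimal sketch of a Hayman-type Wr/Vr computation for a full diallel
# (not the authors' analysis; the 5x5 matrix below is illustrative only).
import numpy as np

# diallel[i, j] = mean of cross (parent i x parent j); diagonal = parents
diallel = np.array([
    [110.0, 108.5, 109.2, 107.8, 108.9],
    [108.5, 109.3, 108.8, 108.1, 109.5],
    [109.2, 108.8, 107.1, 107.9, 108.4],
    [107.8, 108.1, 107.9, 106.8, 107.5],
    [108.9, 109.5, 108.4, 107.5, 108.2],
])

parents = np.diag(diallel)
n = diallel.shape[0]

# For the r-th array: Vr = variance of that row,
# Wr = covariance of that row with the parental values
Vr = diallel.var(axis=1, ddof=1)
Wr = np.array([np.cov(diallel[r], parents, ddof=1)[0, 1] for r in range(n)])

# Regression of Wr on Vr: slope close to 1 means no epistasis; intercept > 0
# suggests partial dominance, intercept < 0 suggests over-dominance
slope, intercept = np.polyfit(Vr, Wr, 1)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```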
From the array means, variety Chakwal-86 was indicated as the best general combiner, with an array mean value of 109.30 cm, while genotype 6529-11 was the poorest performer, with an array mean value of 107.07 cm. The current results are in accordance with the findings of Kashif et al. [16], Akram et al. [1], Iqbal [13], Hafeez [6], Haidari et al. [9], Saleem et al. [22], Nazeer [20], Uma and Sharma [25], Gurmani et al. [5], and Nazan [19]. The distribution of array points indicates that variety Chakwal-86 contained the maximum dominant genes for plant height, being nearest to the origin, while genotype 7086-1 was farthest from the origin, indicating the maximum recessive genes for plant height. Because an over-dominance type of gene action controlled this trait, selection in early generations for plant height would be difficult.

Spike Length

Spike length is an important yield-related trait of wheat. It contributes directly to yield. Greater spike length produces more spikelets per spike, which ultimately produce more grains per spike, thus leading to a yield increase. So in wheat breeding, importance should be given to this trait, and spikes with greater length should be selected. Analysis of variance for spike length revealed highly significant differences among the genotypes under study. From the array means, genotype 6529-11 was indicated as the best general combiner, with an array mean value of 13.79 cm, while genotype 6544-6 was the poorest performer, with an array mean value of 13.04 cm. The graphical presentation indicated the presence of an over-dominance type of gene action for this trait, as the regression line intercepted the Wr-axis below the origin (see Fig. 2). Epistasis is absent, as the regression line followed the unit slope. So it can be concluded from the present study that selection in early segregating generations will not be possible.

Peduncle Length

Peduncle length is also an important trait in wheat. It plays an important role in plant yield during the heading stage. It varies from genotype to genotype. Analysis of variance of the data showed highly significant differences among genotypes for this trait. Perusal of the array means revealed that genotype 6529-11 was a good general combiner, with an array mean value of 18.47 cm, while variety Chakwal-86 showed poor performance, with an array mean value of 17.17 cm. An over-dominance type of gene action is observed from the graphical analysis, as the regression line intercepted the Wr-axis below the point of origin (see Fig. 3). Due to the over-dominance type of gene action, selection for this trait in early generations is difficult. Epistasis is not present because the regression line did not deviate from the unit slope. These results are in accordance with Munis [18], Ullah et al. [24], Uma and Sharma [25], Hafeez [6], and Farooq [4]. It is clear from the distribution of array points that genotype 6544-6 contained the maximum dominant genes for this trait, being nearest to the point of origin, while variety Chakwal-86 had the maximum recessive genes, being farthest from the origin.

Number of Tillers per Plant

Number of tillers per plant is a vital yield-related trait. A greater number of tillers per plant ensures higher grain yield. Analysis of variance showed highly significant differences among all genotypes. Graphical analysis for tillers per plant revealed an additive type of gene action with partial dominance, as the Wr-axis is touched above the point of origin by the regression line (see Fig. 4). As the regression line follows the unit slope, epistasis is not present. This suggests early-generation selection for this trait.
From the array mean table, it is revealed that genotype 6544-6 was a good general combiner, with an array mean value of 11.78, while variety 7086-1 had the poorest performance, with an array mean value of 11.03. These results are also confirmed by Khaliq and Chowdary [17], Akram et al. [1], Inamullah et al. [11], Hafeez [6], Ullah et al. [24], Bakhsh [3], Farooq [4], Nazeer [20], Inamullah et al. [12], and Gurmani et al. [5].

Grain Yield per Plant

The ultimate goal of any wheat breeding program is maximum yield, so grain yield is the most important trait. A plant expresses its potential most fully when all its needs are met in the right proportions and at the right time. Analysis of variance showed highly significant differences among all parental and hybrid genotypes. Variety Kohistan-97 performed as the best general combiner, with the highest array mean of 25.07 g, while genotype 7086-1 was the poorest performer, with an array mean of 21.88 g. Graphical analysis indicated that the regression line intercepted the Wr-axis below the point of origin, thus revealing an over-dominance type of gene action (see Fig. 5). Non-allelic interactions are absent, as the regression line did not deviate from the unit slope.

Conclusions

The present study revealed the presence of an over-dominance type of gene action for plant height, peduncle length, spike length, and grain yield per plant, whereas an additive type of gene action with partial dominance was found for number of tillers per plant. Transgressive segregates can be found for plant height, spike length, and number of tillers per plant in later segregating generations. However, desirable peduncle length can be fixed by gradual selection in segregating populations.
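For completeness, the analysis of variance referred to in the Materials and Methods is a standard two-way RCBD ANOVA, with genotypes and replications as sources of variation and the genotype mean square tested against the error mean square. The sketch below shows that computation on simulated numbers; it is not the authors' analysis, and the generated values merely stand in for the recorded trait data.

```python
# Minimal sketch of an RCBD analysis of variance (not the authors' code;
# the simulated data below stand in for the recorded trait values).
import numpy as np

rng = np.random.default_rng(0)
# rows = 25 entries (20 crosses + 5 parents), columns = 3 replications
y = rng.normal(loc=110.0, scale=3.0, size=(25, 3))   # e.g., plant height (cm)

g, r = y.shape
grand = y.mean()

ss_total = ((y - grand) ** 2).sum()
ss_geno = r * ((y.mean(axis=1) - grand) ** 2).sum()
ss_rep = g * ((y.mean(axis=0) - grand) ** 2).sum()
ss_error = ss_total - ss_geno - ss_rep

df_geno, df_rep = g - 1, r - 1
df_error = df_geno * df_rep

# Genotype F ratio, compared with the F distribution on (df_geno, df_error)
f_geno = (ss_geno / df_geno) / (ss_error / df_error)
print(f"F(genotypes) = {f_geno:.2f} on ({df_geno}, {df_error}) df")
```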
2019-03-30T13:07:20.570Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "758534a05154b64b7e1263a5831e50a23e5570d0", "oa_license": "CCBY", "oa_url": "http://www.hrpub.org/download/20141001/UJAR7-17102585.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "97f82376fe341bd26fbd6283214bfd416f5e22c2", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
49353073
pes2o/s2orc
v3-fos-license
Population Genomics Provide Insights into the Evolution and Adaptation of the Eastern Honey Bee (Apis cerana) Abstract The mechanisms by which organisms adapt to variable environments are a fundamental question in evolutionary biology and are important to protect important species in response to a changing climate. An interesting candidate to study this question is the honey bee Apis cerana, a keystone pollinator with a wide distribution throughout a large variety of climates, that exhibits rapid dispersal. Here, we resequenced the genome of 180 A. cerana individuals from 18 populations throughout China. Using a population genomics approach, we observed considerable genetic variation in A. cerana. Patterns of genetic differentiation indicate high divergence at the subspecies level, and physical barriers rather than distance are the driving force for population divergence. Estimations of divergence time suggested that the main branches diverged between 300 and 500 Ka. Analyses of the population history revealed a substantial influence of the Earth’s climate on the effective population size of A. cerana, as increased population sizes were observed during warmer periods. Further analyses identified candidate genes under natural selection that are potentially related to honey bee cognition, temperature adaptation, and olfactory. Based on our results, A. cerana may have great potential in response to climate change. Our study provides fundamental knowledge of the evolution and adaptation of A. cerana. Introduction Adaptation to diverse and changing environments is a fundamental question in evolutionary biology. An understanding of the mechanisms by which variable environments influence the genetic diversity of a species will not only illuminate its evolutionary history but also provide information for the development of strategies designed to effectively protect important species in a changing climate (Skelly et al. 2007;Balany a et al. 2009). In addition, detecting genes under natural selection can help elucidate the underlying mechanisms of adaptation to local environments and facilitate selective breeding. The eastern honey bee (Apis cerana) is an interesting species to study adaptation to diverse environments, as its natural distribution covers a large variety of environments, ranging from tropical to cold temperate climates and from plains to mountainous areas (Hepburn and Radloff 2011). It also exhibits the ability of fast dispersal (Gloag et al. 2016). However, despite the extensive number of population studies using morphometric characteristics, microsatellites, and mitochondrial DNA segments (Smith et al. 2000;Radloff et al. 2010;Hepburn and Radloff 2011;Tan et al. 2015 and references therein), knowledge of the evolution and adaptation of A. cerana is limited due to the lack of the studies at the genomic level. To date, the most comprehensive studies of A. cerana populations have been conducted by Smith et al. (2000) using mitochondrial DNA and Radloff et al. (2010) using morphometric characteristics, who identified five and six major groups across the range of A. cerana, respectively. Both studies classified the majority of A. cerana in China as one group. However, several studies focusing on different parts of China identified variations in mitochondrial DNA fragments or microsatellite DNA, even in populations from small geographic regions, suggesting large genetic differences among populations in China (Xu et al. 2013;Yin and Ji 2013;Zhao et al. 2014;Tan et al. 2015;Liu et al. 
2016; Cao et al. 2017). This disparity may be attributed to the limited information gained from the A. cerana mitochondrial tRNAleu-COII region, as most of the noncoding region is missing in many populations (Smith et al. 2000). Population genomics provides powerful toolsets and holds great promise in the study of honey bee populations (Lozier and Zayed 2017). Population studies at the genomic level not only increase the power and resolution of traditional genetic approaches but also identify genetic variation in adaptive and economic traits, as well as yield insights into the genetic architecture. However, population genomics of A. cerana has been hampered by limitations in obtaining adequate samples and the lack of a reference genome until 2015 (Park et al. 2015). Genomic research of A. cerana is also desirable to fill gaps in knowledge and to address the need for protection (Teichroew et al. 2017). A. cerana is a keystone species that provides pollination and other valuable ecosystem services that contribute significantly to agricultural production, food security, and nutrition for a growing global population. However, the A. cerana population has experienced a substantial decline in recent decades (Theisen-Jones and Bienefeld 2016). As climate change is one of the major threats (Thomann et al. 2013), studies of aspects of the evolutionary biology of A. cerana are urgently needed to obtain an understanding of how this keystone pollinator may respond to and/or be affected by climate change. The draft genome of A. cerana (Park et al. 2015) has provided a fundamental resource that enabled the resequencing of genomes and population genomic research. In the present study, we collected samples from 18 areas in China that, although they only constitute a small proportion of the A. cerana range, cover a variety of climates from tropical to temperate areas, with terrains such as plateaus, basins, plains, and islands, and resequenced 180 bees, with each individual from a different colony. We also downloaded resequenced data obtained from representative subspecies from major lineages of A. mellifera (Harpur et al. 2014; Chen et al. 2016). Using population genomic methods, we identified genetic variations and explored the population structure and differentiation among A. cerana populations. We also investigated the population history to understand climate-driven demography. Finally, we identified genes related to adaptation to local environments in A. cerana. This study provides insights into the evolutionary history and genetic diversity of A. cerana, as well as an example of mechanisms by which a species can adapt to regions with a variety of climates.

Results

We sampled 180 individuals from 18 populations throughout China (fig. 1A).

Population Structure

We first performed clustering analyses using ADMIXTURE (Alexander et al. 2009) to examine the relationships among the populations (fig. 1B). With a K value of 2, populations from the north (AK, XA, GY, YL, JZ, and MX) formed an ancestral cluster and populations in the west (BM, DQ, and ME) and Hainan Island (HK) formed another cluster, while other populations distributed between these areas showed different degrees of mixed ancestry. As K increases from 3 to 5, HK, BM, and ME showed distinct ancestries from the other populations. With K = 6 and K = 7, DQ, JZ, and YL were separated from the other populations. These populations are mainly from island and mountainous areas and are more isolated from the other populations.
MY and QY were highly admixed in all cases, with large differences among individuals. Cross-validation errors for different K vales are provided in supplementary table S4, Supplementary Material online. The results of the principal component analysis (PCA) using GCTA (Yang et al. 2011) further supported the patterns. The first and second principal components (PC1 and PC2) separated BM, HK, and ME ( fig. 1C), consistent with the ADMIXTURE results at K ¼ 3-5; PC3 further separated JZ, YL, and QY from other populations but was not able to distinguish between YL and QY (supplementary fig. S1, Supplementary Material online); with PC4, DQ was also separated from the other populations, consistent with the ADMIXTURE results at K ¼ 6 and 7. The remaining populations were not separated by PC1 through PC4, further indicating that these populations may be admixed; however, PC3 and PC4 for these populations reflect their distribution along latitude (supplementary figs. S1 and S2, Supplementary Material online). We performed three population tests of admixture using TreeMix (Pickrell and Pritchard 2012) to further test for signature of admixture among the populations. We calculated the corresponding f 3 statistics for all possible combinations of three populations, with a negative f 3 statistic indicating admixture (Reich et al. 2009). Although only three populations (AK, HC, and SJ) showed explicit signals of admixture from different source populations (supplementary table S5, Supplementary Material online), the populations from island and mountainous areas (HK, DQ, BM, ME, and JZ) had the highest f 3 statistics comparing to the other populations, consistent with the PCA results. Interestingly, the genetic diversity of the island and mountain populations was lower than the other populations (table 1), suggesting that the high genetic diversity of the other populations may be the result of admixture. Next, we constructed a neighbor-joining tree using A. mellifera as an outgroup. Based on the results of this analysis, the grouping of populations reflected their geographical locations from the north to the south ( fig. 1D). Genetic Differentiation We calculated pairwise F ST between the populations to quantify their genetic differentiation (table 2). Pairwise F ST ranged from 0.008 to 0.228, with an average of 0.072, consistent with an overall structured population and heterogeneity in gene flow. F ST between the more isolated populations (HK, BM, ME, DQ, and JZ) ranged from 0.099 to 0.228, with an average of 0.162. Compared with A. mellifera, the level of genetic differentiation in A. cerana is higher than among the subspecies of A. mellifera (average F ST ¼ 0.10) (Wallberg et al. 2014), but lower than the pairwise F ST among A. mellifera lineages (Harpur et al. 2014;Wallberg et al. 2014;Cridland et al. 2017). This result indicates a subspecies level of divergence in these populations. Population Genomics of the Eastern Honey Bee . doi:10.1093/molbev/msy130 MBE To further test the relationship among the more isolated A. cerana populations, we included representative subspecies from lineages of A. mellifera (Harpur et al. 2014;Chen et al. 2016) and calculated time of divergence using MCMCTree program in the PAML package (Yang 2007). The results suggest an ancient split between the populations ( fig. 2A). The divergence time between the branches ranged from 300 to 500 Ka, which is significantly earlier than the time of divergence between A. 
mellifera subspecies (20-35 Ka) and comparable to divergence time among lineages (150-300 Ka) (Ruttner 1988; Franck et al. 2001; Wallberg et al. 2014). Times of divergence of the A. cerana populations are consistent with a subspecies level of differentiation. To explore the influence of geographic distance on divergence, we performed the Mantel test to test the association between the geographic distance matrix and the pairwise F ST matrix obtained above, using the R package ade4 (Dray and Dufour 2007). We only identified a weak association between the geographic and genetic distances (P = 0.0837). To reduce the effect of mountains and the Qiong Zhou channel that separate HK from the mainland populations, we removed the island and mountain populations and repeated the Mantel test, but did not observe an association between geographic and genetic distances (P = 0.1521). In addition to population level differentiation, we also examined the genetic relatedness at the individual level by calculating allele sharing distances among all individuals with PLINK (Purcell et al. 2007), and constructed a network using the NetView pipeline (Neuditschko et al. 2012) (fig. 2B). Individuals are represented by nodes and individual relationships between bees are represented by edges connecting the nodes. The patterns revealed by NetView corroborated the PCA results. Some individuals from distantly located populations are connected (such as individuals in AK and NY), indicating a close relationship among these individuals. In contrast, individuals from geographically close populations are not necessarily connected (MX and JZ individuals, for example), indicating substantial differentiation between these individuals. The NetView results further support the results of the Mantel test showing that the correlation between genetic distances and geographic distances was not significant. These results of population structure and Mantel tests collectively indicate that strong barriers may have a greater influence on population differentiation in A. cerana than physical distance. For example, the pairwise F ST value was high for adjacent populations with barriers such as ME and JZ (277 km, F ST = 0.16), and ME and SN (333 km, F ST = 0.11), and was low for distant but connected populations, such as between XS in the tropical region (south) and YL in the temperate region (north, 1,875 km, F ST = 0.06).
Demographic History
We inferred the demographic history of the A. cerana populations to obtain a better understanding of the evolutionary history of this species. We estimated the historical effective population sizes of the populations using SMC++ (Terhorst et al. 2017). Compared with MSMC (Schiffels and Durbin 2014), SMC++ provides more accurate estimates for the recent past, and does not require phasing of the genomic data, thus avoiding the problem of phasing errors for populations that lack a suitable reference panel as is the case in this study (Terhorst et al. 2017). The effective population size history of different A. cerana populations showed a similar pattern (fig. 3A). Surprisingly, changes in effective population size aligned well with changes in the historical global temperature. The effective population size peaked during Marine Isotope Stage 5 (MIS5, ~80-130 Ka) (Lisiecki and Raymo 2005; Jouzel et al. 2007), the last major interglacial stage in history. During MIS5, the effective population size reached a local minimum during the colder substage MIS5b and recovered during the warmer substage MIS5a.
After MIS5, effective population sizes gradually decreased during MIS4, until the beginning of the Holocene (11.7 Ka to the present), when the population started to increase dramatically. The Holocene is an interglacial stage in the current ice age, and the increase in effective population sizes in the warm periods suggested that an elevated global temperature may be advantageous for A. cerana populations. Notably, the effective population sizes of BM and XS experienced a temporary decline during ~1-1.7 Ka, while DQ showed a slower increase compared with the other populations during the same period (fig. 3B). The reason for the decline/slow growth is not clear; however, as BM, DQ, and XS represent the southwest part of China, some local events may have influenced the honey bee population in these areas. After the disturbance, the effective population sizes exhibited a rapid recovery. We also estimated historical gene flow by calculating the relative cross coalescence rate (CCR) between all pairs of geographically adjacent populations using MSMC (Schiffels and Durbin 2014). The nonisolated populations showed pervasive gene flow in history, while the island and mountain populations showed little or no historical gene flow.
Candidate Genomic Regions under Natural Selection
A steep spatial gradient of allele frequencies in continuous populations may indicate regions under selection and has been used to identify loci that were not detectable using other methods (Yang et al. 2012; supplementary fig. S6, Supplementary Material online). In A. mellifera, the Hippo signaling pathway has been suggested to be important for the adaptation to cold climate (Chen et al. 2016). The involvement of the Hippo signaling pathway in the cold adaptation of both A. mellifera and A. cerana indicates common mechanisms of cold tolerance in honey bees, and suggests convergent evolution between the two species. However, the roles of the candidate genes require further validation. For the enriched GO terms, we further used the REVIGO tool (Supek et al. 2011) to summarize the nonredundant terms (supplementary tables S7-S9, Supplementary Material online). The top enriched category in biological process is "response to stimulus," as well as other terms such as "response to chemical" and "defense response," suggesting the importance of the processing of external signals in the honey bee's adaptation to different environments. The enriched KEGG pathways of phototransduction and neuroactive ligand-receptor interaction further indicate that these interactions may be related to the high capacity of the A. cerana perception and cognitive systems. A. cerana has been shown to perform significantly better in learning and memory than A. mellifera in terms of color and grating patterns (Qin et al. 2012). Selected genes in the pathways include Nmdar1, which plays a key role in learning and memory (Xia et al. 2005), inaC, which encodes the eye protein kinase C and is required for inactivation of photoreceptor cells and light adaptation (Hardie et al. 1993), and ninaC, which is required to maintain the stability of inaC (Venkatachalam et al. 2010). Further categorization of the related variants may yield insights into mechanisms underlying honey bee perception and cognition. We further identified selective sweeps in the genome using 16 populations (excluding MY and QY) by calculating Tajima's D and F ST using VCFtools (Danecek et al. 2011), and the composite likelihood ratio (CLR) using SweepFinder2 (DeGiorgio et al.
2016) along sliding windows; we identified 13.9, 40.3, and 13.6 Mb of regions under selective sweep, respectively. Genes overlapping the sweep regions were selected, among which 1,591 genes were identified by at least two methods and were considered as genes under selective sweep. We performed a gene set enrichment test of KEGG pathways and GO categories, and identified significantly enriched categories (supplementary table S10, Supplementary Material online). The enriched pathways include several signaling pathways, among them the Wnt signaling pathway, the Hedgehog signaling pathway, the Jak-STAT signaling pathway, and the Notch signaling pathway, which have been shown to be involved in ovary activity (Duncan et al. 2016; Chen et al. 2017). The enrichment of selected genes in different pathways suggests the importance of reaction to external cellular signals in the adaptation of A. cerana. We further identified genes under balancing selection using the top genes with positive Tajima's D values. Enrichment tests revealed that GO categories related to olfactory and sensory processing were highly enriched (fig. 4), including "olfactory receptor activity," "odorant binding," "sensory perception of smell," and "sensory perception of chemical stimulus." As balancing selection maintains the diversity of alleles, diversity in olfactory genes may be important for honey bee survival. The olfactory system is involved in communication, recognition of parasites, and foraging, and a high diversity of olfactory genes may be beneficial for a colony to cope with various internal and external chemical signals, but further investigations are required.
Discussion
Variable climates have profound effects on the genetic diversity of a species. Analyses at the genomic level not only provide detailed information about the structure, dynamics, and adaptation to a variety of climates but also facilitate protection and selective breeding. This first study of the population genomics of A. cerana showed that populations in China harbor considerable genetic diversity, with a divergence at the subspecies level or higher between HK, BM, DQ, ME, and possibly YL and JZ populations. Landscape factors such as mountains and channels, rather than physical distances, appear to play an important role in population differentiation. Analyses of the population history suggest that the historical global temperature had a substantial influence on the effective population size, with a warmer climate facilitating population growth. Genes related to adaptation to local environments were identified. Previously, understanding of the differentiation of A. cerana was quite limited. The most recent comprehensive study was based on morphometric characters (Radloff et al. 2010), which identified six main morphoclusters, and showed that most of China belongs to morphocluster I. Other studies have categorized several ecotypes of A. cerana in China but conflicting results were reported (Gong 2000; Yang 2001). Currently, the most generally accepted hypothesis is that nine A. cerana ecotypes are present in China, including Hainan, Yungui Plateau, Tibet, Aba, Changbai mountains, south Yunnan, north China, south China, and central China (The National Animal Genetic Resources Committee 2011). However, these studies mainly relied on the results obtained using no more than 12 morphological characters that were traditionally developed for A. mellifera, and therefore may be missing some crucially different characters when applied to A. cerana (Meixner et al. 2013).
Researchers have attempted to use the mitochondrial tRNAleu-COII region, which was also developed for characterization of A. mellifera populations (De La R ua et al. 2000;Smith et al. 2000;Tan et al. 2006Tan et al. , 2007Tan et al. , 2015Zhao et al. 2014); however, these studies only provided limited information about A. cerana due to the short fragment of A. cerana tRNAleu-COII. In contrast, our genome resequencing approach provided 2.67 million polymorphic sites distributed throughout the genome, with which fine scale population structures can be obtained. This study may serve as a useful reference in future studies. MBE Analyses of population structures and connectivity showed that populations analyzed in this study belonged to one of two categories: the more physically isolated populations located in islands or mountainous areas and the connected populations. The more isolated populations, namely, HK, BM, DQ, ME, and JZ, exhibited high differentiation from the other populations and lower genetic diversity. Conversely, the less isolated populations showed less differentiation, higher gene flow and higher genetic diversity. MY and QY showed inconsistent individual genetic compositions, and these differences may be attributed to human activities that introduced nonlocal populations (Zhang 2012), which potentially impaired the local genetic integrity. The results of the Mantel tests are consistent with the results of small-scale studies using microsatellite markers (Yin et al. 2008;Ji et al. 2009). Our results identified a relationship among physical barriers, genetic differentiation, gene flow, and local genetic diversity. Barriers rather than physical distances appeared to exert a substantial effect on population differentiation, based on both the population structure and the results of Mantel tests. This observation may be the result of the rapid migration of honey bees that promotes gene flow among populations in the absence of physical barriers. The potential of honey bees for rapid migration was best illustrated by the invasive A. cerana in Australia (Gloag et al. 2016) and the Africanized Honey Bees in the American continents, which spread from Brazil to the southwestern United States within a few decades (Winston 1992;Schneider et al. 2004), averaging a speed of >250 km/year. The rapid migration of honey bees in a landscape without strong physical barriers may promote gene flow and reduces genetic differentiation. In addition, population expansion/growth may facilitate the gene flow between populations as illustrated by the high relative CCR during the period of rapid population growth ( fig. 3B). The high genetic diversity observed in the connected populations is another possible result of high gene flow. In A. mellifera, human mediated gene flow has resulted in a higher genetic diversity in managed honey bee colonies (Harpur et al. 2012;Harpur, Minaei, et al. 2013). Genetic variability is important for a population to survive in a changing climate. For honey bees, high genetic diversity at the colony level can increase the fitness of colonies, as colonies that display higher diversity exhibit enhanced homeostasis (Jones et al. 2004;Oldroyd and Fewell 2007), are more productive (Oldroyd et al. 1992;Mattila and Seeley 2007) and are more resistant to diseases (Tarpy 2003). In addition, complementary sex determination of honey bees confers a high genetic load in populations that lack genetic diversity at the sex-determining locus (Harpur, Sobhani, et al. 2013). 
Consequently, factors that reduce gene flow, such as habitat fragmentation, may result in a reduced local genetic variability and colony fitness. In China, due to human activities such as urbanization, expansion of agricultural lands, and the invasion of A. mellifera, the distribution of the A. cerana population has been reduced by over 75% (Yang 2005). A. cerana populations has not only disappeared in many areas but the genetic variability in the remaining areas may also be at risk due to serious habitat fragmentation. Conservation efforts should consider population connectivity, and corridors of habitats may be important for the effective conservation of genetic diversity. However, high diversity due to human mediated gene flow is not necessarily beneficial because it can result in a loss of genetic integrity and adaptive alleles to the local environment, and thus the usage of endemic populations for apiculture should be promoted (De la R ua et al. 2013;Harpur, Minaei, et al. 2013). Human mediated gene flow can also facilitate the spread of diseases. An outbreak of Sacbrood virus disease originated from QY in the year after the introduction of A. cerana from southern provinces into QY (Zhang 2012). The introduction of nonlocal populations must be prohibited to protect local A. cerana populations. According to our data, differentiation among the more isolated populations (ecotypes) is occurred at the subspecies level, based on both the F ST and divergence time analyses. F ST for the more isolated A. cerana populations averaged 0.162, ranging from 0.099 to 0.228. In comparison, F ST between A. mellifera subspecies from the same lineage ranged from 0.05 to 0.15, with an average of 0.10 (Wallberg et al. 2014). F ST between different lineages of A. mellifera range from 0.20 to 0.56 as estimated by Wallberg et al. (2014), 0.324 to 0.540 as estimated by Harpur et al. (2014), and 0.134 to 0.423 as estimated by Cridland et al. (2017). Overall, F ST for the more isolated populations in our study indicate differentiation at the subspecies level, corresponding to the differentiation level of more distantly related subspecies in A. mellifera. The divergence time also suggest early differentiation among the populations between 300 and 500 Ka ( fig. 2A), which is comparable to the time of divergence between different A. mellifera lineages. Therefore, differentiation among the more isolated A. cerana populations in our study occurred at the subspecies level. In general, a species can survive climate changes in three ways: migration, plasticity, and adaptive evolution (Williams et al. 2008). A. cerana shows the potential to utilize all three strategies. Firstly, as discussed earlier, honey bees showed rapid dispersal across habitats to track a suitable climate space. However, researchers have not clearly determined how habitat fragmentation affects this ability. Based on our results, populations in islands and mountainous areas have low/no gene flow, indicating that migration may not be an option for these populations. Secondly, the eusocial honey bees may have high plasticity because they can maintain inner nest homeostasis to counteract change in the external environments (Seeley 1985;Schmickl and Crailsheim 2004). Thirdly, A. cerana may have a high intrinsic capacity to adapt to future climates through high genetic diversity. Finally, according to the population history, warmer climates may be beneficial for A. 
cerana, as the effective population size historically increased during warmer periods, and showed a rapid increase in the recent millennia. Historical population sizes also indicate the resilience of A. cerana, as rapid recovery of populations was observed after the disturbance between 1 and 1.7 Ka in southwestern China. In summary, as a species with a wide range, A. cerana shows good potential to cope with climate change as a whole, but some populations may face greater challenges than others. Moreover, global warming is only one of the many stresses that honey bees are currently facing, and bees are also vulnerable to stresses such as pesticides, pollutants, pathogens, parasites, and limited floral resources (Goulson et al. 2015; Klein et al. 2017). We have identified genes that may be related to the adaptation to temperature or other environmental factors. These genes can be potential candidates for further exploration of the underlying mechanisms of the important traits and will be useful for conservation of honey bees in the presence of various challenges.
Conclusions
Our study takes advantage of the widely distributed A. cerana species and provides insights into its differentiation, adaptation, and history using population genomic approaches. A. cerana exhibits high genetic diversity, and physical barriers rather than distance are the driving factor of divergence in this highly migratory species. Our analyses highlight the role of historical global climates in the population dynamics of A. cerana, with warm climates favoring population growth. We identified adaptive genes along environmental gradients and under other stresses. Our results provide insights into the adaptation of A. cerana to diverse environments and advance our understanding of its vulnerability to climate change.
Materials and Methods
Sampling
A. cerana samples were collected in 18 locations in China (fig. 1A and supplementary table S1, Supplementary Material online). Bees from ten colonies were collected from each location, and one worker bee was randomly chosen from each colony for further sequencing.
Sequencing, Read Mapping, and Quality Control
For each sample, we prepared a paired-end library with high-quality DNA and sequenced the DNA on the Illumina HiSeq 2500 sequencing platform following the standard procedures. We also downloaded sequences for A. m. sinisxinyuan as an outgroup (Chen et al. 2016) and sequences for A. m. mellifera, A. m. jementica, A. m. carnica, and A. m. scutellata (Harpur et al. 2014) to calculate the divergence time. We used a custom Perl script to filter out low-quality read pairs, including reads with >50% of low-quality bases (Q ≤ 5) in either of the paired reads, and reads containing >10% Ns. Clean reads were then mapped to the reference genome of A. cerana (Park et al. 2015) using the BWA-MEM aligner with the option "-t 4 -k 32 -M." The resulting BAM files were sorted using SAMtools and duplicated reads were removed. For variant calling, we first used SAMtools to collect summary information from the BAM files and compute the likelihood of possible genotypes, and called variants using the call function in BCFtools. The reference genome was subdivided into segments and analyzed in parallel. Raw SNPs were then filtered based on quality scores and SNPs with a quality score <20 were removed.
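To make the mapping and variant-calling steps above more concrete, the following minimal sketch strings together the same tools: BWA-MEM with the quoted options, coordinate sorting with SAMtools, and a bcftools-based calling step followed by the quality filter. File names are hypothetical, the duplicate-removal step is only indicated by a comment, and the commands use the current bcftools interface, which folds the older samtools mpileup step into bcftools mpileup; it should therefore be read as an approximation of the published pipeline rather than a reproduction of it.

    import subprocess

    REF = "apis_cerana_ref.fa"                     # hypothetical reference genome file
    R1, R2 = "sample_R1.fq.gz", "sample_R2.fq.gz"  # hypothetical cleaned read pairs

    # Map reads with the options quoted above and coordinate-sort the output.
    subprocess.run(
        f"bwa mem -t 4 -k 32 -M {REF} {R1} {R2} | samtools sort -o sample.sorted.bam -",
        shell=True, check=True,
    )
    # (Duplicate removal would follow here, as described in the text.)

    # Compute genotype likelihoods and call variants.
    subprocess.run(
        f"bcftools mpileup -f {REF} sample.sorted.bam | bcftools call -mv -Oz -o sample.vcf.gz",
        shell=True, check=True,
    )
    # Drop raw SNPs with quality below 20, mirroring the filter described above;
    # the coverage- and spacing-based filters described next would be applied afterwards.
    subprocess.run(
        "bcftools view -e 'QUAL<20' sample.vcf.gz -Oz -o sample.q20.vcf.gz",
        shell=True, check=True,
    )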
Furthermore, a SNP was discarded if the total coverage of the SNP was less than one-third or greater than five times the overall coverage. If two SNPs were <5 bp apart, both SNPs were removed.
Genetic Diversity and Differentiation
The inbreeding coefficient (F), expected heterozygosity (He), and observed heterozygosity (Ho) were calculated using PLINK (Purcell et al. 2007). The minor allele frequency (MAF) and the proportion of polymorphic SNPs (Pn) were calculated using a custom Perl script. Pairwise F ST values were calculated using the SNPRelate package (Zheng et al. 2012). We then performed Mantel tests with the F ST matrixes and distance matrixes using the ade4 package (Dray and Dufour 2007); an illustrative sketch of this test is given after this Materials and Methods passage.
Population Structure
We used ADMIXTURE (Alexander et al. 2009) to investigate the population structure, with coancestry clusters ranging from two to seven. The principal component analysis was performed using GCTA software (Yang et al. 2011). Furthermore, we calculated the genome-wide allele sharing distance using PLINK (Purcell et al. 2007), and constructed a network using the procedure described in the NetView pipeline (Neuditschko et al. 2012). We also calculated f3 statistics for each possible combination of three populations using the threepop program in the TreeMix package (Pickrell and Pritchard 2012). A neighbor-joining phylogenetic tree was constructed using TreeBest software (http://treesoft.sourceforge.net/treebest.shtml), with A. mellifera as an outgroup. We included the QY and MY populations in the analyses of population structure; however, due to the presence of questionable samples, we excluded QY and MY from analyses of population history and genes under selective sweep.
Population History
To estimate divergence time between populations, we first obtained single copy genes in the A. cerana and A. mellifera genomes based on the result of blastp (E-value cutoff of 1 × 10^-7) and treefam clustering (Li et al. 2006), and obtained 5,854 single copy genes. Next, we constructed a supergene for each population using putative sequences of coding regions for single copy genes with SNPs, and calculated the divergence time using the MCMCtree package in PAML 4.5 software (Yang 2007) with the HKY85 model and two calibrations: 1) a divergence time of 6 to 8 Ma between A. mellifera and A. cerana (Arias and Sheppard 1996), and 2) a divergence time of 0.03 to 0.165 Ma between A. m. sinisxinyuan and A. m. mellifera (Chen et al. 2016). We used SMC++ (Terhorst et al. 2017) to estimate the historical effective population size with a mutation rate set to 5.3 × 10^-9 (Wallberg et al. 2014) and a polarization error of 0.5 as the identity of the ancestral allele is not known. To estimate historical gene flow between populations, we first performed haplotype phasing using the SHAPEIT2 package (https://mathgen.stats.ox.ac.uk/genetics_software/shapeit/shapeit.html; last accessed February 20, 2017) and calculated the relative cross coalescence rate using MSMC based on eight haplotypes (four haplotypes for each population) (Schiffels and Durbin 2014) for each pair of populations.
Genes under Selective Sweep
We used SPA software (Yang et al. 2012) to identify SNPs with extreme frequency gradients. A subset of data including YL, GY, AK, SN, NY, and XS was obtained from the original VCF file. Loci were first pruned for linkage disequilibrium using PLINK (--indep 50 5 1.1) (Purcell et al. 2007) and then filtered to only retain biallelic loci as required by SPA.
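The Mantel tests referred to above compare two pairwise distance matrices, here geographic distances and pairwise F ST, by permuting population labels. The sketch below is a minimal NumPy illustration combining a common F ST estimator (Hudson's ratio-of-averages form) with a permutation Mantel test. The study itself used SNPRelate and ade4, so this is an illustrative stand-in rather than the authors' code, and the allele frequencies, sample sizes, and coordinates are hypothetical.

    import numpy as np

    def hudson_fst(p1, p2, n1, n2):
        # Multi-locus Hudson F_ST (ratio of averages) from per-SNP sample allele
        # frequencies p1, p2 and numbers of sampled allele copies n1, n2.
        p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
        num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
        den = p1 * (1 - p2) + p2 * (1 - p1)
        return num.sum() / den.sum()

    def mantel(dist_a, dist_b, n_perm=9999, seed=0):
        # One-sided permutation Mantel test between two square distance matrices.
        a, b = np.asarray(dist_a, float), np.asarray(dist_b, float)
        n = a.shape[0]
        iu = np.triu_indices(n, k=1)        # each pair of populations counted once
        x, y = a[iu], b[iu]
        r_obs = np.corrcoef(x, y)[0, 1]
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_perm):
            perm = rng.permutation(n)       # relabel populations jointly in rows and columns
            if np.corrcoef(x, b[np.ix_(perm, perm)][iu])[0, 1] >= r_obs:
                hits += 1
        return r_obs, (hits + 1) / (n_perm + 1)

    # Hypothetical example: five populations, 200 SNPs, 10 diploid bees per population.
    rng = np.random.default_rng(1)
    freqs = rng.uniform(0.05, 0.95, size=(5, 200))
    n_copies = 20
    fst = np.zeros((5, 5))
    for i in range(5):
        for j in range(i + 1, 5):
            fst[i, j] = fst[j, i] = hudson_fst(freqs[i], freqs[j], n_copies, n_copies)

    coords = rng.uniform(0, 1000, size=(5, 2))                       # hypothetical coordinates (km)
    geo = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    print(mantel(geo, fst))                                          # (correlation, permutation p-value)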
The SPA analysis was performed under an unsupervised scenario. F ST and Tajima's D statistics for sliding windows were calculated using VCFtools (Danecek et al. 2011), with a window size of 10, 000. The top 10% of windows from the distribution of F ST and the negative end of Tajima's D values were selected as regions under selective sweep. CLRs were calculated using SweepFinder2 (DeGiorgio et al. 2016) with a window size of 200 bp, and the top 10% of windows were selected as sweep region. Genes overlapping the sweep regions were identified. Additionally, the top 10% of windows from the positive end of Tajima's D values were identified, and genes overlapping the windows were categorized as genes under balancing selection. To perform enrichment analysis, we first assigned genes to GO terms and KEGG pathways based on their orthologues in the fruit fly (Drosophila melanogaster). The enrichment of selected genes in KEGG pathways was performed using the KOBAS system (Mao et al. 2005). Multiple comparisons were corrected using the FDR method. We used the REVIGO tool (Supek et al. 2011) to summarize nonredundant GO terms, with an allowed similarity of 0.4 based on the D. melanogaster database. Supplementary Material Supplementary data are available at Molecular Biology and Evolution online.
Effects of lateral episiotomy on the emergence of urinary incontinence during the first postpartum year in primiparas: prospective cohort study Aim of the study Lateral episiotomy is a widely used procedure, although it is rarely mentioned in the literature and its effects on the pelvic floor are largely unexplored. The purpose of this study is to evaluate the impact of lateral episiotomy on the incidence of urinary incontinence (UI) after vaginal delivery in primiparas. Material and methods The study design is a prospective cohort study. The primiparas were divided into two groups. The first group consisted of women who gave birth with lateral episiotomy, while the second group included women who gave birth with an intact perineum or with perineal tears of first and second degree. Assessments of UI were performed at 5 and 8 months after childbirth using the International Consultation on Incontinence Questionnaire – Short Form (ICIQ-SF) questionnaire followed by the stress test. Results The results revealed no significant differences (p > 0.05) in emergence of stress urinary incontinence (SUI) between the groups at the two time points. There were no statistically significant differences in overall rate of UI, urge urinary incontinence (UUI), or mixed urinary incontinence according to the ICIQ-SF questionnaire. The overall incontinence rate on the first examination was 24% in the episiotomy group and 36% in the perineal laceration group, although the difference was not statistically significant (p = 0.064). On the second examination, rates were similar and without a statistically significant difference. Conclusions Lateral episiotomy has a neutral effect on the onset of UI in primiparous women in the first year after delivery. Introduction Episiotomy is an obstetrical procedure that extends the vaginal vestibule during fetal expulsion. It is done to avoid severe perineal trauma and to decrease the risk of traumatic delivery for the fetus [1]. Episiotomy was widely used for many years in a routine manner and was considered safe and reliable. The first report that challenged these beliefs was published in the 1980s, claiming that episiotomy had no benefits for the pelvic floor [2]. Moreover, large studies suggested that routine implementation of episiotomy should be abandoned [3][4][5][6][7]. In addition, available data largely supported the theory of the negative [8,9], or neutral effect [10,11] of different types of episiotomies on the onset of urinary incontinence (UI). Since the effects of episiotomy on women's wellbeing remain controversial, it is important to use this operative procedure with great attention and with indication. Nowadays, the use of a restrictive approach of episiotomy, according to the obstetric in dication, is recommended [6,7]. Lateral episiotomy is a widely used technique but is rarely mentioned in the literature and its effect on the pelvic floor is largely un explored. This type of episiotomy is characterized by an incision, starting 1-2 cm from the posterior vaginal joint in the direction of the ischial tuberosity [11][12][13]. Most of the scientific papers deal with median and mediolateral episiotomies. The etiology of UI is multifactorial and not com pletely understood, although it is known that obesity, aging, and obstetric trauma are probably the most important risk factors [14]. Thom et al. reported a range of relative risk (RR) from 1.3 to 4.6 for the occurrence of UI in parous women as compared to nulliparous wom en [15,16]. 
It has been shown that most UI with onset in pregnancy or in the puerperium will spontaneously withdraw through several months after delivery due to functional recovery occurring in younger women [16]. On the other hand, in some patients, familial tendency for connective tissue weakness is an additional causal factor for stress urinary incontinence (SUI) onset [17]. It is well known that UI is often associated with vaginal birth. The situation of delivering a hypertrophic new born, especially in combination with prolonged time at the pelvic outlet, could result in severe stretching or damage to the levator ani muscles and parts of the en dopelvic fascia, as well as injury to the pudendal nerve [18,19]. The prevalence of urinary continence disorders increases with age. Rates rise from 20 to 30% in young woman to 30 to 40% in middle age and 30 to 50% in the third age [20]. Stress urinary incontinence is the most common type of incontinence in younger women with an incidence peak between years 45 and 49 [21,22]. The aim of our study is to evaluate the impact of lateral episiotomy on UI frequency after vaginal delivery in primiparas during the first postpartum year. Study design and inclusion/exclusion criteria We conducted a prospective cohort study at the Department of Obstetrics and Gynecology, "Sveti Duh" University Hospital, Zagreb from February 2016 to Oc tober 2017 as part of the clinical trial "Effects of lateral episiotomy on the function of pelvic floor and sexual function after vaginal delivery in primiparas", which was previously registered on Australian New Zealand Clinical Trials Registry (ANZCTR). The research was approved by the Ethics Committee of the "Sveti Duh" University Hospital. All patients signed a document approved by our institution for the anonymous use of their clinical data for scientific purposes according to the European privacy law. The puerperas were divided into two groups: -women who gave birth with lateral episiotomy, -women who gave birth with an intact perineum or with perineal tears of first and second degree. The inclusion criteria were as follows: primiparity, singleton pregnancies and spontaneous onset of deliv ery. Restrictive lateral episiotomy was used with indica tion. Exclusion criteria were: cesarean delivery, third and fourthdegree perineal tears, preterm delivery, breech presentation, instrumental delivery, multiparity, multi ple pregnancies, fetal head rotation abnormalities during delivery, deflection of the fetal head, preexisting dyspa reunia, incontinence of urine and stool during pregnan cy, positive personal and family history of pelvic floor dysfunction and any surgery performed in the pelvis before the current pregnancy. Participants who became pregnant during the study or were doing exercises of pelvic floor muscles after delivery until the first pelvic examination were excluded from further research. Out of a total of 400 informed female respondents, sampling of female respondents was conducted and 100 women (n1 = 100, n2 = 100) were included in each group accord ing to the results of the test power analysis. Data collection The participants were called 5 and 8 months after the delivery for a pelvic floor function evaluation. The collected data were entered in the appropriate form and were related to childbirth and general characteristics of the population in both groups. 
The questionnaire
The International Consultation on Incontinence Questionnaire - Urinary Incontinence - Short Form (ICIQ-UI SF) was used to assess the degree of UI and its impact on daily life [23]. The questionnaires covered a retrograde period of 4 weeks prior to arrival for evaluation (5 and 8 months after birth). The presence and type of symptomatic UI were defined according to the answer to question number 6 of the ICIQ-UI SF. The cough stress test was used for more precise investigation of SUI. Since the investigation included a younger population of women with recent delivery, the test was performed by a modified method. The bladder itself was not filled with 200-250 ml of saline by a catheter; instead, the subjects were told to drink up to 1 liter of water 1 hour before the examination and not to empty the bladder before the examination. Before the stress test was performed, the bladder volume was assessed using the 2D ellipse ultrasound method (volume = length x width x height x 0.52), and the test was only performed at measured volumes of more than 200 ml.
Statistical analysis
The statistical analysis was performed with the SPSS version 25.0 software (www.ibm.com). An analysis of the normality of data distribution was made. Quantitative data are presented through medians and interquartile ranges. The categorical data are presented through absolute frequencies and associated proportions. Differences in quantitative values between groups were evaluated by the Mann-Whitney U test. Differences in categorical variables between the studied groups were analyzed by the χ2 test. All p-values less than 0.05 were considered significant.
Results
The general, anthropometric and obstetric characteristics of the participants by groups are shown in Tables 1 and 2. There were no statistically significant differences between groups except for the gestational age variable, where there was a significant difference (40.07 weeks of gestation vs. 39.86, p = 0.046). In the episiotomy group after 5 months, 46 women breastfed (46%), and in the comparative group of perineal ruptures, 47 women (47%) breastfed, which was a statistically insignificant difference (χ2 = 0.020, p = 0.887). The initial test for assessment of SUI in both groups was a stress test, which showed no differences between the groups. In the episiotomy group, 16% of examinees had a positive result of the stress test at the first examination (5 months after delivery), which is comparable to 20% in the perineal tears group (χ2 = 0.542, p = 0.462). At the second examination, which took place 3 months after the first evaluation, there were 13.5% positive findings in the episiotomy group and 14.9% in the perineal tears group (χ2 = 0.071, p = 0.790) (Table 3). Symptomatic incontinence was classified into specific types of incontinence by the groups studied, based on the answer to question 6 of the ICIQ-UI SF. The first examination revealed that symptomatic SUI was present in 7% of examinees in the episiotomy group and in 16% of women in the perineal tears group (χ2 = 3.9, p = 0.05). Urge urinary incontinence (UUI) symptoms were present in 10% of examinees in the episiotomy group as compared to 16% in the perineal tears group (χ2 = 1.592, p = 0.207). Symptoms of mixed UI were present in both groups with a similar rate of 2%. On the other hand, other nonspecific incontinence symptoms were present in the episiotomy group at 5%, as compared to 2% in the perineal tears group (χ2 = 1.322, p = 0.248).
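As a quick check of the kind of comparison reported above, the snippet below recomputes the first-examination stress test contrast (16 of 100 positive in the episiotomy group versus 20 of 100 in the perineal tears group) with a Pearson chi-square test; disabling the Yates continuity correction yields values matching those quoted in the text. This is only an illustrative recomputation, not the authors' SPSS output.

    from scipy.stats import chi2_contingency

    # 2 x 2 table: positive / negative stress tests at 5 months after delivery
    table = [[16, 84],   # episiotomy group
             [20, 80]]   # perineal tears group

    chi2, p, dof, expected = chi2_contingency(table, correction=False)  # plain Pearson chi-square
    print(round(chi2, 3), round(p, 3))   # approximately 0.542 and 0.462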
The overall incontinence rate was 24% in the episiotomy group and 36% in the perineal tears group (χ2 = 3.429, p = 0.064) (Table 4). At the second examination, all symptoms of different types of incontinence were found in both groups with similar prevalence (Table 4). Symptoms of SUI were found in 9.4% of women in the episiotomy group and 18.1% in the perineal tears group (χ2 = 3.051, p = 0.081). Symptoms of UUI were found with comparable prevalence in both groups as well (9.4% vs. 8.5%). The overall incontinence rate evaluated at the second examination was 26% in the episiotomy group and 29.8% in the perineal tears group (χ2 = 0.331, p = 0.565) (Table 4). The median total ICIQ-UI SF scores in the episiotomy group and in the perineal tears group were not statistically significantly different at the first (U = 4508.5, p = 0.138) or second examination (U = 4420.0, p = 0.759).
Discussion
According to the results of this study, lateral episiotomy in childbirth of primiparous women does not have a protective effect on the possible occurrence of urinary incontinence during the first year after childbirth. There is also no adverse effect of lateral episiotomy on the investigated pelvic floor function [24,25]. The incidence of SUI as well as of other types of UI was the same in the lateral episiotomy group as in the perineal tears group. Lateral episiotomy did not increase the risk of developing SUI at 5 and 8 months postpartum. The frequency of UI was the same in the lateral episiotomy group as in the lesser perineal lacerations and intact perineum group. No substantial associations between episiotomy and UI were found, which is in accordance with the majority of research on other types of episiotomy. Pelvic floor dysfunction is related to childbirth to a certain extent, although childbirth itself does not appear to be the only influencing factor [9,26,27]. It has not been proven that collagen weakness is the most important factor in the pathogenesis of postpartum UI [28]. Although pelvic floor muscle denervation could be an etiological factor in the development of SUI, which is recorded in more than half of women after vaginal delivery, it has been shown to recover in the first year after delivery in most cases [29,30]. A study by Wesnes et al. indicated that being incontinent during pregnancy increases the odds ratio of being incontinent six months after delivery by 3.5 times in a population of primiparous women [31]. In order to better explore the effect of delivery on pelvic floor dysfunctions, in our study we enrolled only primiparous women who had been continent throughout their pregnancy. We already knew that midline and mediolateral episiotomy do not protect against the onset of UI [3,5], but a similar level of evidence on the use of lateral episiotomy has not been published. Moreover, there are no studies of the midterm effects of lateral episiotomy on the incidence of UI in the available databases. Therefore, we tried to interpret our results in comparison with other types of episiotomy. Sartore et al. found that mediolateral episiotomy was related to lower pelvic muscle strength 3 months after vaginal delivery [11], which may be associated with emergence of SUI.
Our results showed similar in cidence of SUI in both study groups, suggesting that the time period of 5 and 8 months postpartum could be the optimal period for evaluation of the postpartum pelvic floor when restoration of functions is expected to be completed. This is important to note, since certain studies indicate that the symptoms will be present long term if there is no improvement in SUI 3 months after the delivery [9]. On the one hand, our results indicated high percent ages of UI during the first postpartum year, confirm ing some previous reports on high rates of postpartum UI [28]. On the other hand, our results also revealed that women with an episiotomy had similar UI rates and scores at 5 and 8 months postpartum as well as women with no episiotomy, i.e., with perineal tears of lesser degree, suggesting that lateral episiotomy could not be a risk factor for UI occurrence after delivery. Sim ilar results were reported by Sartore and Karacam, who found no significant difference in UI rates 3 months af ter delivery between women who had mediolateral epi siotomy and those who delivered their babies without episiotomy [11,32]. The percentage of women with SUI was, accord ing to the diagnostic test (symptoms or stress test), in the episiotomy group 7 (16%), and in the perineal rup ture group 16 (20%) after 5 months. In a similar study by Sartore et al., 3 months postpartum, SUI rates of 12.9% for episiotomy and 12.1% for intact perineum and lesser perineal tears were noted, but the study was performed within the framework of mediolateral epi siotomy [11]. Our results showed similar incidence of UI in general and SUI in the primiparous women after vaginal delivery as compared to other published results [11,33]. Incidence of UUI is comparable to certain prev alence studies as well (4 months postpartum -12% in all vaginal births regardless of parity and mode of deliv ery) [34,35]. The study by Baydock showed that the risk of UUI is significantly higher in women with episiotomy (RR 1.9; 95% CI: 1.2-2.9, p < 0.01), which was not con firmed in our study, since the incidence of UUI was similar regardless of performed episiotomy. Arrue et al. reported a general rate of SUI of 15.1% 6 months after the vaginal birth in the primiparas, although the study was not detailed about the mode of vaginal delivery and with restrictive use of mediolateral episiotomy [36]. Episiotomy itself was not a risk factor for UI, while the most important risk factor for postpartum SUI was the occurrence of UI in pregnancy. Epidural analgesia did not affect the onset of UI in medium term, regardless of the mode of delivery [37,38]. The main strength of the present study is exclusion of women with prior UI during pregnancy, which en abled us to perform more precise analysis of the effect of lateral episiotomy on the occurrence of postpartum UI. It has been previously reported that the prevalence of UI and SUI 6 months after delivery was similar re gardless of the performance of mediolateral and lateral episiotomy [37]. Although we used different time points for investigations, our results are comparable to previ ously mentioned ones, with the conclusion that lateral episiotomy has no impact on the occurrence of UI and SUI in the first postpartum year. Compared to other re sults in one major observational study in primiparas, after 3, 6 and 9 months, UI rates were approximately 16, 24, and 20%, respectively, which is to a certain extent a similar trend. 
At the same time intervals, the propor tions of SUI were about 10, 18, and 12%, respectively [39,40]. To our knowledge, this is the largest study carried out in Croatia dealing with the effect of episiotomy and vaginal birth itself on the occurrence of UI in primipa rous women in the first year after vaginal delivery. Al though it is well known that vaginal delivery represents a significant risk factor for the emergence of UI, lateral episiotomy has not shown any protective effect on dif ferent types of UI. A great effort was made to exclude any possible external factors that could affect the re sults of the comparison between the groups. It is important to investigate pelvic floor dysfunc tion in the primiparous population as it eliminates the impact of a second birth that may exacerbate existing intrapartum injury. The first childbirth appears to have the greatest and most significant impact on the devel opment of pelvic floor dysfunction [9]. Given that the use of lateral episiotomy in the study population was restrictive, the results of this study are even better and another reason to promote the use of restrictive lateral episiotomy. Despite these strengths, our results have some lim itations deriving from the retrospective nature of the study and the small sample size. The short time (one year) of followup seems to be a limitation of our study. We recommend that postpartum pelvic floor dys function could be medically evaluated and treated one year after childbirth. Conclusions We concluded that lateral episiotomy in restrictive settings: -represents a very good incision of the perineum with acceptable comorbidities respecting the pelvic floor functions, -does not cause different changes of the pelvic floor in terms of emergence of UI and SUI than vaginal de livery with spontaneous minor perineal lacerations during the first year after delivery, -has a neutral effect on the onset of UI in primiparas during the first year after delivery, -is a safe obstetric operation when indicated.
Predictors for treatment outcomes after corneal crosslinking for keratoconus: a validation study Previous research suggested that baseline corrected distance visual acuity (CDVA) and maximum keratometry (Kmax) are the predictors for effectiveness of corneal crosslinking (CXL) for keratoconus. The aim of this study was to validate the previously determined predictors in a new treatment cohort. A prospective cohort of 112 eyes in 90 consecutive patients was used to validate the results of 102 eyes in 79 patients from our previous prospective cohort. All patients were treated using epithelium-off corneal CXL in a tertiary hospital setting. Primary outcomes were changes in CDVA (LogMAR) and Kmax between baseline and 1-year post-treatment. Predictive factors for both outcomes were determined using univariable and multivariable analyses. Lower pretreatment CDVA was found to be the sole independent factor predicting an improvement in CDVA 1 year after CXL (β coefficient: −0.476, P < 0.01). Kmax flattening is more likely to take place in eyes with preoperative central cones (β coefficient: 0.655, P < 0.01). These results are consistent with our initial research and indicate high reproducibility in the new cohort. The previously postulated prediction model for postoperative CDVA showed limited predictive value in the validation cohort (R 2 = 0.15). The clinical implication of these results is that patients with lower pretreatment visual acuity are more likely to benefit from CXL (with respect to visual acuity), and patients with more central cones will benefit more in terms of cone flattening. Furthermore, those results can be used to guide customization of the crosslinking treatment. Introduction Keratoconus is a progressive disease in which protrusion of the cornea causes visual impairment through the formation of irregular astigmatism [1,2]. The typical age of onset for keratoconus is early adulthood, and the disease is likely multifactorial in origin [3]. Genetic factors, environmental factors, positive family history, atopic constitution, contact lens use, and eye-rubbing have all been associated with keratoconus [4][5][6][7][8][9]. To halt disease progression, corneal crosslinking (CXL) has shown promising results in patients with keratoconus [10][11][12]. However, continued disease progression and a further decrease in visual acuity have been reported following CXL [13]. Many factors that might be related to the efficacy of CXL have been studied previously. For example, age is associated with changes in visual acuity, as pediatric patients show better improvement than older patients in terms of corrected distance visual acuity (CDVA) following CXL [14,15]. A Snellen CDVA value worse than 20/40 is also correlated with an improvement in visual acuity following CXL [16,17]. With respect to flattening of maximum keratometry (Kmax) following CXL, higher pretreatment Kmax (C54 D), a more central cone, and central cornea thickness C450 lm have all been reported as predictive factors [16][17][18][19][20][21]. The majority of these associations were established using univariable analyses, although CDVA is known to be influenced by many interrelated factors [22]. Predictors of CXL efficacy have previously been studied by our group in a consecutive treatment cohort; specifically, differences in CDVA and Kmax 1 year after CXL were assessed using a multivariable model [23]. 
Interestingly, baseline CDVA was the only independent factor for predicting change in CDVA 1 year after CXL, and cone eccentricity was the only independent factor associated with change in Kmax following CXL. Moreover, a reliable model for predicting post-CXL changes in CDVA was constructed (R 2 = 0.45). In order to determine the reliability and generalizability of this model, external validation of these findings is essential. Predictors are often not generalizable to patients outside the study population. Our primary purpose was to validate the reproducibility of previously determined predictors in a new treatment cohort. Only after validation such results should be implemented in clinical practice. Our secondary purpose was to validate the previously published model for the prediction of visual outcomes for individual patients following CXL [23]. Dataset and study design Our current cohort included patients who were treated with epithelium-off CXL for progressive keratoconus in our institution from January 1, 2012 to October 31, 2013. Here, we refer to this cohort as the validation cohort. The study design, inclusion and exclusion criteria, data collection, and surgical procedure were adapted from the initial treatment cohort, which included patients who were treated from January 1, 2010 through December 31, 2011 [23]. The inclusion criteria included a prior Kmax progression of C1.0 D within 6-12 months and thinnest corneal pachymetry C400 lm. Patients with corneal scarring or infection, pregnant patients, and lactating patients were excluded. The following primary outcomes were examined: (1) change in CDVA (logMAR CDVA) between baseline and 1-year post-CXL, and (2) change in Kmax between baseline and the 1-year post-CXL. This study was approved by the Ethics Review Board of the University Medical Center Utrecht and was performed in accordance with local laws, the European guidelines of Good Clinical Practice, and the tenets of the Declaration of Helsinki. Data collection Standard measurements were obtained at all follow-up visits and included uncorrected distance visual acuity (UDVA), CDVA, manifest refraction, Scheimpflug corneal tomography (Pentacam HR type 70900, Oculus GmbH, Wetzlar, Germany), and slit lamp evaluation. Parameters were measured prior to treatment and at regular follow-up visits (1, 3, 6, 12, and 18 months of post-treatment). Patient-related factors, including family history, atopic constitution, and smoking history, were collected from the patient charts and supplemented using standardized forms completed by phone or e-mail in case they were not noted in the patient charts. Family history was considered positive if a first-degree or second-degree relative had been diagnosed with keratoconus. Patients with asthma, eczema, hay fever, or anti-allergy medication were marked as positive for atopic constitution. Patients who were current smokers or previous smokers were marked as smokers, and the number of pack-years was noted. Statistical analysis Progression of keratoconus 1 year after CXL treatment was defined as an increase in Kmax C1 D. The paired Student's t test was used to analyze the differences in logMAR CDVA and Kmax between baseline and the 12-month follow-up visit. Five patients missed the 12-month follow-up visit, but they did attend the 6-and 18-month visits; for these patients, simple longitudinal imputation was used to estimate their CDVA and Kmax values at 12 months [26]. 
In this validation cohort, univariable analysis was performed in order to identify factors associated with the primary outcome parameters (i.e., change in CDVA and Kmax). All factors with P ≤ 0.20 from the univariable analysis were entered into a multivariable linear regression analysis to identify independent predictive factors. This method is consistent with the statistical method used to analyze the initial treatment cohort. The analysis was performed using generalized estimating equations, with correction for patients in which both eyes were included in the study. The prediction model, which was postulated based on the initial treatment cohort, was sequentially validated; pretreatment logMAR-transformed visual acuity measurements of the validation cohort were entered in the model. The predicted and observed differences in logMAR CDVA values were compared using linear regression and presented in a calibration plot. Discrimination was summarized using R2 to quantify the model's performance. A new prediction model based on the validation cohort was produced by stepwise, backward removal of the least significant factors derived from the multivariable analysis. To validate the performance of the refined prediction model, both calibration and discrimination were tested. A calibration plot and an additional R2 of the observed and predicted values were obtained. Data were collected and analyzed using SPSS 21.0 (IBM, Armonk, NY).
Dataset characteristics
The validation cohort consisted of 112 eyes from 90 patients who were treated using CXL within the study period. Ten eyes were lost to follow-up due to the patients moving abroad (n = 2) or unknown reasons (n = 8). These patients did not differ from the remaining study sample with respect to their baseline characteristics. Patient-related factors were unknown in five patients. The baseline characteristics of the initial and validation cohorts were similar, and are summarized in Table 1. At the 1-year post-CXL follow-up, progression had halted in 94 of the 102 eyes that were still in the study (92%). The remaining eight eyes had progressed, with a mean increase in Kmax of 3.9 D (range 1.40-9.40 D). Both visual acuity and Kmax had improved significantly 1 year after CXL treatment. On average, logMAR CDVA improved from 0.30 to 0.21 (P < 0.01), and Kmax decreased from 57.2 to 56.2 D (P < 0.01).
Univariable analysis
The outcomes of our univariable analysis of the initial treatment cohort and the current validation cohort are summarized in Table 2. Both age (β coefficient: 0.006, P = 0.04) and pretreatment CDVA (β coefficient: -0.385, P < 0.01) were associated with a change in visual acuity. Of those two factors, only pretreatment CDVA had been identified in the initial cohort. None of the baseline factors was significantly associated with Kmax outcome in the validation cohort. Because pretreatment Kmax and cone eccentricity demonstrated a trend toward association (β coefficient: -0.046, P = 0.15 and β coefficient: 0.356, P = 0.17, respectively), they were entered in the multivariable analysis. In the initial cohort, gender, cone eccentricity, and corneal thickness were associated with Kmax outcome.
Multivariable analysis
Table 3 summarizes the results of the multivariable analyses in both cohorts. In the validation cohort, age (β coefficient: 0.007, P = 0.03) and pretreatment CDVA (β coefficient: -0.476, P < 0.01) were related independently to a change in visual acuity at the 1-year follow-up visit.
Age was not identified as an independent predictor in the initial cohort. With respect to change in Kmax, cone eccentricity was confirmed as an independent predictor of CXL outcome 1 year after treatment (β coefficient: 0.655, P < 0.01).

Multivariable analysis

Visual acuity and cone eccentricity were found to be the sole repeatable and independent factors influencing outcomes of keratoconus patients undergoing CXL, demonstrating that patients with lower pretreatment visual acuity are more likely to benefit from CXL (in terms of visual acuity), and patients with more central cones will benefit more in terms of cone flattening.

Validation of prediction model

The following equation was used in the initial model to predict the change in logMAR CDVA 1 year after CXL [23]: Difference in logMAR CDVA one year after CXL = (−0.518 × baseline logMAR CDVA) + 0.043. This model showed robust predictive value in the initial treatment cohort (R² = 0.45), explaining 45% of the variation in CDVA. Validation of the model, in which the validation cohort data are entered into the existing prediction model, showed a mediocre fit (R² = 0.18), only explaining 18% of the variation. It was not possible to create a better prediction model based on the validation dataset (R² = 0.19). With respect to change in Kmax 1 year after CXL, the model showed limited predictive value in our initial cohort (R² = 0.15) [23]. Fitting a new model to the validation data showed even worse predictive value (R² = 0.02). Although postoperative CDVA was accurately predicted based on pretreatment patient characteristics in the original study, both visual acuity and maximum keratometry were not predictable for individual patients in this validation study.

Discussion

The aim of this study was to validate and test the reproducibility of previously determined predictors of CXL effectiveness in a new treatment cohort. This validation cohort confirmed that pretreatment visual acuity and cone eccentricity are the only two independent factors for predicting change in postoperative CDVA and Kmax, respectively. Repeatability of those results is essential to apply these findings in practice and to guide clinicians in their decision-making process. With respect to cone flattening and visual acuity development, the clinical outcomes following CXL in our cohorts are consistent with previous studies [11,27,28]. This again underscores the ability to compare our results to other populations that were treated using the Dresden protocol. Our initial and validation cohorts are relatively large, and only a limited number of cases were lost to follow-up. Interestingly, the univariable analysis revealed major differences between the initial and validation datasets, demonstrating the variability among outcomes when interfactor correlation is not taken into account. Some baseline measurements are interrelated and therefore could potentially be (incorrectly) identified as predictors when correlated to a true predictor. The predictive factors derived from our multivariable analysis were consistent between the study cohorts, which reflects good reproducibility and stresses the importance of performing a multivariable analysis.
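To make the external-validation step concrete, the short sketch below applies the prediction equation quoted above to a set of baseline values and compares the predictions with observed 1-year changes via R². The coefficients (−0.518 and 0.043) come directly from the equation in the text; the baseline and observed values are hypothetical and for illustration only.

```python
# Applying the initial-cohort prediction model quoted in the text to baseline
# logMAR CDVA values, then computing a calibration R^2 against observed changes.
import numpy as np

def predicted_delta_cdva(baseline_logmar_cdva):
    """Predicted 1-year change in logMAR CDVA after CXL (initial-cohort model)."""
    return -0.518 * np.asarray(baseline_logmar_cdva) + 0.043

baseline = np.array([0.10, 0.30, 0.52, 0.70])      # hypothetical baseline CDVA (logMAR)
observed = np.array([0.02, -0.09, -0.20, -0.35])   # hypothetical observed 1-year changes

pred = predicted_delta_cdva(baseline)
ss_res = ((observed - pred) ** 2).sum()
ss_tot = ((observed - observed.mean()) ** 2).sum()
print("predicted:", pred.round(3))
print("calibration R^2:", round(1 - ss_res / ss_tot, 2))
```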
The clinical implication of these results is that patients with lower pretreatment visual acuity are more likely to benefit from CXL (with respect to visual acuity), and patients with more central cones will benefit more in terms of cone flattening. This is consistent with other studies in which patients with a pretreatment CDVA of 20/40 or worse had significant visual improvement [16,17]. Therefore, it might be advisable to explain to patients with low pretreatment CDVA that improvement might be expected, while on the other hand, this is not likely for patients with pre-existent high CDVA. Our finding that Kmax is more likely to flatten in eyes with a more central cone is in concordance with results from Greenstein et al. [20]. This latter finding might be due to exposure to UV light perpendicular to the center of the cornea during CXL. The peripheral cornea receives light rays that are less potent due to their oblique incidence. The UV light source used in this study is in accordance with this principle. Furthermore, it is known that CXL and aberrations interact, and it could be that aberrations in the vicinity of a peripheral cone alter the angle of incidence of the light rays on the corneal surface even further, causing deflection and thereby resulting in less UV light penetration in the cornea and lesser treatment results [29]. Therefore, a more central cone is likely to be treated more effectively, resulting in more flattening of the cone. Focusing the UV light on the cone instead of the center of the cornea could be considered for treatment customization in patients with more peripheral cones. Our initial study cohort did not identify age as a predictive factor for either treatment outcome. However, in the validation dataset, younger patients benefitted significantly more with respect to visual acuity. Soeters et al. also identified age as a prognostic factor; interestingly, they also found that their younger patients had more centrally located cones [14]. Léoni-Mesplié et al. reported that disease progression in pediatric patients is more aggressive than in adults, suggesting that CXL treatment is more effective at preventing deterioration in pediatric patients [30]. Consistent with our initial study, a variety of factors associated with keratoconus were not identified as independent contributors to the effectiveness of CXL treatment. These factors include gender, family history, atopic constitution, smoking history, spherical equivalent, logMAR UDVA, Kmax, and central corneal thickness; none of these factors were predictive in terms of changes in visual acuity or Kmax 1 year after CXL. However, using univariable analyses, other studies found that pretreatment corneal thickness ≤450 μm and Kmax ≥54 D were predictors of a decrease in Kmax [16,17,19]. Although we also found a relationship between corneal thickness and Kmax flattening in our initial univariable analysis, this relationship did not hold when examined in a multivariable analysis.
This finding was not replicated in the univariable analysis of our validation cohort. Previously, Greenstein et al. used a multivariable approach and found that patients with higher keratometry readings showed more improvement in response to CXL [16]. However, this finding is not supported by our data. In an additional analysis, a dichotomous cut-off of 54 D was not associated with either outcome parameter (data not shown). One explanation for this difference in findings could be the difference in sample size and the fact that Greenstein et al. examined a heterogeneous study cohort that included both patients with keratoconus and patients with post-LASIK ectasia. Creating a model for the prediction of individual Kmax after CXL was ultimately cumbersome and yielded little additional clinical value. Moreover, our initial reliable model for the prediction of CDVA could not be confirmed in this validation study, which is an additional argument why validation studies are extremely important. The inability to validate this prediction model can be due either to overfitting of the original model or to large individual variation in the reaction to CXL treatment. Both options lead to the conclusion that the formerly proposed prediction model for individual visual outcomes after CXL treatment should not be applied for patient counseling. In conclusion, the clinical implication of these results is that patients with lower pretreatment visual acuity are more likely to benefit from CXL (with respect to visual acuity), and patients with more central cones will benefit more in terms of cone flattening. Repeatability of those predictors supports applicability for the decision-making process of clinicians. Furthermore, those results can be used to guide customization of the crosslinking treatment.
Spectroscopy of light baryons with strangeness -1, -2, and -3 The present article contains the descriptive study of light strange baryons Λ, Σ, Ξ, and Ω. The Regge phenomenology with linear Regge trajectories has been employed and the relations between Regge slopes, intercepts, and baryon masses have been extracted. With the aid of these relations, ground state masses are obtained for Ξ and Ω baryons. Regge parameters have also been estimated to calculate the excited state masses for Λ, Σ, Ξ, and Ω baryons in both the ( J, M 2 ) and ( n, M 2 ) planes. The obtained results are compared with available experimental data and various theoretical approaches. We predict the spin-parity of recently observed light baryons and our predictions may be useful in future experimental searches of these baryons and their J P assignments. I. INTRODUCTION Hadron spectroscopy is a tool for understanding the dynamics of quark interactions in composite systems like mesons, baryons, and exotics. The spectroscopic study of baryon with a strange quark(s) will have a great interest because the strange quark would be slightly heavier than up and down quarks and considerably lighter than charm and bottom quarks. In this paper we present a description of hyperons which are Λ, Σ baryons with strangeness -1, Ξ and Ω baryons with strangeness -2 and -3, respectively. Many experimental facilities around the world have been seeking to study the strange baryons. Recently the ALICE Collaboration observed an attractive interaction between protons and Ξ − baryon [1]. In 2019 the BESIII Collaboration observed Ξ(1530) in a baryonantibaryon pair from charmonium decay [2]. At Jefferson Lab, the photoproduction of Ξ has been observed using the CLAS detector [3]. Furthermore, JLab has proposed to investigate the strange hyperon spectroscopy using a secondary KL beam and a GlueX experiment and the results are expected to provide more knowledge of strange hyperons Λ, Σ, and Ξ [4]. Extensive research have also been conducted by the BABAR Collaboration [5]. Recently in 2018, the Belle Collaboration observed new excited Ω − state as Ω(2012) through e − e + annihilations decaying into Ξ 0 K − and Ξ −K 0 channels [6]. By studying the properties these, so-called, strange baryons or hyperons, hadron physicists will be able to make progress in answering various critical questions, namely; what is the intrinsic structure of these baryons? what are the important degrees of freedom in a baryon? and, are there exotic forms of baryon-like matter?. Eventually, addressing these questions helps in shedding light on the deeper fundamental question, namely "how to understand the underlying confinement mechanism?". The anticipated multiplet structure of the baryons must be established experimentally to understand the symmetries and dynamics of the strong interaction, and the details of their excitation spectrum are vital for that. However, experimental information is presently very limited, in particular for Ξ and Ω baryons with strangeness -2 and -3, respectively. For a large part, the lack of experimental knowledge can be understood by the fact that with the widely usage of electromagnetic probes the production cross section of strange baryons is very limited making it difficult to generate sufficient statistics. The Ξ baryons are produced only at the final state of a decay process and have small cross-sections (typically a few µb) [7]. Somewhat this comment is also valid for Ω − baryons. 
Also, Ω⁻ baryons have zero isospin, which means that Ω*⁻ → Ω⁻π⁰ decays are highly suppressed, and this restricts the possible decays of excited states. Therefore, ΞK is the expected decay mode for the low-lying Ω⁻ states [6]. These decays are similar to the Ωc⁰ → Ξc⁺K⁻ decays discovered by the LHCb Collaboration [8] and confirmed by the Belle Collaboration [9] shortly after. PANDA (antiProton ANnihilations at DArmstadt), the upcoming experimental facility at FAIR (Facility for Antiproton and Ion Research), will have the task of establishing the whole spectrum of hyperons through antiproton-proton annihilation [10-14,16]. Most recently, a member of the PANDA group studied the feasibility of the reaction p̄p → Ξ̄⁺Ξ*⁻ and its charge conjugate channel, where Ξ* denotes the following intermediate resonance states: Ξ(1530), Ξ(1690), and Ξ(1820) [15]. A major goal of the Ξ spectroscopy program at PANDA is the determination of the spin and parity quantum numbers of the Ξ states [17]. Unlike Ξ and Ω, the Λ and Σ baryons have quite a number of experimentally established states. Table I lists the excited Ξ and Ω baryons reported in the latest version of the Particle Data Group (PDG) [18]; the status of each resonance is given as poor (*), fair (**), very likely (***), and certain (****). Spin and parity quantum numbers of newly observed Ξ and Ω states are not yet confirmed and the PDG needs more confirmations. It is crucial to assign the spin-parity (J^P) of hadrons, which facilitates the determination of properties such as decay width, branching fraction, isospin mass splitting, polarization amplitude, etc. Here, we study the light baryons, which are made of the quarks u, d, and s only. The SU(3) flavour symmetry can be only approximate because the mass of the strange quark is about 0.1 GeV greater than the masses of the up and down quarks, although this mass difference is relatively small compared to the typical QCD binding energy, which is of order 1 GeV. The eightfold way and the standard SU(3) Gell-Mann-Okubo (GMO) formula [19] have played an important role in particle physics. However, the direct generalization of the GMO formula to the charmed and bottom hadrons cannot agree well with experimental data due to higher-order breaking effects. Hence, for light baryons the breaking of SU(3) symmetry is minimal. Several phenomenological and theoretical models have been employed to study the properties of light baryons. The authors of Ref. [20] estimated the mass spectra of strange baryons using the relativistic quark model based on the quasipotential approach, in which baryons are treated as relativistic quark-diquark bound systems. In the recent studies [21,22], the authors employed the hypercentral Constituent Quark Model with a linear confining potential along with a first-order correction term to obtain the mass spectra of light strange baryons with strangeness -1, -2, and -3. Regge trajectories are explored for the linearity of the calculated masses in the (J, M²) and (n, M²) planes. Ref. [23] presents the complete excited Λ, Σ, Ξ, and Ω spectrum in a relativistic quark model based on the three-quark Bethe-Salpeter equation with instantaneous two- and three-body forces. The investigation of instanton-induced effects in the baryon mass spectrum plays a central role in that work. The authors of Ref.
[24] present a systematic analysis of spectra and transition rates of strange baryons within the framework of a collective string-like qqq model in which the orbital excitations are treated as rotations and vibrations of the strings. In the present article, we give a systematic study of the strange baryons Λ, Σ, Ξ, and Ω. Here we employ Regge phenomenology with the assumption of linear Regge trajectories. We find the relations between the intercepts, slope ratios, and baryon masses in both the (J, M²) and (n, M²) planes. With the help of these relations, we determine the ground state masses of the Ξ and Ω baryons. We extract the Regge parameters (a(0) and α′) to determine the mass spectra of all light strange baryons in both the (J, M²) and (n, M²) planes. It is evident that the ground and low-lying resonance states are within a reasonable range for almost all of the models and approaches. However, the higher excited states exhibit huge variations in their mass predictions, which motivated us to study the experimental determination of the spin and parity quantum numbers of these light strange baryons. The remainder of this paper is organized as follows. After briefing various experimental and theoretical approaches, in Sec. II we describe Regge theory and extract the mass relations. By using these relations we obtain the masses for the Λ, Σ, Ξ, and Ω baryons in the (J, M²) plane for natural and unnatural parity states. Further, we obtain the Regge parameters for each Regge line and calculate the radial and orbital excited states of these baryons in the (n, M²) plane. We extend this model and try to determine the remaining states other than natural and unnatural parity states in the (J, M²) plane. The detailed description of our results is given in Sec. III. Finally, we conclude our study in Sec. IV.

II. THEORETICAL FRAMEWORK

For the study of hadron spectroscopy, the linear Regge trajectory is one of the most effective and widely used phenomenological approaches. The plots of Regge trajectories of hadrons in the (J, M²) plane are usually called Chew-Frautschi plots [25]. They used the theory to study the strong quark-gluon interaction and observed that the excited states of experimentally missing mesons and baryons exist on linear trajectories in the (J, M²) plane. The trajectory of a particular pole is characterized by a set of internal quantum numbers, and the hadrons lying on a particular Regge line have the same internal quantum numbers. The most general form of a linear Regge trajectory can be expressed as [26-29], where a(0) and α′ are, respectively, the intercept and slope of the Regge trajectory. Regge intercepts and Regge slopes for different flavors of a baryon multiplet can be related by the following relations [30-34], where i, j, q represent the quark flavors. Using Eqs. (1) and (2) we obtain, We get two pairs of solutions after combining the relations (3) and (4), which are expressed as, and, These are the significant relationships that we have obtained between slope ratios and baryon masses. Now the above obtained Eq. (5) can also be expressed as, here k can be any quark flavor. Thus we have, In this section, using the relations we have extracted above, we determine the ground-state masses of the Ξ and Ω baryons, as well as the orbitally excited state masses of all the light strange baryons Λ, Σ, Ξ, and Ω for natural (P = (−1)^(J−1/2)) and unnatural (P = (−1)^(J+1/2)) parities in the (J, M²) plane.
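The displayed equations in this passage did not survive text extraction. As a point of reference only, and as an assumption based on the intercept/slope wording above, the general linear trajectory the text labels Eq. (1) is normally written as follows; the flavor relations (Eqs. (2)-(8)) are not reconstructed here.

```latex
% Presumed form of Eq. (1): a linear Regge trajectory in the (J, M^2) plane,
% with intercept a(0) and slope \alpha'. This is the standard form; the
% paper's own displayed equation was lost in extraction.
\begin{equation}
  J = a(0) + \alpha'\, M^{2}
\end{equation}
```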
The quark combination of Ξ 0 baryon is uss, hence we put i = d, j = s, q = u, and k = d in Eq. (8). We obtain the mass expression for Ξ 0 as a function of squared masses of neutron (n) and Λ 0 baryons, which is expressed as substituting the masses of n and Λ 0 into Eq. (9) we get the ground-state mass of Ξ 0 as 1293 MeV for J P = 1 2 + . Similarly we can get 1537 MeV for J P = 3 2 + . In the same manner we can calculate for Ξ − baryon also (see Table V). Similarly to evaluate the ground state (J P = 3 2 + ) mass of Ω − baryon, composed of three strange quarks (sss), we put i = u, j = s, q = s, and k = u in Eq. (8) we get, again substituting the masses of Σ * + [18] and Ξ * 0 (calculated above) into the above equation, we obtain the ground-state mass of Ω − baryon as 1691 MeV. After evaluating the ground-state masses, the orbitally excited states of light strange baryons are calculated lying on 1 2 + and 3 2 + trajectories by obtaining the Regge slopes α ′ . For instance using Eq. (5), With the help of values α ′ extracted for light-strange baryons, from Eq. In this section the masses for radial and orbital excited states from are estimated in the (n, M 2 ) plane. The general equation for linear Regge trajectories in the (n, M 2 ) plane can be expressed as, where n = 1, 2, 3.... is the radial principal quantum number, β 0 , and β are the Regge intercept and slope of the trajectories. These parameters are extracted for each Regge line for Λ, Σ, Ξ, and Ω baryons. Since, the baryon multiplets lying on the single Regge line have the same Regge slope (β) and Regge intercept (β 0 ). Using relation (13) using the above relations, we get β 0(S) = 0.11315. With the help of β (S) and β 0(S) , we calculate the masses of the excited Ξ 0 baryon for n = 3, 4, 5... Similarly, we can express these relations for P and D-wave as, Table VII shows the values of β 0 and β for the Regge trajectories of S, P, and D states for spin S = 1/2 and 3/2. In the same manner, we estimated the radial and orbital excited states of other light strange baryons for natural and unnatural parity states. The calculated results are summarized in tables VIII-XI. C. Other states in the (J, M 2 ) plane So far, we have used conventional formulae to compute the masses of light strange baryons for natural and unnatural parity states. After the successful implementation of this model, now in this section, we try to obtain the remaining other states in the (J, M 2 ) plane by using the same method. Since we have calculated 1 2 P 3 2 and 1 4 P 5 2 states earlier, now here we firstly calculate the other three 1P states i.e., 1 2 P 1 2 , 1 4 P 1 2 , and 1 4 P 3 2 by using the Eq. (8). For Ξ baryon we put i = d, j = s, q = u, and k = d in Eq. (8) we have, After putting the masses in above equation we get masses for 1 2 P 1 2 , 1 4 P 1 2 , and 1 4 P 3 2 states. Similarly we can determine the other 1P states masses for Ω baryons. Once we have calculated the 1P states, we extract the Regge slopes for all the light strange baryons in the (J, M 2 ) plane as we have done in previous section. Again using Eq. (1) we have, Excited state masses can be obtained by using the above relation. Tables XII -XV shows our calculated results for the remaining other states for light strange baryons. We compared our estimated masses with experimental data and other theoretical studies. III. RESULTS AND DISCUSSION In the present work, an attempt has been made to obtain the mass spectra of hyperons under the methodology of Regge Phenomenology. 
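To make the trajectory bookkeeping above concrete, the following minimal sketch shows how excited-state masses follow from a linear trajectory once a slope and a reference state are fixed: each unit step in J (or in n) raises M² by 1/α′ (or 1/β). The slope and reference mass used below are hypothetical round numbers, not the fitted parameters of this work.

```python
# Illustrative sketch of masses along a linear Regge trajectory (hypothetical
# numbers, not the paper's fits).
# (J, M^2) plane:  J = a(0) + alpha' * M^2  =>  M^2(J0 + k) = M^2(J0) + k / alpha'
# (n, M^2) plane:  n = beta_0 + beta * M^2  =>  M^2(n0 + k) = M^2(n0) + k / beta

def trajectory_masses(m0_gev, j0, slope_per_gev2, steps=4):
    """Masses (GeV) along a linear trajectory, one unit of J (or n) per step."""
    masses = []
    m2 = m0_gev ** 2
    for k in range(steps + 1):
        masses.append((j0 + k, (m2 + k / slope_per_gev2) ** 0.5))
    return masses

# Hypothetical example: a Lambda-like trajectory starting at 1.116 GeV with an
# assumed slope of ~0.95 GeV^-2.
for j, m in trajectory_masses(1.116, 0.5, 0.95):
    print(f"J = {j:>4}: M = {m:.3f} GeV")
```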
Regge slopes were calculated in the (J, M 2 ) plane. With the aid of these Regge slopes, the masses of the orbitally excited baryons were estimated for both natural and unnatural parity states. After that, the Regge slopes and intercepts were extracted for each Regge line in the (n, M 2 ) plane, and with the help of these parameters mass spectra of light baryons were obtained successfully. Tables II-V and VI-IX summarizes the calculated masses in the (J, M 2 ) and (n, M 2 ) planes respectively, for natural and unnatural parity states along with the other theoretical outcomes and the experimental observations where available. Also, our estimated masses for remaining other states in the (J, M 2 ) plane are shown in tables X-XIII. 1. For Λ baryon the four star and three star status states Λ(1520), Λ(1820), Λ(1890), Λ(2080), Λ(2100), and Λ(2350) in the PDG [18] are in good agreement with our calculated results with a mass difference of 33-80 MeV. We confirmed the J P values of these states in the present work (see tables III and X). We compared our results with the predictions of other theoretical models also and our estimated masses are consistent with the results of Refs. [20,23,35] and Σ(2620) are fairly close to our estimated masses 2475 MeV and 2624 MeV respectively. So we can say that these two states may belong to 1G state having J P = 9 2 + and 11 2 + (see tables IV and XIII). The calculated masses for low lying resonance fairly matches with the results of other theoretical predictions [20,23,35], but a wide range of mass difference is shown for higher excited states. are found to be very close to the experimentally well established masses with a few difference of MeV and also matches very well with the predictions of other theoretical models (see table V). Very few states are confirmed with spin-parity in the Ξ family. The Ξ(1820) is the only negative parity state assigned with J P = 3 2 − in PDG having mass 1823 MeV which is some what close to our predicted mass 1783 having mass difference of 85 MeV and very close 1825 MeV with a slight difference of 2 MeV only for 1 2 P 3 2 and 1 4 P 3 2 respectively. The Ξ(2030) is assigned with angular momentum having value 5 2 , parity of this state is not confirmed yet. Here we predicted this state with positive parity having mass 2090 MeV belongs to 1D state having J P = 5 2 + . The J P value of three stared Ξ(1950) state is still not confirmed in PDG. Our predicted mass 1934 MeV for 1 4 P 5 2 state is near to 1950 Mev with a mass difference of 16 MeV. So, we predicted the spin-parity of this state to be 5 2 − for S = 3/2.
Entomophagy: A Narrative Review on Nutritional Value, Safety, Cultural Acceptance and A Focus on the Role of Food Neophobia in Italy

In recent years, the consumption of insects, or entomophagy, has attracted increasing interest amongst scientists and ecologists as a potential source of animal protein. Eating insects is also interesting in terms of low greenhouse gas emissions and low land use. In contrast to tropical countries, where most of the 2000 edible insect species are traditionally consumed, the concept of eating insects is still new to Western culture and diet. Culture and eating habits exert a great influence on what is considered edible in the Mediterranean area, especially in Italy, where the preservation of culinary traditions is a predominant factor affecting dietary behaviour. The purpose of this narrative paper is to provide an overview of the main topics related to entomophagy. The introduction presents some information about the nutrient content and safety aspects, the second part summarises the cultural acceptance of insects in the world, while the role of food neophobia on the intention to consume insects in Italy is focused on in part three. The discussion presents important viewpoints from previously published studies, and based on these perspectives it can be concluded that the Italian diet is still clearly influenced by local tradition. In conclusion, in order to introduce insects into the Italian diet, psychological motivation has to be enhanced.

Introduction

Edible insects have been suggested as a source of proteins, amino acids, essential fatty acids, fibre and micronutrients [1,2]. Insects are environmentally friendly, as they can recycle waste, require little food and water for their growth, and have a rapid growth rate [3,4]. For all these reasons, in recent years, entomophagy has reached global attention and currently the potential use of insects as a new source of food for humans appears extremely interesting [5]. It has been suggested that insects could be a promising alternative source of animal protein with a reduced environmental impact [6,7], however, further research is needed to verify any possible risk to human health. Legislation on production, transformation and commercialization aspects has been previously reported by other authors in some review articles [5,8,9]. In Italy, an informative note, published on 8 January 2018 by the Italian Health Ministry, clarified that insects belong to a group of novel foods, which have not been authorized yet [10]. The absence of Italian legislative authorization and the precautionary principle asserted in the Official Journal of the European Union (EU Regulation 2015/2283) are not the only

Table 1. Nutrient composition of some edible insects (per 100 g edible portion on fresh weight), by order: Coleoptera, Lepidoptera, Hymenoptera, Orthoptera, Hemiptera, Isoptera.

The protein content of insects ranges from 7% to 68% (Table 1) and, in the current literature, huge declines in the edible insects' protein content during processing have been identified [20,21]. Non-protein nitrogen (NPN) in insects (chitin, nucleic acids, phospholipids, as well as ammonia in the intestinal tract) could lead to an overestimation of the protein content [22]. For this reason, instead of the conversion factor (Kp) of 6.25 generally used for proteins, a Kp of 4.76 has been suggested for whole larvae (from Tenebrio molitor, Alphitobius diaperinus and Hermetia illucens) and 5.60 for protein extracts derived from the larvae of the three insects studied [22,23].
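As a quick numeric illustration of why the choice of conversion factor matters, the sketch below applies the two Kp values quoted above to a hypothetical nitrogen measurement; the nitrogen figure is invented for the example only.

```python
# Crude-protein estimate = total nitrogen x conversion factor (Kp).
# The Kp values 6.25 (generic) and 4.76 (suggested for whole larvae) are from
# the text; the nitrogen content below is a hypothetical measurement.
nitrogen_g_per_100g = 8.0  # hypothetical: 8 g N per 100 g of whole larvae

protein_generic = nitrogen_g_per_100g * 6.25  # 50.0 g/100 g
protein_insect  = nitrogen_g_per_100g * 4.76  # 38.1 g/100 g

overestimate = protein_generic - protein_insect
print(f"Kp 6.25: {protein_generic:.1f} g/100 g")
print(f"Kp 4.76: {protein_insect:.1f} g/100 g")
print(f"Overestimation with the generic factor: {overestimate:.1f} g/100 g")
```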
Moreover, although some studies on rats have shown that Acheta domesticus cricket could be a good source of protein compared to vegetal sources (e.g., soy) [9,14], proteins from most insect species have limited amounts of tryptophan and lysine and their digestibility has been estimated between 77% and 98% [24,25]. Table 1 also reports the less saturated fatty acids, more monounsaturated and more polyunsaturated fats compared to meat total fat content present in edible insects, ranging from 1 to 57 g/100 g. Ramos-Elorduy et al. (1997) [25] reported that larvae and some adult insects with a soft body, such as termites, have the highest levels of fat, whilst insects with a hard exoskeleton, such as crickets and grasshoppers, contain smaller amounts. In terms of nutritional quality, the fatty acid composition is generally comparable to that found in poultry and fish, but insects probably contain less saturated fatty acids, more monounsaturated and more polyunsaturated fats compared to meat [1,2]. The fibre content of insects ranges from 1 to 15 g/100 g (Table 1) and is mainly sourced from the exoskeleton of chitin [26]. Although chitin could have positive effects on the immune system [5] and cholesterol levels [27], it has been suggested that chitin removal could improve the digestibility of insect proteins [22]. Micronutrient composition is highly variable and depends on the species of insect. A good source of iron is Mopane worm, with a content that reaches 508 mg/100 g [18]. In many species, the iron content is equal or greater than the iron content in beef [28]. For example, grasshoppers contain 8-20 mg/100 g of iron, whilst the iron content of beef is only 6 mg/100 g [13]. However, since iron absorption in humans is particularly complex and the iron compounds of insects are very different from those found in vertebrates [29], an assessment of the bioavailability, by means of human studies, is needed. Insect larvae also contain a great amount of zinc (98 mg/100 g) [18]. Recent literature reports that the domestic cricket (Acheta domesticus) contains 29.7 mg/100 g of zinc, whilst the domestic fly (Musca domestica) contains 85.8 mg/100 g [21]; values much higher than zinc levels in meat (an average of 12.5 mg/100 g in beef). Edible insects are also a significant source of vitamins, especially water-soluble vitamins such as B-vitamins. The Yellow Mealworm (or Flour Mealworm, Tenebrio molitor) is rich in vitamin B2, B6, B9 and B12, although the latter seems to be present only in Tenebrio molitor and Acheta domesticus [1]. Insects are not good sources of vitamin A, whereas commercially reared insects contain high levels of carotenoids [13]. Safety The use of insects as food has many potential risks [30]. Although several studies have confirmed that levels of oxalate, phytic acid, phenol and tannins in edible insect species were below the toxicity levels for human consumption [31,32], a recent study suggested the possible bioaccumulation of methylmercury (MeHg) in dragonflies [33]. Furthermore, the concentration of exogenous substances (pesticides, lipophilic pollutants, drug residues and other bio-accumulative substances) depends on the metabolic characteristics of the insect and on the rearing methods [9]. Insect production may require pharmacological treatments to counteract possible infections and antibiotics, fungicides and anti-protozoan drugs can be used as possible treatments. 
The Hazard Analysis Critical Control Points (HACCP) and pre-requisites program (PRP) for insect production, storage, transport and labelling have been recently suggested [34]. The potential of insects to act as carriers of pathogens such as protozoans (Entamoeba histolytica, Giardia lamblia, Toxoplasma spp. and Sarcocystis spp.) [35] and trematoda (Dicrocoelium dendriticum) represents a further risk [36]. Viruses, such as arboviruses, can replicate in insects, in particular in flies and ticks, and can also infect humans [37]. Arboviruses cause diseases such as Dengue, Chikungunya, West Nile Disease, haemorrhagic fever, and Rift Valley fever [37]. However, there is no evidence that these viruses are present in edible species [38]. Moreover, viruses are vulnerable to food processing [38]. Arthropods can induce allergic reactions in humans, due to the presence of tropomyosin (contained also in shellfish and house dust mites), arginine kinase, glyceraldehyde 3-phosphate dehydrogenase, hemocyanine and hexamerin 1B [9,39,40]. Possible cross-reactions to crickets in individuals with a known allergy to crustaceans have been reported [39]. The role of arthropods (such as the Musca domestica and Alphitobius diaperinus) as vectors of Salmonella and Campylobacter is widely demonstrated [22]. Casu Marzu, a Sardinian cheese containing living larvae of the fly Piophila casei, is one of the most disputed cheeses in Europe with regard to safety, and Article 14 of Regulation (EC) No 178/2002 states that food shall not be placed on the market if deemed unsafe. The results of national assessments conducted by authorities in Belgium, the Netherlands and France [41][42][43] have shown a high presence of aerobic and anaerobic bacteria in the Yellow Mealworm (Tenebrio molitor), locusts (Locusta migratoria) and in the larvae of the Giant Mealworm Beetle (Zophobas atratus). The process of roasting did not result in the total elimination of Enterobacteriaceae [44], whereas boiling at 100 °C for 8 minutes reduced the total aerobic bacterial count and the amount of Enterobacteriaceae to < 10 cfu/g [42]. It is also possible to reduce the total aerobic bacterial load and the amount of Enterobacteriaceae present in Mealworm larvae (Tenebrio molitor) and house crickets (Acheta domesticus) by drying the insects in an oven for 11 minutes at 90 °C [45]. The combination of high hydrostatic pressures (600 MPa) and high temperatures (90 °C) also reduced the bacterial count [1,46]. EFSA (2015) examined the potential risk related to the production and consumption of insects as food [38]. The microbial risk of edible species was found to be comparable to that of other animal protein sources [38]; however, insects are commonly considered harmful by many consumers, and the pursuit of pleasure, the preservation of traditions and the use of local and regional food are all known to affect food choices [47].

Entomophagy Versus Disgust in the World

In many countries the consumption of insects is part of the culture and tradition and, according to the FAO, insects are a common food source [13]. Insects are currently consumed as part of the daily diet in many developing and non-developing countries, including Africa, Asia, Latin America and Oceania (Figure 1).
Insects consumed in developing countries are currently collected in the wild, so their stage of development (larval or adult) and their availability are strictly dependent on seasonality [51]. In Africa, insects can be found throughout the continent and, particularly during the rainy season, the availability of caterpillars may vary even within the same country according to climatic conditions [51]. In Asia, the palm weevil (Rhynchophorus ferrugineus) of the sago palm (Metroxylon sagu) is popular throughout the continent and considered a delicacy in many regions [51]. Edible insects are used also in the Lao People's Democratic Republic, Myanmar, Thailand and Vietnam [22,52].
Furthermore, over 50 species of insects are consumed in South Asia, 39 species in Papua New Guinea and the Pacific Islands, and between 150-200 species in Southeast Asia [51]. In the last fifteen years, Thailand has produced an average of 7500 tons of edible insects each year, including crickets, red palm weevils and bamboo caterpillars [53]. In Latin America, amongst the indigenous populations of Mexico and Brazil, there is a deep knowledge of the different species of insects that are traditionally part of their diet (Figure 1) [22]. Examples of traditional Mexican dishes are escamoles (the eggs of Liometopum apiculatum Mayr), larvae of Lepidoptera and Hemiptera [22,54], chapulines (Sphenarium purpurascens) and chicatanas (Atta mexicana) [55]. Furthermore, entomophagy was extremely common amongst Australian Aborigines in the last 200 years, but the consumption of insects has decreased significantly due to the increasing adoption of European diets [22]. In fact, associating insects and diets is often stereotyped as a hallmark of an underdeveloped country or society, however, even in Europe, some types of insects are consumed. Parts or products of insects are eaten as raw snack food in some areas of the Friuli region (North Eastern Italy) such as the ingluvies of adult Lepidoptera (Zygaenidae Zygaena spp. and Ctenuchidae Syntomis spp.) [56]. Casu Marzu a cheese that contains live insect larvae, has a regional identity and is obtained using the milk of goat or sheep in Sardinia. Other examples of these cheeses can be found in other Italian regions including Friuli (Saltarello cheese) and Abruzzo (Marcetto cheese), and many others, such as Gorgonzola delle Grotte and Formaggio di Fossa, also exist. Similar cheeses are found in Corsica (France) and Croatia, as well as the German Milbenkäse or the French Mimolette [56]. A large number of surveys, focusing on European consumers, have shown that the propensity to consume insects as a meat substitute is generally low [50,57,58]. Moreover, participants who have previously eaten insects are more likely to eat them again [59,60]. In Holland, typical Dutch dishes such as burgers, nuggets and pittige punten (a spicy triangular product, similar in appearance to a hash brown or potato croquette), which are usually made from meat, were produced using vegetables and the ground larvae of the beetle Alphitobius diaperinus [47]. Consumers stated that the larvae were not visible, the taste of insects was not particularly identifiable and that the products were cooked similarly to conventional vegetarian foods. In this study, repeated consumption of insect-based products was relatively low (58% testing them only once; 18% more than once but not regularly; 24% semi-regular consumption; around 3% once every two weeks, weekly or twice a week) [47]. Initial reasons of consumption were dictated by a general interest or curiosity, by the feeling that insects are more environmentally friendly or sustainable than conventional meat-based products and by the belief that they are an alternative source of protein for a healthy diet. The participants' general dietary guidelines had some influence with the preference for organic food being commonly reported amongst participants (mentioned by 42% of the group) [47]. Pambo et al. (2018) examined how consumers assess the appropriateness of sensory attributes of edible insects. The type of information that consumers received about the production process influenced the sensory assessment after tasting [61]. 
Another factor that could improve the acceptance of insects is the use of evocative names for insect recipes, which would attract attention and generate a good consumer expectation [62]. In addition, Castro and Chambers [63] suggested that insect-based product should not contain visible insect pieces, which trigger negative associations. In particular, the addition of cricket flour to traditionally consumed foods, could be an attractive option to introduce insects into diets without altering eating habits [22]. On the other hand, informing consumers about the reduced environmental impact deriving from the consumption of insects, can represent another strategy to increase insect consumption [20,60,64]. Biological, psychological and socio-cultural factors are known to influence food choices. Culture, in particular, influences what is considered edible, with many people in Western countries rejecting the idea of entomophagy mainly for cultural reasons. In a multi cross-cultural international survey including 630 individuals per country and representing 13 different countries (USA, Mexico, Peru, Brazil, UK, Spain, Russia, India, China, Thailand, Japan, South Africa, and Australia), authors identified "disgust" and "acceptor" countries ( Figure 1) [48,63]. Compared to the UK [49], the likelihood of eating insect-based protein sources was more than twice in the Netherlands and Finland and 1.5 times in Spain. Willingness to try insect-products was higher in Mexico (71%), Peru (58%), Thailand (56%), Brazil (45%) and China (44%), but surprisingly lower in Japan (21%), where insects are part of the traditional diet and wasps are considered a highly sought-after food. Other countries had intermediate values (32%-36%) [48]. Although there are still several species that are eaten and considered a delicacy in Japan [65], a low willingness to try new-foods was found, probably due to the Japanese traditional diet [48]. The Japanese and the Mediterranean diet are considered healthy eating models [66][67][68], and this probably increases the attachment to eating traditional. The Mediterranean diet is included in the UNESCO Representative List of the Intangible Culture Heritage of Humanity [2] as it is a cultural product in an anthropological sense, and a lifestyle based on the conviviality of meals. For instance, food may enhance family unity when members consume it together [69]. Whilst the traditional food "Italian zampone" has been avoided in the Food Disgust Picture Scale (FDPS) [70], food "neophobia" (the fear of new or unfamiliar foods) [71] is associated with distaste [72] and could be higher in populations with strong traditional eating habits. The Role of Food Neophobia on the Intention to Consume Insects in Italy Sixteen reviewed studies (from 19 publications), involving Italian individuals (Table 2), identified: food neophobia scale (FNS), willingness to try (WTT) or intention (ITE) or willingness to eat (WTE) or willingness to consume (WTC), willingness to pay (WTP), willingness to buy (WTB) and entomophagy attitude questionnaire (EAQ). Several definitions have been proposed to describe the intention to consume insect as food and in the light of these different definitions, it should not surprising that consumer intention for insects as food has been described in diverse way (WTT, WTE, WTC and ITE). 
People with Low level of food neophobia were significantly more willing to accept insects as feed, as food and served in an ethnic restaurant than people with a medium level of food neophobia who, in turn, showed a significantly higher readiness than neophobic consumers. Younger people more readily accepted insects. University students and staff (e.g., High level of education) more readily accepted insects. Environmental and nutritional benefits marginally affected the acceptability of insect-based foods. 47% believed that entomophagy might become a culinary trend in Italy, whilst the other half states that it would not be "successful", "appropriate" or "exciting. 67.5% indicated they would taste edible insects if they had the opportunity. "Bug-tasting session": 94% of the students agreed to eat the insect-based food. [77] North Italy (Parma) university students (n.231, 62% female, mean age 23.6 ± 3.8 years. FNS, WTT. Tasting two sweet jellies: one with a visible cricket (unprocessed) and one with a processed cricket. WTT is affected by the FNS. Males were more WTT new foods. WTT-unprocessed < WTT-processed insect-based product. 75% tasted both products 19% tasted only the insect-based jelly 6% did not try either product [11,80,81] Central Italy (Pisa) university students (n.165) Informative seminar entitled "Insects as Food and Feed: Future Prospects" n. 66 [40%] took part of a tasting session: two bread samples identical, except one was claimed to be supplemented with insect powder, e.g., "insect-labelled" bread, although it did not contain any insect ingredients. No gender impact. WTT is positively affected by behavioural intention. The belief of positive effects on health has a stronger influence on behavioural intentions when compared to beliefs about environmental protection and familiar taste. After the seminar, disgust factor and the fear of negative texture properties was reduced. [82] FNS was described by one third of the studies (Table 2). One study involved 88 subjects aged between 18 and 40 years, and included students and staff (43 males and 45 females) [11,80,81]. The participants came from different geographical areas of Italy (20% North East, 36% North West, 14% Central Italy and 30% Southern Italy) and the questionnaire included the FNS and WTT. At the end of the questionnaire, two insect-based products (two sweet jellies with visible or processed cricket) were tasted. The results confirmed that the intention to taste is the most decisive factor for predicting the behaviour of consuming a new insect-based product. This intention is significantly determined by food neophobia and males were more WTT new foods [11,80,81]. In this study 75% of participants tasted the products and, concerning the tasting session of the reviewed studies (Table 2), percentages between 23% and 94% of the individuals (in the majority of the studies University students) were reported. An online survey aimed at 3556 Italian university students aged 18-29 years old, found that 38% of the respondents were prone to consider that this food could be part of Italian diet. Reasonably, insect-food could be offered in a snack (where they would not be immediately recognizable) as a complementary source of proteins, considering that demand for proteins cannot be totally satisfied by the traditional livestock industry [74]. 
A study that evaluated WTE different insect-based food (cheddar cheese larvets, lollipops, chocolate covered scorpions, salt infused with chili and agave worm, dried crickets, baked grasshoppers and toasted scorpions) found that WTE was dependent on the form in which the products were presented [75]. Another analysis regarding Italy [76] showed that not only curiosity but also a focus on environmental benefits might be motivating factors to promote entomophagy amongst Italian consumers. The belief that insects have positive effects on the environment and relatively healthy and nutritious, increases the level of acceptance. Curiosity is also reported as a strong motivating factor [76] ( Table 2) in fact, it was found to be one of the most significant drivers for acceptance in Southern Italy, whereas disgust and food neophobia were related to low acceptance [86]. One study, conducted in 2015 on a sample of 45 consumers (24 females and 21 males, aged between 24 and 39 years, students or just graduates), revealed the major barriers to the acceptance of insects as food are low familiarity with insect-based ingredients, neophobia, and/or visibility of insects in the product [86]. A soft laddering interview (free conversation) was performed to discover the personal values linked to attributes of the product which were perceived as benefits by the consumer. Interviews were conducted in the area of Naples. Kelly's repertory grids (1955) were used and each consumer was shown three different imaginary products, similar to those already available in international online stores. From this study, it seems that curiosity is one of the most significant drivers for acceptance [86]. In another study conducted in South Italy, FNS significantly correlated with intention but not with disgust and the latter correlated significantly with intention [84]. The fear of insects and the idea that the taste might be disgusting, were the main barriers to the WTT entomophagy in Central Italy (Viterbo) [83] and gender also influenced consumer attitude ( Table 2). On the other hand, no gender impact was observed in a recent study conducted at the University of Pisa (Italy) amongst students attending a seminar titled "Insects as Food and Feed: Future Prospects" [82]. Disgust factor decreased after the seminar, but volunteers indicated that they were less likely to use the "insect-labelled" bread, which was claimed to be supplemented with insect powder, in the future, despite the higher overall liking. The perception of positive effects on health had a stronger influence on behavioural intentions when compared to beliefs about environmental protection and familiar taste. Despite bad taste being an important barrier to acceptance, disgust factor and the fear of negative texture properties were strongly reduced after the seminar generating a lower rejection. WTT was positively affected by behavioural intention. Students from South Italy [85] considered insect-based products either equivalent (the same WTP for the two versions of pasta) or slightly inferior (lower WTP in the case of cookies and chocolate) without information, whilst with information on benefits, consumers' WTP increased for all the products. Another study based in Italy, investigated the role that product attributes can have in driving the perceptions of consumers of insect-based products. To a sample of 135 individuals were shown a series of products cards describing the products and they were asked to express their opinion. 
All the respondents reacted more positively to products made out of insect flour compared to the ones made with whole insects. Moreover, there were no incisive differences between opaque and transparent packaging. Neither the effect of cacao flavor nor the high-protein claim were statistically significant. Finally, the effect of the environmental certification appears to be not important in the food decision [87]. In a study conducted in an international university (trilingual: English, German and Italian), despite using persuasion strategies (including grounding insects into flour, disguising insects with cocoa, other peers' reassuring statements on food safety, pleasant taste and availability) which positively influenced WTC [12], FNS negatively influenced both WTC and persuasion strategies [12]. In another study conducted in North Italy on 109 university students [77], more than two thirds of the subjects indicated they would taste edible insects if they had the opportunity and about half of the sample believed that entomophagy might become a culinary trend in Italy. However, high levels of education were shown to positively influence consumer attitude towards entomophagy [73,83]; therefore, opinions of university students could not be taken as being representative of the general population. Additionally, from a study conducted in North Italy and involving university students, employees and consumers from outside the university (223 females and 118 males-aged between 18-80 years), it was observed that younger people more readily accepted insects [73]. The authors, by grouping individuals according to FNS "low neophobia" (FNS scores ≤ 23), "medium neophobia" (FNS scores ≥ 24 and ≤ 41) and "high neophobia" (FNS scores ≥ 42), found a relationship between FNS and willingness to incorporate insects into diets and that environmental and nutritional benefits marginally affected the visual acceptability of insect-based foods. [73]. Although communication of both societal and individual benefits increased ITE, it has been reported that Danish individuals had higher ITE than Italians [60]. By using the EAQ (EAQ-I; EAQ-D; EAQ-F) differences were found between the Danes and Italians [88]. The EAQ includes three conceptual scores: disgust (EAQ-D), interest (EAQ-I) (including curiosity) and feeding animals (EAQ-F). The latter comprises the following sentence: "Using insects as feed is a good way of producing meat and I think it is fine to give insect-based feed to fish that are farmed for human consumption". In the Danish volunteers, EAQ-I was the main predictor and regression coefficient EAQ-D vs. WTE insects, was much smaller than that of EAQ-I. A negative relationship of EAQ-F vs. intention toward direct entomophagy was found only in the Danish population. Amongst Italians no such difference between predictive power of EAQ-D and EAQ-I on WTE was found [88]. Discussion and Conclusions Entomophagy is common in some Asian, American and African countries, whilst it is generally rejected by Western populations [12] and is often considered a "barbarian" tradition by Western culture [11]. Western food taboos, such as entomophagy, could be encouraged by cultural aspects in which insects are considered noxious products [89]. Western society generally regards insects as a food of emergency, not only associated with low prestige and indigent countries [76], but also with filth, danger or the psychological idea of possible contamination [72]. 
The Western reluctance towards entomophagy should not be classified as a mere form of disgust but as a form of acquired distaste derived from a lack of habit and exposure, not only to the flavour of insects but also to their visual, tactile, olfactory and auditory properties and their sensory representation on dishes. Distaste has been defined as 'a form of motivated food rejection triggered by the ingestion of unpleasant tasting substances, prototypically those that are bitter' [72], whose function is to avoid the ingestion of toxic compounds. Although it has been reported that curiosity is a strong motivational factor for trying insects [72], Italians tend to follow a diet based on the protective Mediterranean model [2]. Leon Rappoport, analysing the social and psychological components of food, claims: 'Consciously or not, when we eat we swallow not only a certain alimentary product, but also the concept, the culture, and the land to which it is associated with' [90]. It has been suggested that rational theoretical assumptions encouraging "healthier" alimentary choices via prescriptive and legislative measures (e.g., a sugar tax) are not the optimal strategies in the Italian context [2]. In Italy it has been reported that the belief in positive effects on health has a stronger influence on behavioural intentions than beliefs about environmental protection [82]. A strategy that is potentially successful in one country might not be suitable for another. For example, the UK sugar tax may improve public health [91], but it does not seem to be a useful strategy in Italy [2]. Briggs (2016) stated: 'Agriculture is responsible for up to 30% of the world's greenhouse-gas emissions, yet it is often overlooked in climate discussions and was barely mentioned at December's United Nations climate talks in Paris. Taxing food that is responsible for high greenhouse-gas emissions when it is produced and transported could benefit the health of both people and the planet. Sugar is a good start, but we can aim higher' [91]. Such statements should engage people involved in health-promotion campaigns for agri-food products and could alarm Italians, who are particularly attached to Mediterranean culinary traditions. It has been suggested that in Italy traditional values, such as the Mediterranean diet, might reduce the diffusion of foods based on genetically modified organisms [92]. Although organic aquaculture might be a new and important strategy for diversification, when labelling and certification are not taken into consideration the added value of the production method might not be perceived by final consumers, who show a higher WTP for the country of origin of sea bass than for the breeding method used [93]. Moreover, the local geographic origin of honey accounted for 72.9% of the log-likelihood, followed by price and organic production [94]. Geographic origin is also important when choosing bovine meat: Italian beef carrying the "Protected Geographical Indication" (PGI) label ranked first for celebratory occasions such as an anniversary or a meal with friends [95]. In a study that explored consumers' attitude towards cultured meat in Italy, people from northern Italy showed a significantly more positive perception of attributes such as safety and sustainability than respondents from central and southern Italy [96]. In Italy the Sardinian sheep-milk cheese "Casu Marzu" is well known and eaten, as are other regional cheeses, and some parts of Lepidoptera are eaten in Northern Italy [56]. 
In the present review, the studies considered were conducted in North [12,73,75-79], Central [82,83] and South Italy [84-86], and two studies involved volunteers from different Italian regions [11,74,80,81]. However, more research needs to be carried out to evaluate the effect of the cultural variations that exist among Italians of the North, Centre and South on food neophobia and the acceptance of insects. In conclusion, in order to introduce insects into the Italian diet, psychological motivation has to be enhanced.
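As a purely illustrative aside on the FNS-based grouping from [73] summarized earlier (scores of 23 or less classed as "low", 24-41 as "medium", and 42 or more as "high" neophobia), those cut-offs can be written as a small helper function; the function name and the sample scores below are hypothetical and are not taken from any of the cited studies.

```python
def fns_group(score: int) -> str:
    """Classify a Food Neophobia Scale (FNS) score using the cut-offs
    reported for the grouping in [73]: <=23 low, 24-41 medium, >=42 high."""
    if score <= 23:
        return "low neophobia"
    elif score <= 41:
        return "medium neophobia"
    else:
        return "high neophobia"


if __name__ == "__main__":
    # Hypothetical respondent scores, for illustration only.
    for s in (18, 30, 47):
        print(s, "->", fns_group(s))
```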
2020-06-11T09:10:58.839Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "d0896378695b5692b036c6d9086a2caad712b480", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2254-9625/10/2/46/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "97f745966b34a53d3e3cf1598a18536347548907", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Geography" ] }
3336126
pes2o/s2orc
v3-fos-license
Valproate induced hyperammonemic encephalopathy treated by haemodialysis Valproate (VPA)-induced hyperammonemic encephalopathy is an unusual, but serious, adverse effect of divalproex sodium (DVPX) treatment and if untreated can lead to raised intracranial pressure, seizures, coma, and eventually death. It can, however, be reversed if an early diagnosis is made. It is therefore extremely important to recognize it and discontinue DVPX treatment. Our patient developed sudden deterioration of sensorium, drowsiness, lethargy, and later severe comatose state after few days of starting DVPX with high levels of serum ammonia despite therapeutic levels of VPA and normal liver function test. He responded to hemodialysis, cerebral decongestants, and other intensive supportive measures. D ivalproex sodium (DVPX) is a stable coordination compound comprising sodium valproate (VPA) and valproic acid in a 1:1 molar relationship and formed during the partial neutralization of valproic acid with 0.5 equivalent of sodium hydroxide. [1] DVPX is an antiepileptic drug that is approved for the treatment of several types of seizures. [2] It is also an effective treatment option for bipolar affective disorder, [3] schizoaffective disorder, [4] neuropathic pain, [5] and for the prophylaxis and treatment of migraine headaches. [6] Valproate-induced hyperammonemic encephalopathy (VHE)/delirium with normal liver function tests (LFTs) is a relatively uncommon adverse effect. It may be mistaken for psychosis or worsening of mania leading to wrong diagnosis and improper management. It may be confused with delirious mania and may result in further increased dose of DVPX. [7] Serum ammonia levels should be monitored in all patients developing altered mental status after receiving VPA therapy. [8] in the absence of delusions or perceptual disturbances in a setting of clear sensorium. Young's Mania Rating Scale score was 47 (cutoff of 20 for hypomania and 25 for mania). Relevant investigations at the time of admission including complete blood count, LFT, renal function test, blood sugar, thyroid profile, electrocardiogram, serum electrolytes, and computed tomography (CT) scan brain were within normal limits. Urine for drug screen for Cannabis, opioids, cocaine, amphetamines, benzodiazepines, and barbiturates was negative. He was diagnosed as a case of bipolar affective disorder, current episode manic without psychotic symptoms (International Classification of Diseases, 10 th Edition [ICD 10] -F31.1) as per the diagnostic criteria of ICD-10, [9] and started on injection haloperidol (5 mg) + injection phenergan (25 mg) intramuscular SOS, tablet DVPX (1500 mg/day in divided doses), tablet risperidone (4 mg/day in divided doses), and tablet clonazepam (4 mg/day in divided doses). The patient continued to be irritable, hyperactive, overtalkative, displaying disruptive behavior over the next 1 week despite ensuring regular compliance to psychotropics. On day 10 of the hospitalization, he was noted to be drowsy but able to carry out his routine activities; hence, all his medications were withheld. On day 11 of hospitalization, drowsiness further worsened Glasgow coma scale (GCS 12/15) score, and he had irrelevant speech and disorientation. Clinical examination revealed the following: temperature -98.6°F, pulse -74/min, regular blood pressure (BP) -120/70 mmHg, pupils -normal in size and normally reacting to light, no rigidity, and plantars were flexor. All medications were withdrawn. 
Relevant urgent investigations revealed -creatine phosphokinase -113.5 IU/L (0-170 IU/L), normal LFT, and repeat CT scan brain. He was shifted to intensive care unit (ICU). By evening, his drowsiness further worsened (GCS-E1V1M3), temperature -98.6°F, Pulse rate (80-90/min), and BP (systolic 110-120 mmHg and diastolic 70-84 mmHg). His urgent serum ammonia level was 396 µmol/L (normal -11-32 µmol/L), serum VPA level was 67 (50-100 µg/ml), and other findings were pH -7.53, PO 2 -86, Na + -139 meq/L, and K + -3.6 meq/L. He was diagnosed as a case of nonhepatic hyperammonemic encephalopathy caused by VPA. He was promptly managed with hemodialysis, injection mannitol 100 mg intravenous (IV) 8 hourly, and ICU supportive care. On day 12, he was conscious but drowsy and partially oriented, tablet L-carnitine 1000 mg for every 12 h was also added while being continued on hemodialysis and injection mannitol. On day 14, he was conscious, obeying commands, but restless. His repeat serum ammonia levels showed a declining trend (44 µmol/L) and his hemodialysis was discontinued. His restlessness was subsequently managed with tablet lorazepam. He was started on tablet oxcarbazepine as a mood stabilizer, which was well tolerated and later discharged on maintenance oxcarbazepine (900 mg/day in divided doses). DISCUSSION Ammonia is a normal constituent of all body fluids. [10] At physiologic pH, it exists mainly as ammonium ion. [11] Reference serum levels are <35 µmol/L. [12] Excess ammonia is excreted as urea, which is synthesized in the liver through the urea cycle. [13] Various causes of hyperammonemia include congenital deficiencies of urea cycle enzymes, hepatic encephalopathies, renal or hepatic failure, congenital lactic acidosis, organic acidemias, drug induced (VPA, 5-fluorouracil, and salicylates), and Reye's syndrome. [14] Although VHE is rare, VPA frequently causes a rise in serum ammonia levels, usually resulting in asymptomatic hyperammonemia. Murphy and Marquardt studied the frequency of hyperammonemia in asymptomatic patients receiving valproic acid; plasma ammonia concentrations were measured in 55 patients receiving this drug and in 12 patients receiving other anticonvulsants. Twenty-nine of the 55 patients receiving valproic acid and none of the control patients had plasma ammonia levels above the normal range. [15] Hyperammonemia can have varied presentation from 4 days to 4 years after valproic acid therapy initiation. [16] The mechanism of valproic acid and its derivatives causing hyperammonemia is multifactorial. VHE pathogenesis is related to urea cycle defect mostly in the form of carbamoyl phosphate synthetase-1 inhibition leading to decreased utilization of ammonia followed by a hyperammonemic state. [17] VPA also reduces the hepatic synthesis of carnitine and increases its renal excretion, thereby precipitating hyperammonemia. [18] Hyperammonemic encephalopathy can lead to edema of astrocytes through glutamate uptake inhibition, which may lead to cerebral edema and neuronal injury. [19] There is apparently no link between the development of VHE and serum levels and doses of valproic acid. A relationship between daily doses of VPA and the appearance and severity of VHE has not been found. Serum VPA levels are within normal range in most VHE cases. [20] The Naranjo Adverse Drug Reaction Probability Scale [21] of this case scored 9 (definite adverse drug reaction). 
Risk factors for the development of hyperammonemia are a high initial dose (probably slightly higher in our case), long-term VPA therapy, concomitant medicines such as antipsychotics (as in our case) or anticonvulsants (topiramate, phenobarbitone, phenytoin, and carbamazepine) added to VPA, urea cycle disorders, low plasma carnitine levels, a protein-rich diet, and fasting. [20] Some of the unusual features of the present case are the sudden deterioration to a severe comatose state (GCS 5/15) and the absence of seizures while on DVPX; there are previously reported cases [22-25] in which there was rapid neurological deterioration that responded to hemodialysis [Table 1]. Principles of management include correction of the biochemical abnormalities, ensuring adequate nutritional intake, and compounds that increase the removal of nitrogen waste. Treatment involves withdrawal of VPA, cessation of protein and/or nitrogen intake, hemodialysis, and supportive care with parenteral intake of calories. Complete recovery generally occurs over a period of 24 h to a few days. L-carnitine supplementation has been shown to improve the symptoms of VPA-related toxicities. L-carnitine has also been shown to be effective in reducing ammonia levels and in improving the symptoms of hyperammonemia. It is generally safe and may be given orally or IV at a dose of 50-100 mg/kg/day. [19] There are currently no specific recommendations for screening people for asymptomatic hyperammonemia. This case report aims to caution psychiatrists that there should be a high index of suspicion for VPA-induced hyperammonemia in case a patient shows deterioration in clinical recovery or develops delirium/encephalopathy while on treatment with VPA, as hyperammonemia is a potentially reversible condition. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
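Purely as an arithmetic illustration of the weight-based L-carnitine dosing cited above (50-100 mg/kg/day), a minimal sketch for a hypothetical body weight follows; the example weight and the function name are assumptions, and this is not clinical guidance.

```python
def l_carnitine_daily_dose_range(weight_kg: float):
    """Return the (low, high) daily L-carnitine dose in mg for the
    50-100 mg/kg/day range mentioned in the discussion."""
    return 50 * weight_kg, 100 * weight_kg


if __name__ == "__main__":
    # Hypothetical 70 kg adult, for illustration only.
    low, high = l_carnitine_daily_dose_range(70)
    print(f"Daily dose range: {low:.0f}-{high:.0f} mg")
```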
2018-04-03T02:19:24.684Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "888b40d94a0ad61ff7c03cdc2b41c38f19bccaaf", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/ipj.ipj_37_16", "oa_status": "GOLD", "pdf_src": "WoltersKluwer", "pdf_hash": "70566f90caa415c5fa5399980c7e3a34bf044846", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
21510101
pes2o/s2orc
v3-fos-license
Nucleolin Maintains Embryonic Stem Cell Self-renewal by Suppression of p53 Protein-dependent Pathway* Background: Nucleolin is a multifunctional nucleolar protein. Its function in ESCs remains unknown. Results: Knockdown of nucleolin induces cell cycle arrest, apoptosis, and differentiation in ESCs by up-regulating p53 protein level. Conclusion: Nucleolin regulates ESC self-renewal through a p53-dependent pathway. Significance: Learning distinct functions of nucleolin is of great importance for elucidating the molecular basis of ESC self-renewal and differentiation. Embryonic stem cells (ESCs) can undergo unlimited self-renewal and retain pluripotent developmental potential. The unique characteristics of ESCs, including a distinct transcriptional network, a poised epigenetic state, and a specific cell cycle profile, distinguish them from somatic cells. However, the molecular mechanisms underlying these special properties of ESCs are not fully understood. Here, we report that nucleolin, a nucleolar protein highly expressed in undifferentiated ESCs, plays an essential role for the maintenance of ESC self-renewal. When nucleolin is knocked down by specific short hairpin RNA (shRNA), ESCs display dramatically reduced cell proliferation rate, increased cell apoptosis, and G1 phase accumulation. Down-regulation of nucleolin also leads to evident ESC differentiation as well as decreased self-renewal ability. Interestingly, expression of pluripotency markers (Oct4 and Nanog) is unaltered in these differentiated cells. Mechanistically, depletion of nucleolin up-regulates the p53 protein level and activates the p53-dependent pathway, at least in part, via increasing p53 protein stability. Silencing of p53 rescues G1 phase accumulation and apoptosis caused by nucleolin deficiency entirely, although it partially blocks abnormal differentiation in nucleolin-depleted ESCs. It is noteworthy that knocking down nucleolin in NIH3T3 cells affected cell survival and proliferation in a much milder way, despite the comparable silencing efficiency obtained in ESCs and NIH3T3 cells. Collectively, our data demonstrate that nucleolin is a critical regulator of ESC self-renewal and that suppression of the p53-dependent pathway is the major molecular mechanism underlying functions of nucleolin in ESCs. Embryonic stem cells (ESCs) can undergo unlimited self-renewal and retain pluripotent developmental potential. The unique characteristics of ESCs, including a distinct transcriptional network, a poised epigenetic state, and a specific cell cycle profile, distinguish them from somatic cells. However, the molecular mechanisms underlying these special properties of ESCs are not fully understood. Here, we report that nucleolin, a nucleolar protein highly expressed in undifferentiated ESCs, plays an essential role for the maintenance of ESC self-renewal. When nucleolin is knocked down by specific short hairpin RNA (shRNA), ESCs display dramatically reduced cell proliferation rate, increased cell apoptosis, and G 1 phase accumulation. Down-regulation of nucleolin also leads to evident ESC differentiation as well as decreased self-renewal ability. Interestingly, expression of pluripotency markers (Oct4 and Nanog) is unaltered in these differentiated cells. Mechanistically, depletion of nucleolin up-regulates the p53 protein level and activates the p53-dependent pathway, at least in part, via increasing p53 protein stability. 
Silencing of p53 rescues G 1 phase accumulation and apoptosis caused by nucleolin deficiency entirely, although it partially blocks abnormal differentiation in nucleolin-depleted ESCs. It is noteworthy that knocking down nucleolin in NIH3T3 cells affected cell survival and proliferation in a much milder way, despite the comparable silencing efficiency obtained in ESCs and NIH3T3 cells. Collectively, our data demonstrate that nucleolin is a critical regulator of ESC self-renewal and that suppression of the p53-dependent pathway is the major molecular mechanism underlying functions of nucleolin in ESCs. Embryonic stem cells (ESCs) 2 can undergo unlimited selfrenewal and retain the ability to differentiate into any cell type in the body, making them attractive for fundamental research and regenerative medicine (1). A distinct transcriptional hierarchy, a poised epigenetic state, and a specific cell cycle profile distinguish ESCs from somatic cells (2). A comprehensive understanding of the molecular mechanisms underlying these special properties of ESCs is required to achieve the goal of clinical applications. Mouse ESCs can be maintained in an undifferentiated self-renewal state in the presence of leukemia inhibitory factor (LIF). Withdrawal of LIF results in extensive ESC differentiation with reduced expression of pluripotencyassociated factors (Oct4, Sox2, and Nanog), which play crucial roles in the maintenance of ESC properties (3)(4)(5)(6). Recently, the Oct4-centered transcriptional and protein interaction networks have been intensively investigated (7)(8)(9). Roles of epigenetic regulators in the control of ESC self-renewal and pluripotency have also been reported (10 -12). In contrast, the regulation of cell cycle progression and survival of ESCs has not been highly regarded, although they are important determinants for maintaining self-renewal. Mouse ESCs proliferate extraordinarily rapidly and take only 8 -12 h to progress through a whole cell cycle, because of a lack of G 0 phase and G 1 checkpoint as well as a shortened G 1 phase (2,13). However, the factors responsible for the distinct cell cycle characteristics and survival of ESCs remain largely unknown. Continued research and identification of critical regulators for the balance between ESC self-renewal and lineage commitment are essential for efficient differentiation of ESCs into clinically useful cells as well as reprogramming of somatic cells into the pluripotent state. Recently, several studies have indicated that nucleolar proteins play pivotal roles in controlling stem cell proliferation and viability. For example, nucleostemin was shown to participate in controlling cell proliferation and survival in ESCs and adult stem cells (14 -16). In addition, studies from our group and other groups have shown that nucleophosmin 1 (NPM1) is essential for mouse ESC growth (17,18). Recently, we reported that another nucleolar protein Ly1 antibody reactive clone (LYAR) is required for cell proliferation and viability in ESCs (19). Interestingly, LYAR interacts with and inhibits the autocleavage activity of a nucleolar phosphoprotein, nucleolin, to stabilize its protein level. Nucleolin is known to be highly expressed in actively dividing cells (20), although it degrades because of auto-cleavage when cells become quiescent (21)(22)(23). Since the first description (24), numerous studies have focused on the structure, localization, and functions of nucleolin (25)(26)(27). 
This multiple domain-containing protein has a broad range of localizations and interacts with various RNAs, DNAs, and proteins, all of which are compatible with its multiple functions involved in the regulation of ribosome biogenesis and maturation, cell cycle, proliferation, apoptosis, transcription, and nucleogenesis (27,28). Despite the great progress, the role of nucleolin in controlling ESC self-renewal has not been well defined. In this study, we generated tetracycline (Tc)-inducible nucleolin short hairpin RNA (shRNA) expressing ESC lines to study the function and underlying mechanism of nucleolin in mouse ESCs. Our data demonstrate that nucleolin is essential for maintaining the self-renewal ability of ESCs, because of its role in regulating cell cycle progression, proliferation, as well as prevention of apoptosis and differentiation. Mechanistically, knockdown of nucleolin resulted in an elevation in the protein level and activity of p53, a key transcription factor in controlling cell cycle progression and apoptosis. The elevated p53 protein level was, at least in part, due to the enhanced p53 protein stability. Activated p53 pathways led to the accumulation of cells in G 1 phase, increased apoptosis, and obvious cell differentiation. Functionally, silencing of p53 expression completely canceled out G 1 phase accumulation and apoptosis caused by depletion of nucleolin, whereas the abnormal differentiation phenomenon observed in nucleolin-deficient ESCs was partially rescued when p53 was knocked down. In contrast, nucleolin silencing affected cell proliferation and survival slightly in the differentiated cell type, NIH3T3 cells. Therefore, this study uncovers an important new regulator in the maintenance of the ESC self-renewal ability and establishes the functional link between nucleolar protein nucleolin and transcriptional factor p53 in mouse ESCs. Cell Culture-All cell lines used in this study were cultured as described previously (19). Tc-inducible shRNA ESC lines or NIH3T3 cell lines were cultured in the medium for CGR8 cells or NIH3T3 cells, supplemented with 1 g/ml puromycin (Sigma) and 50 g/ml Zeocin (Invitrogen) in the absence or presence of Tc at the concentration of 100 -300 ng/ml. Embryoid Body (EB) Formation-CGR8 ESCs were plated at the density of 1 ϫ 10 6 per 10-cm Petri dish and suspended to form EB without LIF for the indicated time. Cell Transfection-The establishment of inducible shRNA stable cell lines has been described previously (18). siRNA oligos were transfected into shRNA EGFP and shRNA nucleolin ESCs by Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. Quantitative Real Time PCR (qPCR)-Total RNA was extracted from cells according to the manufacturer's instructions using TRIzol reagent (Invitrogen) and reverse-transcribed into cDNA using oligo(dT) 15 and ReverTra Ace reverse transcriptase (Toyobo). qPCR was performed in the ABI PRISM 7900 using SYBR Green PCR master mix (ABI), and the data were analyzed by the Sequence Detection System 2.3 software (ABI). Each sample was analyzed in triplicate with Gapdh as the internal control. The primer sequences for different genes are listed in Table 1. Gene name Sequences Oct4 caspase 3 (1:1000, Cell Signaling Technology), or ␣-tubulin (1:2000, Sigma) and horseradish peroxidase-linked secondary anti-rabbit (Santa Cruz Biotechnology) or anti-mouse antibodies (Sigma). 
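The qPCR analysis described above reports triplicate measurements normalized to Gapdh and analyzed with the Sequence Detection System 2.3 software; the paper does not spell out the quantification formula, so the following is only a generic sketch of the common 2^(-ddCt) relative-quantification approach, with hypothetical Ct values that are not data from the study.

```python
from statistics import mean


def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Generic 2^(-ddCt) relative quantification: normalize the target gene
    to Gapdh, then compare the treated sample with the control sample.
    All inputs are lists of triplicate Ct values (hypothetical here)."""
    d_ct_sample = mean(ct_target) - mean(ct_gapdh)              # dCt, treated
    d_ct_control = mean(ct_target_ctrl) - mean(ct_gapdh_ctrl)   # dCt, control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)


if __name__ == "__main__":
    # Hypothetical triplicate Ct values, for illustration only.
    fold = relative_expression([24.1, 24.3, 24.2], [18.0, 18.1, 17.9],
                               [26.0, 25.9, 26.1], [18.1, 18.0, 18.2])
    print(f"Relative expression (fold change vs. control): {fold:.2f}")
```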
Determination of Turnover of p53-Tc-inducible shRNA ESCs were cultured with or without Tc for 2 days, then treated with cycloheximide (30 g/ml, Sigma), and harvested at the indicated time points. The p53 protein levels were detected by Western blotting and quantified using ImageJ software. Bromodeoxyuridine (BrdU) Incorporation Assay-Cells were incubated with BrdU (Sigma) at the concentration of 10 M for 30 min (ESCs) or 1 h (NIH3T3 cells) before harvesting. The amount of incorporated BrdU was detected by a Flow Cytometer (BD Biosciences) using a FITC-conjugated anti-BrdU monoclonal antibody (BD Biosciences) according to the manufacturer's instructions. Cell Cycle Analysis-ESCs or NIH3T3 cells were fixed with 70% ethanol for 24 h at 4°C, followed by RNase (Sigma, 100 g/ml) treatment for 30 min and propidium iodide (PI) (Sigma, 50 g/ml) staining. The stained cells were analyzed by a Flow Cytometer (BD Biosciences) to determine the cell cycle distribution pattern. Annexin V/PI Staining-Tc-inducible shRNA ESCs were cultured with or without Tc for 3 days. Then these cells were harvested, and 10 5 cells were taken and stained with FITC-conjugated annexin V and PI (BD Biosciences) according to the manufacturer's instructions for flow cytometric analysis. Alkaline Phosphatase (AKP) Staining-Tc-inducible shRNA ESCs were seeded at a low density in 6-well plates. After culturing with or without Tc for the indicated days, the colonies were fixed and stained with an AKP staining kit (Vector Laboratories) by a standard protocol. 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium Bromide (MTT) Assay-NIH3T3 cells were seeded onto 96-well plates at a density of 300 cells per well. 12 h after plating, 10 l per well of 5 mg/ml MTT was added at the indicated time points, and 100 l of dissolving buffer containing 10% SDS, 5% isopropyl alcohol, and 0.01 M HCl was added after incubation for 24 h. The absorbance was measured at 570 and 630 nm as the absorption and reference wavelength, respectively, after the cells were incubated for additional 12 h. Microarray Analysis-Tc-inducible shRNA ESCs were treated with or without Tc for 3 days and then lysed in TRIzol for RNA extraction. RNA amplification for array analysis was performed with an Illumina TotalPrep RNA amplification kit (Ambion). Five milligrams of total RNA per sample were amplified and hybridized to Illumina Sentrix Mouse 6 BeadChip arrays according to the Illumina's instruction manual. Raw intensity data from BeadChip were exported to GeneSpring GX 9 and subjected to hierarchical clustering or pathway analysis. Three independent sets of samples were applied to microarray analysis. Statistical Analysis-All values are shown as means Ϯ S.D. The Student's t test was used to determine the significance of differences in comparisons. Values of p Յ 0.05 were considered to be statistically significant. Nucleolin Is Highly Expressed in Undifferentiated ESCs-We began by examining the expression pattern of nucleolin during ESC differentiation. When cultured in suspension without LIF, ESCs can spontaneously form aggregates called EBs, which contain heterogeneous cell types representing all three embryonic germ layers (30). As shown in Fig. 1A, the transcript level of nucleolin quickly decreased upon EB formation at day 4 and remained at a low level up to day 12. 
In line with ESC differentiation, the expression levels of the pluripotency marker Oct4 reduced dramatically, and the levels of differentiationassociated markers (Cdx2, Gata6, brachyury, and nestin) increased significantly during EB formation. In addition, the steady-state level of nucleolin proteins reduced evidently upon retinoic acid (RA) treatment, similar to the reduction in the level of Oct4 protein in RA-treated cells (Fig. 1B). These findings suggest that the expression of nucleolin is associated with the ESC status. Nucleolin Is Required for Cell Proliferation and Survival of ESCs-To study the function of nucleolin in ESCs, we established ESC lines stably integrated with Tc-inducible shRNAs targeting nucleolin sequences. To rule out the off-target effect of the shRNA, two sequences (shRNA nucleolin 1 and shRNA nucleolin 2) targeting the nucleolin coding sequence were used. The efficiency of shRNA induced by Tc treatment was validated by both qPCR and Western blot analyses (Fig. 2, A and B). Clearly, Tc treatment remarkably reduced both mRNA and protein levels of nucleolin in either shRNA nucleolin ESC line but did not affect nucleolin expression in shRNA EGFP-expressing cells, indicating that the silencing of nucleolin expres- DECEMBER 16, 2011 • VOLUME 286 • NUMBER 50 sion was specifically brought about by Tc-induced expression of shRNA nucleolin but not by Tc treatment per se. Roles of Nucleolin in Maintaining ESC Properties Our previous study reported reduced cell growth and increased apoptosis when nucleolin expression was down-regulated in ESCs (19). To further define the role of nucleolin in controlling ESC proliferation, BrdU incorporation assays were performed. Significantly, knockdown of nucleolin reduced the proportion of BrdU ϩ cell population in both shRNA nucleolinexpressing ESC lines (Fig. 2C). In agreement with this finding, both nucleolin-deficient ESC lines had a higher percentage of cells in G 1 phase with a reduction in S phase cell proportion (Fig. 2D). These results suggest that nucleolin is critical for ESC proliferation and cell cycle progression. Next, we determined the effect of nucleolin silencing on early apoptosis using annexin V/PI staining approach. As shown in Fig. 2E, Tc-induced down-regulation of nucleolin led to an obvious increase in the annexin V ϩ /PI Ϫ early apoptotic cell population. Consistently, Western blot analysis detected an elevation in the protein level of activated caspase 3 (17 kDa) in nucleolin-deficient ESCs (Fig. 2F), arguing for the requirement of nucleolin in the maintenance of ESC viability. Taken together, data obtained in this study demonstrate that the decrease in ESC growth rate caused by nucleolin depletion indicated in our previous study could be attributed to both increased cell apoptosis and reduced cell proliferation as a result from the cell accumulation in the G 1 phase. Nucleolin Plays a Critical Role in Maintaining ESC Self-renewal-In addition to the abnormality in cell cycle progression and apoptosis, we observed evidently differentiated cell morphology in shRNA nucleolin ESCs under the self-renewal culture conditions, suggesting that nucleolin might play a role in the maintenance of ESC at an undifferentiated state. We conducted ESC colony-forming assays (4, 5) using Tc-inducible ESCs to evaluate the self-renewal ability of ESCs. 
Strikingly, knockdown of nucleolin expression for 3 days resulted in extensive cell differentiation, manifested as negative AKP staining and the differentiated cell morphology, whereas only few cells survived when the shRNA nucleolin ESCs were treated with Tc for 7 days (Fig. 3A). In contrast, shRNA EGFP ESCs formed typically compact AKP-positive colonies regardless of the presence or absence of Tc (Fig. 3A). This result suggests that nucleolin is required for ESC self-renewal, and its depletion could induce ESC differentiation and concomitant or consequent cell death. We then asked whether the differentiation induced by nucleolin silencing was specific to certain lineages or not. Transcript levels of various pluripotency-and differentiation-associated markers in shRNA nucleolin and shRNA EGFP ESCs treated with or without Tc for 3 days were analyzed. Data from qPCR analysis (Fig. 3B) indicated significant activation of all germ layer markers examined, including Cdx2 (trophectoderm), Gata6 (endoderm), brachyury (mesoderm), Fgf5 (primitive ectoderm), and nestin (ectoderm), upon silencing of nucleolin, implicating that ESC differentiation induced by nucleolin depletion is not biased to a specific lineage. Unexpectedly, the depletion of nucleolin did not reduce the expression of pluripotency genes (Oct4 and Nanog) as often seen in most ESC differentiation models. To verify the continued expression of pluripotency-associated markers in the nucleolin-depleted and substantially differentiated ESCs, their protein levels were examined. As shown in Fig. 3C, silencing of nucleolin did not affect the steady-state levels of Oct4, Nanog, and Sox2 proteins either. These observations indicate that down-regulation of nucleolin disrupts ESC self-renewal without altering the expression of pluripotencyrelated genes. Depletion of Nucleolin Activates p53-dependent Pathway in ESCs-To understand the molecular mechanism responsible for the function of nucleolin in ESCs, we carried out a genome-wide microarray analysis to identify genes or pathways associated with silencing of nucleolin. RNA samples of two shRNA nucleolin ESC lines and shRNA EGFP ESCs cultured with or without Tc for 3 days were subjected to microarray analysis. The expression profiles uncovered 803 genes whose expression levels changed by Ն2-fold (p values Յ0.05) when nucleolin expression was knocked down. These genes were then subjected to the hierarchical clustering analysis (Fig. 4A) and enriched pathway analysis (Fig. 4B) using the GeneSpring GX9 software. Clustering analysis indicated that the number of activated genes (670) was much larger than the number of repressed genes (258) in nucleolinsilenced ESCs (Fig. 4A). The activated genes mostly participate in regulation of cell apoptosis, proliferation, and differentiation processes. The pathway analysis revealed that the p53-dependent G 1 /S DNA damage checkpoint pathway and the p53-dependent G 1 DNA damage-response pathway were enriched after nucleolin expression was silenced (Fig. 4B), suggesting that p53-dependent pathways might be involved in the function of nucleolin in ESCs. To validate the effect of nucleolin depletion on the activation of p53-dependent pathways, we examined the transcript levels of several p53 downstream genes related to its role in cell cycle regulation and apoptosis, including P21, cyclin G1, Bax, Wig1, Apaf1, etc. 
Data from qPCR analysis showed that all of the p53 target genes examined were significantly upregulated when Tc induced depletion of nucleolin for 1 day (Fig. 4C), supporting that p53-dependent pathways were indeed activated in shRNA nucleolin ESCs, whereas the transcript level of p53 itself was only slightly elevated when Tc was added for 3 days (Fig. 4C). Consistent with activated p53-dependent pathways, we found that p53 protein levels increased gradually and significantly, whereas nucleolin protein levels decreased in a Tc treatment duration-dependent manner (supplemental Fig. S1A). The quantification data shown in Fig. 4D clearly revealed the inverse relationship between the protein levels of nucleolin and p53. To understand how nucleolin regulates p53 protein levels, we examined the influence of nucleolin down-regulation on p53 protein stability, as p53 protein levels are primarily controlled by regulation of its protein stability. The turnover rate of p53 protein in control cells and shRNA nucleolin cells was compared in the presence of the protein synthesis inhibitor cycloheximide. We found that the stability of p53 proteins was significantly increased in nucleolin down-regulated ESCs (Fig. 4E and supplemental Fig. S1B). The result indicates that elevated p53 protein levels in shRNA nucleolin cells were, at least in part, due to the increased p53 protein stability.

FIGURE 3. Depletion of nucleolin leads to ESC differentiation toward multiple lineages with continuous expression of pluripotency-associated factors. A, colony forming assay was used to determine the self-renewal ability of shRNA nucleolin and shRNA EGFP ESCs cultured under the indicated conditions. "-7d" indicates the absence of Tc for 7 days; "-4d+3d" indicates the absence of Tc for 4 days followed by Tc treatment for 3 days; "+7d" indicates continuous treatment with Tc for 7 days. B, transcript levels of pluripotency- or differentiation-related markers were determined by qPCR in shRNA nucleolin and shRNA EGFP ESCs treated with or without Tc for 3 days. All values are shown as means ± S.D. of results from three independent experiments. * denotes p < 0.05 and ** denotes p < 0.01. C, protein levels of Oct4, Nanog, and Sox2 were detected by Western blot (WB) analysis in ESCs as shown in B.

Silencing of p53 Rescues Nucleolin Depletion-induced G1 Phase Accumulation and Apoptosis Completely-The contribution of activated p53-dependent pathways to nucleolin depletion-induced ESC phenotypes remains unknown, although regulation of p53 protein levels by nucleolin has been previously described in different types of human cells (31)(32)(33). To address this issue, we knocked down p53 expression using the synthesized p53 siRNA duplex oligoribonucleotides (oligos). Two p53 siRNA oligos (p53i-1 and p53i-2) as well as a negative control (N.C.) oligo were utilized. As shown in Fig. 5A, both p53i-1 and p53i-2 could substantially reduce the transcript level of p53 to less than 10% of that in N.C.-transfected ESCs, without affecting the nucleolin level. Moreover, silencing of p53 expression entirely negated the activation of p53 target genes induced by silencing of nucleolin (Fig. 5A). We then examined the cell cycle profile of shRNA nucleolin 1 ESCs and shRNA EGFP ESCs transfected with N.C., p53i-1 and p53i-2, respectively, in the absence or presence of Tc. As shown in Fig. 
5B, both p53 siRNA oligos completely abolished the abnormal cell cycle profile induced by nucleolin depletion, suggesting a primary role of p53 for the accumulation of cells in G 1 phase. Moreover, nucleolin depletion-induced apoptosis totally vanished as indicated by the reduction of the active form of caspase 3 protein levels when either p53 siRNA oligo was introduced into these cells (Fig. 5C). These findings support the notion that the activated p53-dependent pathway is a major contributor to the enhanced apoptosis detected in nucleolindepleted ESCs. Down-regulation of p53 Partially Blocks Nucleolin Depletioninduced ESC Differentiation-Having shown the predominant role of the activated p53 pathway in the abnormal cell cycle profile and enhanced apoptosis when nucleolin was silenced, we wanted to know whether depletion of p53 could also block nucleolin silencing-mediated disruption of ESC self-renewal. As expected, based on the cell morphology, we found that silencing of nucleolin resulted in severe differentiation in N.C.transfected ESCs (Fig. 6A). Interestingly, silence of p53 in Tctreated shRNA nucleolin 1 ESCs partially but obviously recov- DECEMBER 16, 2011 • VOLUME 286 • NUMBER 50 ered the differentiated cell morphology (Fig. 6A), suggesting that p53 may be also responsible for the disrupted self-renewal in the nucleolin-depleted ESCs. Roles of Nucleolin in Maintaining ESC Properties To further characterize the role of p53 in nucleolin depletion-mediated disruption of ESC self-renewal, we examined transcript levels of various differentiation-associated markers as well as Oct4 and Nanog. Data from qPCR analysis showed that depletion of p53 completely negated the activation of Cdx2 and Gata6 expression induced by nucleolin deficiency (Fig. 6B). In contrast, it could only partially prevent the nucleolin depletion-induced activation of Fgf5 but could not recover the expression level of nestin (Fig. 6B). Therefore, it appears that activation of the p53-dependent pathway was in part associated with ESC differentiation caused by the silencing of nucleolin. In addition, the mRNA levels of Oct4 and Nanog were not affected by silencing of p53, excluding the possibility that rescue of ESC differentiation under the condition of nucleolin depletion is due to the increase in expression of pluripotency-associated genes. Depletion of Nucleolin Does Not Affect Cell Survival and Proliferation Evidently in NIH3T3 Cells-To determine whether the role of nucleolin in regulation of cell proliferation and viability is unique to ESCs, we generated Tc-inducible shRNA nucleolin NIH3T3 cell lines and examined whether depletion of nucleolin in NIH3T3 could result in similar phenotypes to those observed in ESCs. First, we conducted BrdU incorporation assay to examine the impact of nucleolin deletion on the proliferation ability in NIH3T3 cells, and we found that, after a 3-day Tc treatment, the percentage of BrdU ϩ cells was comparable between control NIH3T3 cells and nucleolin knockdown NIH3T3 cells (Fig. 7A), although the Tc-induced depletion of nucleolin is as efficient as that in ESCs (Fig. 7, B and E). Analysis of cell cycle profile revealed that nucleolin deficiency did not affect the cell cycle progression in NIH3T3 cells when they were treated with Tc for 3 days (Fig. 7C). Moreover, there were few obvious apoptotic cells in culture, and the protein level of active form caspase 3 was undetectable in NIH3T3 cells after induction of nucleolin shRNA for 3 days (Fig. 
7B), indicating that short term nucleolin depletion did not induce apoptosis in NIH3T3 cells. To find out whether knockdown nucleolin for a long time would cause more obvious defects in NIH3T3 cells, we performed MTT assays for measuring the cell growth rate in a 7-day time course. As shown in Fig. 7D, long term induction of nucleolin shRNA expression resulted in a reduced cell growth rate in NIH3T3 cells, although the extent of reduction was much less than that observed in ESCs. These observations suggest that nucleolin might not be so important for the proliferation and survival in NIH3T3 cells as it was in pluripotent ESCs. We next asked whether nucleolin would control p53 protein levels as it did in ESCs. Consistent with its minor influence on cell proliferation and cell survival, knockdown of nucleolin in NIH3T3 cells slightly elevated the p53 protein level (Fig. 7B) and the transcription level of its downstream genes (Fig. 7E). These findings reveal that ESCs and differentiated cells response differently to the silence of nucleolin. DISCUSSION Nucleolin is an intensively studied nucleolar protein highly expressed in rapidly dividing cells, and it has been demonstrated to participate in numerous biological processes. Our previous work suggested that nucleolin is situated in the nucleolus of mouse ESCs and might play an important role in maintaining cell growth of undifferentiated mouse ESCs (19). Other than that, no information about the function of nucleolin in ESC has been reported to date. In this study, using the shRNAmediated depletion approach, we systematically investigated the function of nucleolin in controlling cell proliferation, apoptosis, cell cycle progression, and differentiation of ESCs, and we found that knockdown of nucleolin resulted in enhanced ESC apoptosis and reduced cell proliferation, which probably due to G 1 phase accumulation and reduction in S phase cell population. In addition, depletion of nucleolin resulted in extensive ESC differentiation into multiple lineages. Thus, our data demonstrate that nucleolin is required for maintenance of ESC self-renewal, adding a new variety to the already long list of nucleolin functions and a new factor required to sustain the ESC identity. Numerous reports have implicated the involvement of nucleolin in regulation of cell proliferation and cell growth in many cell types (28,32). However, the molecular mechanisms responsible for its function remain ambiguous. An elevated protein level of p53 was observed when nucleolin was downregulated in human HeLa and primary fibroblast cells (32) as well as in MCF-7 cells after DNA damage (33). The increase in p53 protein levels has been considered as a result of the increased p53 mRNA stability or protein translation in nucleolin-depleted human cells. Interestingly, it was also reported that nucleolin stabilized p53 proteins through inhibiting Hdm2 in unstressed human cells (31). In this study, we found that nucleolin silencing could elevate p53 protein levels partially through increasing its protein stability. Therefore, the interplay between these two factors is complex and probably is nucleolin dosage-and cell context-dependent. However, what functional roles the elevated p53 protein level played in the phenotypes caused by the nucleolin depletion in these studies was not tested. 
Here, we first report that down-regulation of nucleolin in unstressed mouse ESCs is accompanied by the activation of p53-dependent pathway based on the fact that the protein level of p53 and the expression of its major target genes are markedly elevated in nucleolin-deficient ESCs. Most importantly, we experimentally demonstrated that most, if not all, of the phenotypes observed in ESCs depleted of nucleolin could be attributed to activated p53-dependent pathways, because knocking down p53 expression could rescue the majority of cell defects brought about by nucleolin depletion. Therefore, p53 is likely a major mediator downstream of nucleolin to control unique properties of ESCs. Further investigations are required to identify other factors associated with nucleolin deficiency-induced ESC differentiation, as down-regulation of p53 could only partially abrogate the ESC differentiation phenotypes. In addition to the control of p53 protein levels by nucleolin, nucleolin depletion-induced cellular stress and, in particular, nucleolar disruption could also activate p53-dependent pathways in ESCs. As a major component of the nucleolus, nucleolin probably is required for the survival and proliferation of most, if not all, mammalian cell types. However, it is known that its expression level is higher in tumors and actively dividing cells (34). Here, we report that nucleolin protein levels are higher in undifferentiated ESCs than in differentiated ESCs and are rapidly down-regulated upon differentiation, implying that nucleolin might have a distinct role in controlling ESC proliferation and survival. Supporting this notion, silencing of nucleolin to a comparable extent in differentiated cell type, NIH3T3 fibroblast cells, did not activate p53-dependent pathways and affect cell survival as well as proliferation significantly, as it did in ESCs. Furthermore, down-regulation of nucleolin could not enhance the protein level of p53 markedly in MCF-7 cells in the absence of ionizing irradiation (33). It appears that undifferentiated ESCs are more sensitive to the level of nucleolin than differentiated cell types. The role of p53 in controlling ESC self-renewal and differentiation as well as somatic cell reprogramming has been a focus point for researchers. Induction of ESC differentiation by p53 has been reported in both mouse and human ESCs when they were exposed to DNA-damaging agents (35,36). The pro-differentiation role of p53 in mouse ESCs was thought mediated through its repression of Nanog, one of the core transcription factors for pluripotency and self-renewal of ESCs (5). Unexpectedly, we did not observe any reduction in the expression of Nanog, Oct4, and Sox2 at all, despite the obvious differentiation of cell morphology as well as the elevated expression levels of various differentiation associated markers in nucleolin-depleted ESCs. A similar phenomenon was detected in Foxd3depleted ESCs, in which precocious differentiation along multiple lineages occurred with continuous expression of three core pluripotency-associated transcription factors (37). These data suggest that expression of the pluripotency factor alone is not sufficient to sustain ESC self-renewal. However, our results suggest that p53 triggers ESC differentiation in a Nanog-independent manner. Considering the evident higher percentage of cells in G 1 phase in nucleolin-depleted ESCs, we postulate that p53-mediated accumulation of cells in G 1 phase may be a determining factor leading to ESC differentiation. 
It is well known that ESCs possess distinctive cell cycle features (13). In particular, a very short cell cycle and the absence of early G 1 in mouse ESCs allow them to avoid differentiation-inducing signals. Further supporting this, Singh and Dalton (38) recently proposed that lengthening of G 1 phase appears to be a cause rather than a consequence of ESC differentiation (39). It is possible that nucleolin depletion-mediated activation of p53 pathways causes ESCs to acquire the early G 1 phase and thus be able to receive differentiation-inducing stimuli. This may represent the unique response of ESCs to the nucleolin depletion-activated p53-dependent pathways. Collectively, our study describes an essential role for nucleolin in control of ESC self-renewal and provides experimental evidence of p53 as a major target of nucleolin functions in maintaining the unique cell cycle profile and survival in ESCs. In addition, our study emphasizes the notion that the unique cell cycle feature of ESC is an important mediator in choosing between self-renewal and lineage commitment. Thus, maintaining the unique cell cycle features of ESCs appears to be a critical factor in preventing differentiation.
2018-04-03T02:54:44.533Z
2011-10-19T00:00:00.000
{ "year": 2011, "sha1": "84a6c98177283160f1e293ef799166da70ad0361", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/286/50/43370.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "b6fa8614cd08c488fe23167b4f66f82101ee28bd", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
53309155
pes2o/s2orc
v3-fos-license
Bilateral simultaneous multifocal primary tuberculous pyomyositis in a renal transplant recipient-a rare presentation Tuberculosis (TB) is a serious infection with a high mortality of 22-31% in solid organ transplantation, which is higher than in the general population [1]. Primary infection is rare and in the majority of cases TB is due to reactivation of latent infection. The diagnosis of TB in renal transplant recipients is a diagnostic dilemma due to the atypical and extrapulmonary presentations. Primary muscular TB without the involvement of the bones is rarer and limited to case reports [2]. This is mainly due to the fact that striated muscle is most resistant to the bacteria because of its poor oxygen content, high lactic acid and a paucity of reticuloendothelial tissue [3]. Key words: Pyomyositis; acid fast bacilli; renal transplantation. CASE REPORTS Case report A 29-year-old man who was well following a live donor renal transplant performed three years back presented with a complaint of thigh pain and pain on walking for two weeks. Examination was unremarkable with normal neurovascular status of the limbs. Routine investigations including muscle enzymes were normal. Worsening myalgia warranted a muscle biopsy, which showed non-specific inflammatory changes. A few weeks later the pain worsened with induration of the skin and the subcutaneous tissues of the thigh. CT showed no bony destruction but revealed multiloculated heterogeneous collections in the posterior compartments of both thighs (Fig 1). MRI showed extensive posterior compartmental, inter- and intramuscular abscesses occupying the region from the gluteus maximus to the hamstrings with high signal on T2-weighted images (Fig 2). Chest radiograph was normal. Due to the extensive nature of the abscess, open surgical drainage was done via bilateral small vertical skin incisions with extensive subcutaneous fasciotomy (Figure 3). Bacterial and fungal cultures and microscopy for AFB were negative. However, PCR for TB was positive. An extended course of antituberculous drugs was given, resulting in a slow, steady recovery. 
Discussion This is the first reported case of primary TB involving the same muscular compartment of both lower limbs simultaneously. TB pyomyositis has been documented both in immunocompetent individuals, via syringe-transmitted infections, and in immunocompromised patients such as those with HIV, systemic lupus erythematosus, leukaemia and transplanted patients [4]. The classic inflammatory signs are absent, making an early diagnosis difficult. Low-grade fever and myalgia are less common, and the finding of a non-tender mass is the most common clinical sign. The pathogenesis of primary muscular TB has not been clearly established. All published reports of primary TB of the gluteal muscles cite contaminated syringes as the source. Haematogenous spread and direct extension from lung, bone or other tissues are potential mechanisms. In this case we found no other primary focus, nor was there a history of trauma or injections. The pathogenesis remained a mystery. Interestingly, this patient had the entire posterior compartmental muscles involved, including the glutei and hamstrings. Also surprisingly, both limbs were involved more or less to the same extent. The muscle groups commonly involved are the psoas, biceps, and gluteus maximus. Interestingly, although extensive myofibrillar destruction occurs in pyomyositis, it is uncommon to see muscle enzyme elevation, and normal enzyme levels do not exclude the diagnosis, as highlighted in this case. CT, MRI and scintigraphic scanning with gallium-67 can be helpful in the diagnosis of occult abscesses. MR imaging is more accurate in identifying and evaluating the loco-regional extension of pyomyositis. Gadolinium enhancement during MRI, if performed early, will differentiate the pre-abscess invasive stage from the suppurative stage [5]. CT-guided percutaneous drainage of the abscess is effective for both diagnosis and treatment in early stages. In this case CT-guided drainage was not performed. Standard antituberculous treatment yields a good response in most cases. This case highlights the importance of having a high index of suspicion of extrapulmonary TB and a low threshold for imaging, since it is the cornerstone of making an early diagnosis in order to prevent significant morbidity. L N Senevirathna, B Lindsey, D Nicol. Department of Nephrology, Urology & Renal Transplant, Royal Free Hospital, London, UK. Correspondence: Seneviratne LN. Email: neerosene@yahoo.com. The Sri Lanka Journal of Surgery 2013; 31(3):57-58.
2018-10-15T23:40:09.027Z
2014-01-23T00:00:00.000
{ "year": 2014, "sha1": "9e9b869aeb863cedf31b3d90d2123e9130974d73", "oa_license": "CCBY", "oa_url": "http://sljs.sljol.info/articles/10.4038/sljs.v31i3.6433/galley/5035/download/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9e9b869aeb863cedf31b3d90d2123e9130974d73", "s2fieldsofstudy": [ "Medicine", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
117784794
pes2o/s2orc
v3-fos-license
A regularity theory for multiple-valued Dirichlet minimizing maps This paper discusses the regularity of multiple-valued Dirichlet minimizing maps into the sphere. It shows that even at branched point, as long as the normalized energy is small enough, we have the energy decay estimate. Combined with the previous work by Chun-Chi Lin, we get our first estimate that m-2 dimensional Hausdorff measure of singular set is zero. Furthermore, by looking at the tangent map and using dimension reduction argument, we show that the singular set is at least of codimension 3. Introduction The regularity of harmonic maps between Riemannian manifolds has been a fascinating subject in recent years. The very first general result on this is due to [SU1], in which they proved that a bounded, energy minimizing map u : M n → N k is regular (in the interior) except for a closed set S of Hausdorff dimension at most n − 3. One important technique they use in the paper is for lowering the dimension of S under the condition that certain smooth harmonic maps of spheres into N are trivial. This can be checked in some interesting cases, for example if N has nonpositive curvature. They showed S = ∅, i.e, any energy minimizing map into such a manifold is smooth. Use that method, they [SU2] are also able to reduce the dimension of S if N is a sphere. The result is as follows: Theorem 1.1 ( [SU2], Theorem 2.7). For k ≥ 2, define a number d(k) by setting d(2) = 2, d(3) = 3 where [·] denotes the greatest integer in a number. If n ≤ d(k), then every energy minimizing map from a manifold M of dimension n into S k ⊂ R k+1 is smooth in the interior of M . If n = d(k)+1, such a map has at most isolated singularities, and in general the singular set is a closed set of Hausdorff dimension at most n − d(k) − 1. This same question in liquid crystal configurations setting(n = 3, k = 2) has been studied independently by Hardt-Kinderlehrer-Lin using blowing-up argument in [HKL]. A few years later, Theorem 1.1 was extended to stable-stationary harmonic maps u ∈ H 1 (Ω, S k ), k ≥ 3 by Hong-Wang [HW]. Stable-stationary harmonic maps are harmonic maps with zero domain first variation and nonnegative range second variation. Examples of stable-stationary harmonic maps include energy minimizing maps. In a recent work of Lin-Wang [LW], they improved the theorems by [SU2], [HW] for 4 ≤ k ≤ 7 as follows: Theorem 1.2 ( [LW], Theorem 1). Definẽ For k ≥ 3, let u ∈ H 1 (Ω, S k ) be a stable-stationary harmonic map, then the singular set S of u has Hausdorff dimension at most n −d(k) − 1. We can also talk about the energy minimizing problems in the setting of multiple-valued functions (maps) thanks to the monumental work [AF]. After Almgren gave suitable definitions of derivative and Sobolev space for multiplevalued functions, the question of minimizing energy among functions with the prescribed boundary data becomes legitimate. Furthermore, he was able to show that any Dirichlet energy minimizing multiple-valued function is regular in the interior and has branch point of codimension at least 2. Although the primary purpose in [AF] of introducing multiple-valued functions is to approximate almost flat mass-minimizing integral currents by graphs of Dirichlet minimizing multiple-valued functions, the subject of multiple-valued functions in the sense of Almgren turns out to be also interesting in its own. See some recent works [CS], [GJ], [LC1], [LC2], [MP], [ZW1], [ZW2], [ZW3]. 
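Because the argument in this paper ultimately converts a small-energy decay estimate into Hölder continuity via Morrey's growth lemma, it may help to record one standard single-valued form of that lemma here for orientation; the constants and the multiple-valued analogue actually used in the sequel are those of [AF], so the statement below is only an illustrative reference version.

```latex
% One standard (single-valued) form of Morrey's Dirichlet growth theorem.
% Setting assumed here: u \in W^{1,2}(B^m_1(0), \mathbb{R}^N) and 0 < \alpha \le 1.
\noindent\textbf{Lemma (Morrey growth).}
Suppose there is a constant $C$ such that
\[
  \int_{B_r(x)} |Du|^2 \, dy \;\le\; C\, r^{\,m-2+2\alpha}
  \qquad \text{for every ball } B_r(x) \subset B^m_1(0).
\]
Then $u$ has a H\"older continuous representative on each smaller ball
$B^m_\rho(0)$, $\rho < 1$, with
\[
  |u(x) - u(y)| \;\le\; C' \, |x - y|^{\alpha}, \qquad x, y \in B^m_\rho(0),
\]
where $C'$ depends only on $m$, $N$, $\alpha$, $\rho$, and $C$.
```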
In the same spirit of [AF], Chun-Chi Lin (in [LC1]) considered the energy minimizing multiple-valued map into spheres. Specifically, he showed that for points not in the branch set B 0 , as long as the normalized energy is small, the map is regular there (see more of this discussion in section three). We will continue his work by examining the local behavior of points in B 0 . The main idea is to use the blowing-up analysis at this point. The blowing-up sequence converges strongly to a Dirichlet minimizing function which is regular due to [AF]. Hence it guarantees the energy of the original map near this point satisfies some growth condition. Combining our result with the result in [LC1], we conclude that the minimizing map is regular at any point as long as the normalized energy there is small enough thanks to Morrey's growth lemma. This gives us the first m − 2 estimate. Then, using dimension reduction argument, we get our main result: Theorem 1.3. Let u ∈ Y 2 (B m 1 (0), Q(S n−1 )) be a strictly defined, Dirichlet minimizing map. Then it is Hölder continuous away from the boundary except for a closed subset S ⊂ B m 1 (0) such that dim(S) ≤ m − 3. The assumption that we are looking at points in B 0 is important in the blowing-up process because we need to get suitable constant of the form Q[[b]] for some b ∈ S n−1 in order for the subtraction between two Q-tuples to make sense. There are some other interesting questions which are not addressed in this paper, and still open to the author's knowledge. A first one is whether our result is an optimal one. We are hoping to have some similar results as in [SU2], [LW]. Some new techniques are expected because [SU2] [LW] both use Bochner formula, which is no longer available in the multiple-valued functions setting. A second one is the regularity for stationary harmonic multiple-valued functions. There are already some positive results for this in the two dimensional case, see [LC2]. Another one is the branching behavior. Chun-Chi Lin (in [LC1]) has done some work on this. But there was some problem with that. Basically speaking, the monotonicity formula for frequency function he used in his proof actually does not necessarily hold for multiple-valued maps. Some new idea is probably needed to get around this obstacle. It is my great pleasure to thank my thesis advisor Professor Robert Hardt for his support, encouragement and kindness during the years at Rice. A lot of this work was stimulated by [HKL]. Preliminaries Most of the notations, definitions and known results about multiple-valued functions that we need can be found in [ZW1]. The reader is also referred to [AF] for more details. We also use standard terminology in geometric measure theory, all of which can be found on page 669-671 of the treatise Geometric Measure Theory by H. Federer [FH]. For reader's convenience, here we state some useful results not included in [ZW1]. The proofs of them can be found in [AF]. provided this limit exists. (1) For convenience, we will use ∂f /∂r to represent Df (x, x/|x|) for any multiple-valued function f whenever the derivative exits. (2) Noticing that the squeeze deformation comes from a domain deformation, the squeeze deformation formula still holds for multiple-valued maps. (2) Whenever 0 < δ < 1 and p, q ∈ B m 1−δ (0), in particular, f |B m 1−δ (0) is Hölder continuous with exponent ω 2.13 . 
(3) Corresponding to each bounded open set A such that ∂A is a compact m − 1 dimensional submanifold of R m of class 1, there is a constant 0 < Γ A < ∞ with the following property. Whenever g ∈ Y 2 (A, Q) is Dir minimizing and p, q ∈ A, 3 Some Remarks on [LC1] In [LC1], Chun-Chi Lin introduced the set , for any small enough radius r > 0, b r ∈ R n and AV r, He proved that for a point not in B 0 , if the normalized energy of f is small enough there, then the energy of f near this point satisfies some growth condition. The key ingredients are the induction on Q and finding a comparison map. In order to use the induction, we need J ≥ 2. This is guaranteed by our assumption that the point we are looking at is not in B 0 . He did not explain that in his paper. Here is the detail: for any b ∈ R n . We may as well just assume that Now instead of letting q * ∈ Q * be the point in Q * such that . With these points q 1 , q 2 , · · ·, q Q , 1 < K < ∞ and constant s 0 to be chosen later, we find J ∈ {1, 2, · · ·, Q}, k 1 , k 2 , · · ·, k J ∈ {1, 2, · · ·, Q}, distinct points p 1 , · · ·, p J ∈ R n as in ) is a fixed positive number. So we can choose s 0 small enough to guarantee that J ≥ 2. We also have to show that the rest of the proof in [LC1] is still valid after we choose the different q * . This is because the only place where q * is used in [LC1] is to show We still have this because Another thing that worth mentioning is in the proof of Lemma 4 in [LC1], more precisely (2.12). He was claiming that g j is Hölder continuous hence having growth condition on the energy. But in fact since his work is only on points outside B 0 , and we do not know whether the origin is inside or outside of the set B 0 for each g j , the induction seems to be a problem. However, using our result on branched points, we can overcome this. Let's look at our result Theorem 7.1 in advance (notice that our proof does not depend on induction or the result in [LC1]) , which says that at branched point, for some constant C depending on the dimensions and the total energy D(1). Now we claim that for each g j in [LC1], there exists a positive constant α such that This is because if the origin is not in the corresponding set B 0 of g j , then the induction argument gives us the above estimate. Otherwise, our result applies. Finally, we modify the end of proof of Lemma 4 in [LC1] as following: (reason that the original proof did not work is that by considering two cases, the integration did not necessarily work) Multiply both sides by r −7(m−1) , we get while the last inequality follows because 7(m − 1) > m − 2 + α. Maximum Principle for Multiple-Valued Dirichlet Minimizing Functions Lemma 4.1. Given a positive number M , and ǫ > 0, define the retraction function Π M as follows Theorem 4.1. If f : B m 1 (0) → Q is strictly defined and Dir minimizing with boundary data g : ∂B m Proof. We may assume that M := sup x∈∂B m which has boundary data g and whose energy is no more than that of f because Lip(Π M ) ≤ 1. Take any point y ∈ S, because of the continuity of f , there is a neighborhood Therefore f must be constant in U (otherwise, its energy is nonzero. But the energy of h in U is strictly smaller than that of f , contradicting to the fact that f is Dir minimizing). So S is also open in B m 1 (0). Therefore, S = B m 1 (0), which is a contradiction to the assumption that f |∂B m 1 (0) = g. Hybrid Inequality From now on, m ≥ 2 and n ≥ 2. Lemma 5.1. If u : B m 1 (0) → Q(S n−1 ) is strictly defined and Dir minimizing, then for a.e. 
0 < r < 1, Proof. For minimizing maps, we still have the squeeze formula: gives us the desired inequality. Proof. Were the theorem false, there would be, for each 0 < θ < 1/2, a sequence Let Π be the projection onto the unit sphere in R n , i.e. Π(x) = x |x| . It is easy to check that when we restrict our attention to the set U ǫ = {x : 1−ǫ < |x| < 1+ǫ}, the Lipschitz constant of Π is no more than 1/(1 − ǫ). where L is defined in Corollary 6.1. Consider the following blowing-up sequence The energy of each one is one by the definition of ǫ i . As for their L 2 norms, we estimate as follows: Hence the L 2 -norm of the blowing-up sequence in B m L (0) is uniformly bounded. (Technically, we should therefore from now on, focus on B m L (0) instead of B m 1 (0). But since the regularity is only a local property, we may just stick to B m 1 (0) for convenience.) We use Compactness Theorem 4.2 in [ZW1] to get a subsequence(for convenience, whenever we have to take a subsequence, we do not change the notation) such that for some w ∈ Y 2 (B m 1 (0), Q). Blowing-up the Constraint Take the norm of both sides, Let i go to infinity, we know w ∈ Y 2 (B m 1 (0), Q(P )) for some n − 1 dimensional plane P passing through the origin and perpendicular to b. Strong Convergence and Minimality Now we want to show that w is Dir minimizing in Y 2 (B m 1 (0), Q(P )) and the convergence of w i in Y 2 is actually strong. Let B m ρ0 (y) ⊂ B m 1 (0), and let δ > 0, θ ∈ (0, 1) be given. Choose any M ∈ {1, 2, · · ·} such that lim sup and note that if ǫ ∈ (0, (1 − θ)/M ), we must have some integer l ∈ {1, 2, · · ·, M } such that This is because that otherwise we get that ρ 2−m 0 B m ρ 0 (y) |D(u i /ǫ i )| 2 ≥ M δ for all sufficiently large i by summation over l, contrary to the definition of M . Thus choosing such an l, letting ρ = ρ 0 (θ + (l − 1)ǫ), and noting that ρ(1 + ǫ) ≤ ρ 0 (θ + lǫ) < ρ 0 , we get ρ ∈ [θρ 0 , ρ 0 ) such that By weak convergence, we have We can not use the Luckhaus-type Theorem 3.2 in [ZW1] now, because ǫ i w + Q[[b i ]] ∈ Q(S n−1 ). But we can use the technique "(Π a |S n−1 ) −1 • Π ′′ a as we did in proving the hybrid inequality to get a map denoted as where C depends only on m, n, Q. Now let v ∈ Y 2 (B m θρ0 (y), Q(P )) such that v = w in a neighborhood of ∂B m θρ0 (y). Therefore Since δ was arbitrary, we have Therefore, w is minimizing on B m θρ0 (y), and in view of the arbitrariness of θ and ρ 0 , this shows that w is minimizing on all balls B m ρ (y) with B m ρ (y) ⊂ B m 1 (0). Finally to prove that the convergence is strong we note that if we use v = w as above, we can conclude |Dw| 2 + Cδ and hence, in view of the arbitrariness of θ and δ, for each ρ 1 < ρ 0 . Evidently it follows from this(keeping in mind the arbitrariness of ρ 0 ) that we can evidently select a subsequence which converges strongly to Dw on B m ρ (y). Since this holds for arbitrary B m ρ (y) ⊂ B m 1 (0), it is then easy to see(by covering B m 1 (0) by a countable collection of balls B m ρj (y j ) with B m ρj (y j ) ⊂ B m 1 (0)) that there is a subsequence such that D(u i /ǫ i ) converges strongly locally in all of B m 1 (0). Let's choose θ small enough such that θ ω2.13− m + ω 2.13 k ≤ 1/4K. This is possible because it is equivalent to Noting that θ ≥ 2 −1−k , the right side of above one is greater than which is bounded from below although when θ goes to zero, k goes to infinity. Thus for i sufficiently large enough,we have contradicting the choice of u i . Energy Decay Theorem 7.1 (Energy decay). 
If u ∈ F is Dir minimizing, B m where ǫ and θ are as in the Energy Improvement. Proof. Let u θ i R ≡ u(θ i Rx), i = 0, 1, · · ·. It is easy to check This is because u R ∈ F ,and E 1 (u R ) = R 2−m Dir(u, B m R (0)) ≤ ǫ 2 by our assumption. Hence we can use the energy improvement to the function u R to get that. Claim: E θ (u θR ) ≤ θ 2ω2.13 ǫ 2 . Obviously, u θR ∈ F , moreover, Hence using the energy improvement to function u θR , we get Continuing the process, we get for k = 1, 2, 3, · · ·. Given 0 < r ≤ R, choose k such that θ k+1 R < r ≤ θ k R. Theorem 8.1. Let u ∈ Y 2 (B m 1 (0), Q(S n−1 )) be a strictly defined, Dirichlet minimizing map. Then it is Hölder continuous away from the boundary except for a closed subset S ⊂ B m 1 (0) such that H m−2 (S) = 0. Proof. Let Obviously, S is closed and H m−2 (S) = 0 (see for example Lemma 2.1.1 in [LY]). Let's look at a point a ∈ B m 1 (0) ∼ S. We may assume a = 0. Let ǫ be the constant in the Energy Improvement, and k = k(Q, m, n) be the constant in the "small energy regularity" theorem in [LC1]. For any b ∈ B m R (0), We have two possibilities: Case 1: b ∈ B 0 . By the "small energy regularity" theorem in [LC1], we have where β is the constant given in [LC1]. 9 Dimension Reduction 9.1 Monotonicity Formula is Dir minimizing, although ξ • u is not necessarily harmonic, we still have the following results: Consider the domain variation We should have d ds | s=0 Energy of ξ • u s = 0. Proof. Just apply the argument of ( [SL], §2.4) to the function ξ • u to get and notice that |D v (ξ • u)| = |D v u|. Remark 9.1. (1) From above, ρ 2−m B m ρ (x) |Du| 2 is an increasing function of ρ for ρ ∈ (0, ρ 0 ), and hence the limit as ρ → 0 of ρ 2−m B m ρ (x) |Du| 2 exists; this limit is denoted as Θ u (x). It is also easy to see that the density Θ u is upper semi-continuous on B m 1 (0). (2) Another important additional conclusion, which we see by taking the limit as σ → 0 in the monotonicity formula, is that B m ρ (x) R 2−m | ∂u ∂R | 2 < ∞ and Definition of Tangent Maps Let B m ρ0 (y) with B m ρ0 (y) ⊂ B m 1 (0), and for any ρ > 0 consider the scaled function u y,ρ defined by u y,ρ (x) = u(y + ρx). Properties of Tangent Maps Let ρ j ↓ 0 be one of the sequences such that the re-scaled maps u y,ρj → ϕ as described above. Since u y,ρj converges in energy to ϕ, we have, after setting ρ = ρ j and taking limits on each side of (1) as j → ∞, Thus any tangent map of u at y has constant scaled energy and equal to the density of u at y. Furthermore, we apply the monotonicity formula to ϕ to get So that ∂ϕ/∂R = 0 a.e, and since ϕ ∈ Y 2 (R m , Q(S n−1 )) it is correct to conclude from this, by integration along rays, that ϕ(λx) ≡ ϕ(x) ∀λ > 0, x ∈ R m Theorem 9.2. y ∈ reg u ⇔ Θ u (y) = 0 ⇔ ∃ a constant tangent map ϕ of u at y Proof. The first part of the statement is easily obtained from Theorem 8.1. The second part comes from (2).
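The monotonicity formula invoked in Section 9 appears above only in fragments; for an energy minimizing map the standard form (assumed here, not transcribed from the original, and obtained from the domain variation exactly as in the cited argument of [SL], §2.4) reads

\[
\sigma^{2-m}\!\int_{B^m_\sigma(x)}|Du|^2
\;-\;
\rho^{2-m}\!\int_{B^m_\rho(x)}|Du|^2
\;=\;
2\int_{B^m_\sigma(x)\setminus B^m_\rho(x)} R^{\,2-m}\Bigl|\frac{\partial u}{\partial R}\Bigr|^2 dx,
\qquad 0<\rho<\sigma,
\]

with R = |y - x| the radial distance. From this identity the existence of the density Θ_u(x) = lim_{ρ→0} ρ^{2-m}∫_{B^m_ρ(x)}|Du|^2, its upper semi-continuity, and the homogeneity (∂φ/∂R = 0, hence φ(λx) = φ(x)) of tangent maps follow as described in the text.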
2019-04-12T09:20:45.662Z
2006-08-07T00:00:00.000
{ "year": 2006, "sha1": "358c394a6da087f344dbc5abe0f20afe241c1960", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "358c394a6da087f344dbc5abe0f20afe241c1960", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
213986937
pes2o/s2orc
v3-fos-license
Daily Photovoltaic Power Prediction Enhanced by Hybrid GWO ‐ MLP, ALO ‐ MLP and WOA ‐ MLP Models Using Meteorological Information : Solar energy is a safe, clean, environmentally ‐ friendly and renewable energy source without any carbon emissions to the atmosphere. Therefore, there are many studies in the field of solar energy in order to obtain the maximum solar radiation during the day time, to estimate the amount of solar energy to be produced, and to increase the efficiency of solar energy systems. In this study, it was aimed to predict the daily photovoltaic power production using air temperature, relative humidity, total horizontal solar radiation and diffuse horizontal solar radiation parameters as multi ‐ tupled inputs. For this purpose, grey wolf, ant lion and whale optimization algorithms were integrated to the multilayer perceptron. In addition, the effects of sigmoid, sinus and hyperbolic tangent activation functions on the prediction performance were analyzed in detail. As a result of overall accuracy indictors achieved, the grey wolf optimization algorithm ‐ based multilayer perceptron model was found to be more successful and competitive for the daily photovoltaic power prediction. Furthermore, many meaningful patterns were revealed about the constructed models, input tuples and activation functions. Introduction According to the Renewables 2019 Global Status Report [1], the global renewable power capacity reached 2.378 GW by the end of 2018 and more than 33% of world's total power generation was covered by renewable energy sources. During 2018, new capacity additions accounted for 55% from solar photovoltaic power, 28% from wind power, and 11% from hydropower. Especially, solar photovoltaic power achieved the high penetration level with around 100 GW addition. As a result, solar photovoltaic power became the world's fastest-growing renewable energy in 2018. Despite being the most competitive option for electricity generation, it is still needed to predict the photovoltaic power generation for energy trading. Li et al. tuned a support vector machine with a hybrid improved multi-verse optimizer for photovoltaic output prediction. Historical power generation data and weather type were processed, and the MSE value was decreased by at least 0.0012 [2]. Behera et al. applied an accelerated particle swarm optimization-based extreme learning machine to predict photovoltaic power, and the MAPE accuracy was obtained as 1.4440% [3]. Eseye et al. developed a wavelet-particle swarm optimization-support vector machine model based on SCADA data and meteorological information, and the NMAE value was found as 0.4 [4]. Koster et al. characterized the photovoltaic reference systems for regionalized photovoltaic power prediction, and the MD value was reduced 1.1% of the This paper is organized as follows: Section 2 explains the hybrid prediction models developed for daily photovoltaic power prediction. Section 3 introduces the detailed prediction results in terms of different accuracy measures. Finally, conclusions are provided in Section 4. Hybrid Prediction Models Developed In this section, multilayer perceptron and grey wolf, ant lion and whale optimization algorithms, which are utilized in the stage of developing the hybrid prediction models, are explained in detail. The architecture of the training sample is shown in Figure 1. 
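As a rough illustration of the hybrid scheme summarized in Figure 1, the sketch below shows how a population-based optimizer can be wrapped around a single-hidden-layer perceptron: each candidate solution is a flat vector of weights and biases, and the fitness returned to the optimizer is the training error to be minimized. The array shapes, the run_optimizer interface in the final comment, and the choice of sigmoid output are illustrative assumptions, not details taken from the paper.

import numpy as np

def sigmoid(x):
    # Logistic activation, values in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def mlp_predict(theta, X, n_hidden):
    # Decode a flat parameter vector into one hidden layer plus one output neuron.
    n_in = X.shape[1]
    k = n_in * n_hidden
    W1 = theta[:k].reshape(n_in, n_hidden)
    b1 = theta[k:k + n_hidden]
    W2 = theta[k + n_hidden:k + 2 * n_hidden]
    b2 = theta[-1]
    H = sigmoid(X @ W1 + b1)          # hidden layer
    return sigmoid(H @ W2 + b2)       # normalized PV power in [0, 1]

def fitness(theta, X, y, n_hidden):
    # Mean absolute error over the training samples (minimized by the optimizer).
    return np.mean(np.abs(y - mlp_predict(theta, X, n_hidden)))

# A generic metaheuristic (GWO, ALO or WOA) repeatedly calls `fitness` while moving
# a population of `theta` vectors inside the chosen bounds, e.g. (hypothetical call):
# best_theta = run_optimizer(lambda t: fitness(t, X_train, y_train, 7),
#                            dim=dim, bounds=(-10, 10), agents=20, iters=250)

The dimension of theta is n_in*n_hidden + 2*n_hidden + 1; the agent count, bounds and iteration budget in the comment correspond to the 3-tupled configuration reported later in the text.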
Grey wolf, ant lion and whale optimization algorithms provide the biases and weights of multilayer perceptron, and receive the R 2 , MAE and MAPE values for training samples. The variables of multilayer perceptron such as weights and biases are sent to the grey wolf, ant lion and whale optimization algorithms in a series of values for training. Later, the mentioned optimization algorithms recursively change the weights and biases in order to minimize the average error of all training samples. It should be noted that the raw data used in this study were taken from DKA Solar Center in Australia [22]. It contains a total of 365 one-day measurements covering air temperature, relative humidity, total horizontal solar radiation, diffuse horizontal solar radiation and photovoltaic power production parameters. The units of them are assigned as °C, %, W/m 2 , W/m 2 and kW, respectively. Multilayer Perceptron Multilayer perceptron is one of the feed-forward artificial neural networks. Feed-forward artificial neural networks consist of 3 layers, which are called input, hidden and output. Neurons are in the form of regular layers from input to output. There is only a link from one layer to the next layers. The input layer is responsible for transferring the external data to the hidden layer. The hidden layer is responsible for sending the data from input layer to the output layer. In the output layer, the data from hidden layer are processed to produce the output. Therefore, the data coming to the input of the feed-forward artificial neural network are transmitted to the cells in the hidden layer without any change. It is then processed through the output layer and transferred to the external environment, respectively. The structure of multilayer perceptron with a single hidden layer is illustrated in Figure 2. The sigmoid, hyperbolic tangent and sinus activation functions used between the layers of multilayer perceptron are given in Equations (1)-(3), respectively [23]. The sigmoid activation function produces the values between 0 and 1, whereas hyperbolic tangent and sinus activation functions produce the values between −1 and 1. In addition, for the purpose of increasing the consistency of the total dataset, the data are reduced to the range between 0 and 1 by using the min-max normalization method given in Equation (4) [24]. In addition, we used the persistence reference model, which is also known as Naïve Predictor [25,26] and which is widely used for the benchmark tests [27][28][29], in order to compare with other models in this study. In this reference model, the forecasted value at time t + 1 is equal to the value at time t. In other words, the persistence reference model is only based on the linear correlation between the present and the future photovoltaic power values. The improvement percentage formula is given in Equation (5), where is the relevant error of hybrid model and is the relevant error of persistence method. Grey Wolf Optimization Algorithm The grey wolf optimization algorithm mimics the social hierarchy and hunting behavior of grey wolves [30]. Grey wolves mostly live in groups and their group size is between 5 and 12 members on average. There are alpha, beta, delta and omega species in which social dominance decreases respectively. The alphas are the most dominant wolves that govern the group best. The betas are the second-level wolves that help the alphas in decision-making or other activities of the group. 
Omegas are the wolves at the disposal of other wolves that dominate them. If a wolf is not alpha, beta or omega, it is called as the delta. The deltas govern the omegas while serving the alphas and betas. Therefore, the most appropriate solution in the grey wolf optimization algorithm is considered as the alpha (α). The most appropriate second, third and fourth solutions after the alphas are considered as beta (β), delta (δ) and omega (ω), respectively. The prey encircling behavior of grey wolves is modeled with Equations (6) and (7). In these equations, t represents the number of current iterations, ⃗ and ⃗ represent the coefficient vectors, ⃗ represents the position vector of the prey, and ⃗ represents the position of a grey wolf. ⃗ and ⃗ vectors are calculated using Equations (8) and (9). During the iteration, the value of ⃗ is reduced linearly from 2 to 0 and ⃗ and ⃗ are the random vectors in the range [0, 1]. To model the hunting behavior of grey wolves, alpha, beta and delta are taken as the top three best solutions by assuming that they have better knowledge about the potential position of the prey. It is then ensured that other search agents update their positions according to the position of the best search agent. The following equations are used for these operations. In these equations, ⃗ is a random value in the range [−2a, 2a]. | | 1 forces the grey wolves to attack their prey, while | | 1 forces the grey wolves to move away from the prey to find a more appropriate prey. Finally, the grey wolf optimization algorithm is ended by fulfilling a termination criterion. Ant Lion Optimization Algorithm The ant lion optimization algorithm mimics the interaction between ant lions and ants in their traps [31]. The life cycle of ant lions consists of two main stages: larvae and adulthood. Natural life cycles of them are up to three years. They spend most of this time as the larvae, and only 3-5 weeks of their life is spent with the adulthood. Since ants move stochastically when looking for food in nature, a random walk is selected by using Equations (13) and (14). In these equations, represents the cumulative sum, represents the maximum number of iterations, is the step of random walk, is a stochastic function and represents a random number generated with a uniform distribution in the range of [0, 1]. Since each search space has a boundary, Equations (13) and (14) cannot be used directly to update the position of ants. In order to keep the random walk of ants in the search space, normalization is performed at each iteration by using Equation (15). (15) In this equation represents the minimum of the random walk of the ith variable, represents the maximum of the random walk of the ith variable, represents the minimum of the ith variable at tth iteration and represents the maximum of the ith variable at tth iteration. The random walks of ants are affected by the traps of ant lions. Equations (16) and (17) are used for modelling this assumption mathematically. In these equations, c represents the minimum of all variables at tth iteration, represents the maximum of all variables at tth iteration and represents the position of jth ant lion selected at tth iteration. Ant lions throw sand out of the center of the pit when they realize that an ant is trapped. This behavior causes the trapped ant to slip down. To mathematically model this behavior, the radius of the random walk of ants in a hyper-sphere is adaptively reduced using Equations (18) and (19). 
In these equations, 10 , where t represents the current iteration, T represents the maximum number of iterations, and w represents a constant determined according to the current iteration. When ant reaches the bottom of the pit, and when it is caught in the ant lion's jaw, the final stage of hunting takes place. After this stage, the ant lion draws the ant into the sand and consumes it. This process is modeled using Equation (20). (20) In this equation, indicates the position of ith ant at tth iteration. On the other hand, in each iteration, the most suitable ant lion obtained so far is recorded as the elite and it is assumed that all ants walk randomly around a selected ant lion through the roulette wheel in order to be affected by the movement of all ants with the elite. The elite becomes like Equation (21). (21) In this equation, represents the random walk around the ant lion selected by the roulette wheel at tth iteration, and represents the random walk around the elite at the tth iteration. Whale Optimization Algorithm The whale optimization algorithm mimics the hunting behavior of humpback whales [32]. In the hunting strategy, which is called as the bubble-net feeding method, humpback whales first dive down to a certain depth. Then, they begin to form bubbles spirally around the prey and swim to the surface. In this way, they both conceal themselves and feed by keeping their prey in the bubble-net. Since the location of the optimal design in the search space is not known in advance, the whale optimization algorithm assumes that the best available candidate solution is target hunting or near optimal. Once the best search agent is identified, other search agents try to update their location according to the best search agent. This behavior is modeled using Equations (6)-(9), similar to the grey wolf optimization algorithm. In addition, during the iteration, the value of ⃗ is reduced from 2 to 0, and the shrinking encircling mechanism in the bubble-net feeding method is realized. On the other hand, Equations (22) and (23) are used for the spiral position update in the bubble-net feeding method. By means of these equations, the spiral motion between the humpback whale position and the prey position is modeled. In these equations, ′ ⃗ represents the distance of ith whale to the hunt (the best solution so far), b is a constant for defining the logarithmic spiral shape and is a random number in the range [−1, 1]. In addition, Equation (24) is used for simultaneously modelling the shrinking encircling mechanism and the spiral position updating of humpback whales around the prey. In this equation, is a random number in the range [0,1]. As in the grey wolf optimization algorithm, the whales search globally in the case of | | 1, while the elite whale is selected and other whales update their positions according to the elite whale in the case of | | 1. Daily Photovoltaic Power Prediction In this section, grey wolf, ant lion and whale optimization algorithms are integrated to the multilayer perceptron algorithm for the daily photovoltaic power prediction. Sigmoid, sinus and hyperbolic tangent activation functions are used in the multilayer perceptron algorithm. The results belong to 2 activation functions, which provide the best prediction performance, are given for each hybrid prediction model developed. 
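Because several of the vector symbols above were lost in extraction, the following short sketch restates the grey wolf position update of Equations (6)-(12) in code form. It is the generic textbook GWO step, assumed to match the variant used in the paper; `fitness` is the MLP training-error function sketched earlier, and `a` is the coefficient that decreases linearly from 2 to 0 over the iterations.

import numpy as np

def gwo_step(positions, fitnesses, a, bounds):
    # positions: (n_agents, dim) array of candidate weight/bias vectors.
    order = np.argsort(fitnesses)
    alpha, beta, delta = positions[order[:3]]      # three best wolves (Eqs 10-12)
    new_positions = np.empty_like(positions)
    for i, X in enumerate(positions):
        X_parts = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(X.size), np.random.rand(X.size)
            A = 2 * a * r1 - a                     # Eq (8): component-wise in [-a, a]
            C = 2 * r2                             # Eq (9): component-wise in [0, 2]
            D = np.abs(C * leader - X)             # Eq (6): distance to the leader
            X_parts.append(leader - A * D)         # Eq (7): move relative to the leader
        new_positions[i] = np.clip(np.mean(X_parts, axis=0), *bounds)
    return new_positions

ALO and WOA plug into the same outer loop with their own position updates (random walks around roulette-selected ant lions and the elite, or the shrinking-encircling and spiral moves of Equations (22)-(24)); only the update rule above changes.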
In addition, meteorological parameters of air temperature (TA), relative humidity (HR), total horizontal solar radiation (SRTH) and diffuse horizontal solar radiation (SRDH) are used as the multi-tupled input data. The prediction results obtained are compared in terms of the coefficient of determination, mean absolute error and mean absolute percentage error measures. In the optimization algorithms, which used 4-tupled meteorological input, the number of search agents and the values of lower and upper bounds were assigned as 20, −20 and 20, respectively. In this model, we used nine hidden nodes for the multilayer perceptron algorithm. In the optimization algorithms, which used 3-tupled meteorological inputs, the number of search agents and the values of lower and upper bounds were defined as 20, −10 and 10, respectively. In this model, we used seven hidden nodes for the multilayer perceptron algorithm. In the optimization algorithms, which used 2-tupled meteorological inputs, the number of search agents and the values of lower and upper bounds were assigned as 20, −15 and 15, respectively. In this model, we used five hidden nodes for the multilayer perceptron algorithm. The maximum number of iterations in all optimization algorithms was defined as 250. These characteristic assignments/definitions were determined as a result of the experimental studies. In addition, each hybrid prediction algorithm was run 10 times independently in order to eliminate the unexpected (stochastic) cases. All experiments are executed on a 2.2 GHz Intel (R) Core (TM) personal computer with 8 GB RAM under MATLAB 2016a. Moreover, the performance of the hybrid prediction models developed was compared with the persistence reference model. The performance of the persistence reference model in the daily photovoltaic power prediction was found as 0.1589 for the coefficient of determination (R 2 ), 0.081 for the mean absolute error (MAE) and 15.702% for the mean absolute percent error (MAPE). In the next subsections, the smallest error results for each input tuple are highlighted in boldface in each table. Daily Photovoltaic Power Prediction Using Grey Wolf Optimization Algorithm-Based Multilayer Perceptron (GWO-MLP) The daily photovoltaic power prediction results for the grey wolf optimization algorithm-based multilayer perceptron, which used the sigmoid activation function, are presented in Table 1. In case of examining the error values in this table, R 2 of 0.9791, MAE of 0.017 and MAPE of 2.598% were found for air temperature, relative humidity, total horizontal solar radiation and diffuse horizontal solar radiation inputs. Among 3-tupled meteorological inputs, the best prediction performance was obtained as 0.9841 for R 2 , 0.016 for MAE and 2.632% for MAPE using air temperature, total horizontal solar radiation and diffuse horizontal solar radiation inputs. Among 2-tupled meteorological inputs, the best prediction performance was obtained as 0.9633 for R 2 , 0.022 for MAE and 3.076% for MAPE using total horizontal solar radiation and diffuse horizontal solar radiation inputs. As a result, among the error results occurred using the sigmoid activation function, the best prediction performance was achieved by the GWO-MLP method, which used air temperature, relative humidity, total horizontal solar radiation and diffuse horizontal solar radiation inputs. The predicted photovoltaic power values that belong to this hybrid method are illustrated in Figure 3. 
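The three accuracy measures and the improvement ratio of Equation (5) are straightforward to reproduce; a minimal sketch, with y_true and y_pred as the measured and predicted daily power vectors, is given below. The R2 expression is the conventional coefficient of determination, assumed to be the one used in the paper.

import numpy as np

def scores(y_true, y_pred):
    # Coefficient of determination, mean absolute error, mean absolute percentage error.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(y_true - y_pred))
    mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    return r2, mae, mape

def improvement(err_hybrid, err_persistence):
    # Eq. (5): relative improvement of a hybrid model over the persistence reference.
    return 100.0 * (err_persistence - err_hybrid) / err_persistence

# Check against the reported MAPE values: persistence 15.702%, GWO-MLP (sigmoid) 2.598%.
# improvement(2.598, 15.702) ≈ 83.45%, which matches the figure quoted in the text.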
On the other hand, the worst prediction performance is caused by the GWO-MLP method, which used relative humidity and diffuse horizontal solar radiation inputs, with R 2 of 0.4046, MAE of 0.091 and MAPE of 14.936%. The daily photovoltaic power prediction results for the grey wolf optimization algorithm-based multilayer perceptron, which used the hyperbolic tangent activation function, are listed in Table 2. In case of investigating the error values in this table, R 2 of 0.4423, MAE of 0.066 and MAPE of 11.614% were found for air temperature, relative humidity, total horizontal solar radiation and diffuse horizontal solar radiation inputs. Among 3-tupled meteorological inputs, the most accurate prediction performance was obtained as 0.9003 for R 2 , 0.032 for MAE and 5.208% for MAPE using air temperature, relative humidity and total horizontal solar radiation inputs. Among 2-tupled meteorological inputs, the most accurate prediction performance was obtained as 0.9508 for R 2 , 0.027 for MAE and 4.248% for MAPE using air temperature and total horizontal solar radiation inputs. In consequence, among the error results occurred using the hyperbolic tangent activation function, the most accurate prediction performance was accomplished by the GWO-MLP method, which used air temperature and total horizontal solar radiation inputs. The predicted photovoltaic power values of this hybrid method are depicted in Figure 4. However, the most erroneous prediction performance was produced by the GWO-MLP method, which used relative humidity and diffuse horizontal solar radiation inputs, with R 2 of 0.0714, MAE of 0.151 and MAPE of 22.291%. In case of evaluating the prediction results in general, the GWO-MLP method, which used the sigmoid activation function, shows better prediction performance than the one using the hyperbolic tangent activation function. Furthermore, it respectively improves the R 2 , MAE and MAPE in the ratios of 80.62%, 79.01% and 83.45% in comparison to the persistence reference model. Finally, the lowest MAPE values achieved by the GWO-MLP method are visualized in Figure 5. Daily Photovoltaic Power Prediction Using Ant Lion Optimization Algorithm-Based Multilayer Perceptron (ALO-MLP) The daily photovoltaic power prediction results for the ant lion optimization algorithm-based multilayer perceptron, which used the sigmoid activation function, are presented in Table 3. In case of examining the error values in this table, R 2 of 0.7101, MAE of 0.068 and MAPE of 12.106% were found for air temperature, relative humidity, total horizontal solar radiation and diffuse horizontal solar radiation inputs. Among 3-tupled meteorological inputs, the best prediction performance was obtained as 0.9334 for R 2 , 0.029 for MAE and 4.702% for MAPE using relative humidity, total horizontal solar radiation and diffuse horizontal solar radiation inputs. Among 2-tupled meteorological inputs, the best prediction performance was obtained as 0.9600 for R 2 , 0.037 for MAE and 5.959% for MAPE using total horizontal solar radiation and diffuse horizontal solar radiation inputs. As a result, among the error results occurred using the sigmoid activation function, the best prediction performance was achieved by the ALO-MLP method, which used relative humidity, total horizontal solar radiation and diffuse horizontal solar radiation inputs. The predicted photovoltaic power values of this hybrid method are illustrated in Figure 6. 
On the other hand, the worst prediction performance was caused by the ALO-MLP method, which used relative humidity and diffuse horizontal solar radiation inputs, with R 2 of 0.2274, MAE of 0.117 and MAPE of 17.785%. The daily photovoltaic power prediction results for the ant lion optimization algorithm-based multilayer perceptron, which used the hyperbolic tangent activation function, are listed in Table 4. In case of investigating the error values in this table, R 2 of 0.3179, MAE of 0.129 and MAPE of 18.232% were found for air temperature, relative humidity, total horizontal solar radiation and diffuse horizontal solar radiation inputs. Among 3-tupled meteorological inputs, the most accurate prediction performance was obtained as 0.8113 for R 2 , 0.048 for MAE and 6.738% for MAPE using air temperature, total horizontal solar radiation and diffuse horizontal solar radiation inputs. Among 2-tupled meteorological inputs, the most accurate prediction performance was obtained as 0.8139 for R 2 , 0.071 for MAE and 10.430% for MAPE using relative humidity and total horizontal solar radiation inputs. In consequence, among the error results occurred using the hyperbolic tangent activation function, the most accurate prediction performance was accomplished by the ALO-MLP method, which used air temperature, total horizontal solar radiation and diffuse horizontal solar radiation inputs. The predicted photovoltaic power values of this hybrid method are depicted in Figure 7. However, the most erroneous prediction performance was produced by the ALO-MLP method, which used relative humidity and diffuse horizontal solar radiation inputs, with R 2 of 0.0117, MAE of 0.137 and MAPE of 21.296%. In case of evaluating the prediction results in general, the ALO-MLP method, which used the sigmoid activation function, showed better prediction performance than the one, which used the hyperbolic tangent activation function. Besides, it respectively improved the R 2 , MAE and MAPE in the ratios of 79.48%, 64.19% and 70.05% in comparison to the persistence reference model. Finally, the lowest MAPE values achieved by the ALO-MLP method are visualized in Figure 8. Daily Photovoltaic Power Prediction Using Whale Optimization Algorithm-Based Multilayer Perceptron (WOA-MLP) The daily photovoltaic power prediction results for the whale optimization algorithm-based multilayer perceptron, which used the sigmoid activation function, are presented in Table 5. In case of examining the error values in this table, R 2 of 0.8362, MAE of 0.050 and MAPE of 7.316% were found for air temperature, relative humidity, total horizontal solar radiation and diffuse horizontal solar radiation inputs. Among 3-tupled meteorological inputs, the best prediction performance was obtained as 0.8985 for R 2 , 0.040 for MAE and 6.187% for MAPE using air temperature, relative humidity and total horizontal solar radiation inputs. Among 2-tupled meteorological inputs, the best prediction performance was obtained as 0.8959 for R 2 , 0.034 for MAE and 5.514% for MAPE using total horizontal solar radiation and diffuse horizontal solar radiation inputs. As a result, among the error results occurred when using the sigmoid activation function, the best prediction performance was achieved by the WOA-MLP method, which used total horizontal solar radiation and diffuse horizontal solar radiation inputs. The predicted photovoltaic power values of this hybrid method are illustrated in Figure 9. 
On the other hand, the worst prediction performance was caused by the WOA-MLP method, which used relative humidity and diffuse horizontal solar radiation inputs, with R 2 of 0.0085, MAE of 0.156 and MAPE of 24.905%. The daily photovoltaic power prediction results for the whale optimization algorithm-based multilayer perceptron, which used the sinus activation function, are listed in Table 6. In case of investigating the error values in this table, R 2 of 0.4817, MAE of 0.086 and MAPE of 13.394% were found for air temperature, relative humidity, total horizontal solar radiation and diffuse horizontal solar radiation inputs. Among 3-tupled meteorological inputs, the most accurate prediction performance was obtained as 0.8242 for R 2 , 0.045 for MAE and 7.028% for MAPE using air temperature, total horizontal solar radiation and diffuse horizontal solar radiation inputs. Among 2-tupled meteorological inputs, the most accurate prediction performance was obtained as 0.7009 for R 2 , 0.075 for MAE and 11.892% for MAPE using total horizontal solar radiation and diffuse horizontal solar radiation inputs. In consequence, among the error results occurred when using the sinus activation function, the most accurate prediction performance was accomplished by the WOA-MLP method, which used air temperature, total horizontal solar radiation and diffuse horizontal solar radiation inputs. The predicted photovoltaic power values of this hybrid method are depicted in Figure 10. However, the most erroneous prediction performance was produced by the WOA-MLP method, which used air temperature and total horizontal solar radiation inputs, with R 2 of 0.2667, MAE of 0.186 and MAPE of 25.967%. In case of evaluating the prediction results in general, the WOA-MLP method, which used the sigmoid activation function, showed better prediction performance than the one which used the sinus activation function. Besides, it improved the R 2 , MAE and MAPE in the ratios of 78.43%, 58.02% and 64.88%, respectively in comparison to the persistence reference model. Finally, the lowest MAPE values achieved by the WOA-MLP method are visualized in Figure 11. Conclusions In this study, grey wolf, ant lion and whale optimization algorithms-based multilayer perceptron models were developed for the daily photovoltaic power prediction. Through the efficient prediction models developed, the effects of multi-tupled meteorological inputs and activation functions on the prediction performance were analyzed in detail, the prediction accuracy was highly improved according to the persistence reference model, and the uncertainty in the daily photovoltaic power prediction was reduced. In addition to these, the useful findings achieved are summarized one by one below:  The grey wolf optimization algorithm-based multilayer perceptron model provides more successful prediction results than ant lion and whale optimization algorithms-based multilayer perceptron models. On the other hand, it is observed that the ant lion optimization algorithm-based multilayer perceptron model shows better prediction results than whale optimization algorithm-based multilayer perceptron model.  In all of the multilayer perceptron models based on grey wolf, ant lion and whale optimization algorithms, the sigmoid activation function accomplishes lower prediction errors compared to hyperbolic tangent and sinus activation functions. 
 The best daily photovoltaic power prediction is achieved by the grey wolf optimization algorithm-based multilayer perceptron model, which uses air temperature, relative humidity, total horizontal solar irradiation and diffuse horizontal solar irradiation inputs along with the sigmoid activation function, with the MAPE of 2.598%. In addition, as a result of this error value, the persistence reference model is outperformed with the ratio of 83.45%.  In all of the multilayer perceptron models based on grey wolf, ant lion and whale optimization algorithms, which provide the most accurate prediction results, o Total horizontal solar radiation and diffuse horizontal solar radiation parameters are observed as the most suitable combination in 2-tupled meteorological inputs. o The air temperature parameter to be integrated with total horizontal solar radiation and diffuse horizontal solar radiation parameters comes into prominence in 3-tupled meteorological inputs.  The worst daily photovoltaic power prediction is caused by the whale optimization algorithm-based multilayer perceptron model, which uses air temperature and diffuse horizontal solar irradiation inputs along with the sinus activation function, with the MAPE of 25.967%.  In general of grey wolf, ant lion and whale optimization algorithms-based multilayer perceptron models, the usage of the relative humidity parameter as a meteorological input commonly produces the worst prediction results.  The grey wolf, ant lion and whale optimization algorithms-based multilayer perceptron models developed lead to lower error results than the commonly-used models based on artificial neural networks and support vector machines in the literature. o In future studies, the performance of the prediction models developed should also be tested based on the photovoltaic power prediction per minute, per hour and per week. In addition, the usage of other meteorological factors (affecting the photovoltaic power prediction) in the multi-tupled input structure should be analyzed in detail. Author Contributions: All authors contributed equally to the research activities and for its final presentation as a full manuscript. Funding: No source of funding was attained for this research activity. Conflicts of Interest: The authors declare no conflict of interest.
2020-02-27T09:35:12.588Z
2020-02-18T00:00:00.000
{ "year": 2020, "sha1": "58994d293d9f76cdd4e6d994369c618acec2ff32", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/13/4/901/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b6272b33f5b11ed733bd5338f098d71683f32373", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
235813297
pes2o/s2orc
v3-fos-license
Zeolite/Cellulose Acetate (ZCA) in Blend Fiber for Adsorption of Erythromycin Residue From Pharmaceutical Wastewater: Experimental and Theoretical Study The expanding amount of remaining drug substances in wastewater adversely affects both the climate and human well-being. In the current investigation, we developed new cellulose acetic acid derivation/zeolite fiber as an effective technique to eliminate erythromycin (ERY) from wastewater. The number of interchangeable sites in the adsorbent structures and the ratio of ERY to the three adsorbents were identified as the main reasons for the reduction in adsorption as the initial ERY concentrations increased. Additionally, for all adsorbents, the pseudo–second-order modeling showed better fitting for the adsorption than the pseudo–first-order modeling. However, the findings obtained in the pseudo–first-order model were still enough for explaining the sorption kinetics of ERY, showing that the surface displayed all chemisorption and physi-sorption adsorption processes by both adsorbents. The R 2 for the second order was very close to 1 for the three adsorbents in the case of pseudo–second-order. The adsorption capacity reached 17.76 mg/g. The three adsorbents showed negative values of ΔH, and these values were −6,200, −8,500, and −9600 kJ/mol for zeolite, CA, and ZCA, respectively, and this shows that the adsorption is exothermic. The desorption analysis shows no substantial loss of adsorption site after three trials, indicating higher stability and resilience of the three adsorbents, indicating a strong repeatability of their possible use in adsorption without contaminating the environment. In addition, the chemical attitude and possible donor–acceptor interactions of ERY were assessed by the quantum chemical parameters (QCPs) and NBO analysis performed, at the HF/6-311G** calculations. INTRODUCTION One of the most important reasons for the economic growth of developing countries and the expansion of urban areas is a society's ability to provide fresh water for sanitation and consumption to its population. However, as the population and urbanization increase, so does the release of radioactive materials into the atmosphere and surface water. There are many sources of surface and groundwater contamination, including agricultural, industrial, oil pollution, sewage, and wastewater (Al-Shaalan et al., 2019;El-Zawily et al., 2019;Khan et al., 2019;Chon et al., 2020). Several water pollution scenarios including the chiral pollution are a serious issue for our health and environment due to the enantioselective biodegradation of the chiral pollutants. It has adverse impact on our society and science. There is a big loss of our economy due to the use of racemic agrochemicals. The most notorious chiral pollutants are pesticides, polychloro biphenyls, polyaromatic hydrocarbons, brominated flame retardants, drugs, and pharmaceuticals (Basheer, 2018a;Basheer and Ali, 2018). Nowadays, water contamination due to the drugs and pharmaceutical residues is increasing and alarming. These contaminants are called as new emerging pollutants. The contamination due to the new emerging contaminants is of great concern due to their endocrine, hormonal, and genetic disturbance nature (Basheer, 2018b). In environmental samples such as surface water, groundwater, seawater, soil, and drinking water, pharmaceuticals were found (Arshad et al., 2020;Kiszkiel-Taudul, 2021), so they are referred to as emerging pollutants. 
The estimated global consumption of pharmaceuticals such as antibiotics is 100,000 to 200,000 tons per annum (Bungau et al., 2020). Based on the chemical properties of the drug, about 5-90% of the absorbed antibiotic doses are excreted by urine or stool as a metabolite or parent compound (Bhowmick et al., 2020). These drugs end up in drainage systems and eventually reach the ecosystem by sewage leakage, discharge of wastewater treatment plant (WWTP) effluents into marine systems, or disposal of unwanted or unfinished medications (Barchiesi et al., 2020). The use of sludge and animal waste as fertilizer in agriculture can also contribute to the degradation of agricultural soils, which can lead to the incorporation of antibiotics into marine environments by leaching into groundwater (Stevens and Jones, 2003). In recent years, the Environmental Protection Agency (EPA) has been more involved in informing the public about new pollutants of concern (CECs). CECs are a form of pollutant that is commonly found at trace levels in surface and groundwater (i.e., ppb and ppt). Examples of CECs are pesticides, chemicals, anti-infection agents, over-the-counter meds, mechanical synthetics, oil-based synthetic compounds, and others (Farré, 2020). Some of these processes, in particular, lack actual removal procedures, and the byproducts generated, such as organochlorine species, may be more toxic than the original compounds (Mery-Araya et al., 2019). To deal with this wastewater problem, lots of conventional and advanced technologies have been developed (Ali et al., 2018a;Mery-Araya et al., 2019). The conventional water treatments such as oxidation (Ma et al., 2020), electro precipitation, membrane separation, coagulation-flocculation, evaporation, floatation, and ion exchange (Yu et al., 2021) have been largely used, but these are inadequate techniques for water treatments (Tabassum, 2019). Many approaches have been used and reported for the removal of a variety of pesticides and drugs. Among the different methods, adsorption is the best approach because of several advantages associated with adsorption including time and cost (Ali et al., 2018a;Ali et al., 2018b;Ali et al., 2019). Erythromycin (ERY) is a natural antibiotic used to treat a variety of bacterial infections. Antibiotics pass into the human body after consistent treatment and ultimately enter inland areas and effluents; there is even a path of environmental degradation in the poultry and livestock breeding industries. Because of the structure of their aromatic ring, ERY molecules are resistant to the environment and difficult to degrade. Several reports (Ma et al., 2020;Yu et al., 2021) have reported the presence of ERY in water and wastewater to be above the average range. As a result, removing ERY residues from wastewater is important. Zeolite is a crystalline aluminosilicate with well-defined micropore dimensions and a strong crystal lattice form that is environmentally friendly. Zeolite structures are made up of tetrahedral SiO4 and AlO4 groups, and their alumina silica ratio (SAR) determines zeolite polarity (Martucci et al., 2012). Because of their three-dimensional framework, which creates nanometer-sized channels and cages, these materials have a high porosity and a large surface area. The shape of their internal pore structure can have a direct impact on their adsorption selectivity against host molecules, which is one of their distinguishing features (Zide et al., 2018). 
Cellulose acetate is an excellent candidate for use as a polymer matrix because it can be easily molded into a variety of shapes and because its hydrophilic surfaces can improve the mobility of aqueous solutions to the surface of hybrid materials (Das et al., 2020). The aim of this research was to use zeolite/cellulose acetate blended fiber as a reusable, simple-to-prepare adsorbent for erythromycin adsorption. The effects of several parameters, including contact time, concentration effect, temperature effect, and equilibrium and kinetics, on erythromycin adsorption by the composite fiber were studied. SEM, FT-IR spectroscopy, thermogravimetric analysis, and dynamic scanning calorimetry were used to characterize the zeolite/cellulose acetate fiber. The novelty of this work is shown by using three different adsorbents which showed very high percentage of removals. Also, theoretical studies were very supportive of the experimental findings. cation and "y" for the number of water molecules in the structure of zeolite, according to the Research Foundation at State University of New York (SUNY). Cellulose acetate (C 10 H 16 O 8 ) has been purchased from Al Quds Chemicals in Jerusalem. The zeolite chemical composition was included in the MSDS that has been supplied from the manufacturer. Acetone was bought from Guangzhou Chemi. Erythromycin with technical grade of 99% was purchased from Fluka (Fluka Chemie AG, Switzerland). Acetonitrile was purchased from Sigma-Aldrich, United States with analytical grade of more than 99%. The water was of the Milli-Q standard (Millipore, MA, United States). Preparation of ZCA Fiber Wet spinning was used to produce the zeolite/cellulose acetate blend fiber (ZCA); cellulose acetate (6 g) was dissolved in 50 ml of acetone/water solution (6:1, w/w). The zeolite rocks were ground and sieved to achieve an average dimension of approximately 800 mesh. 1.5 g of zeolite is added to the solution and scattered by mechanical stirring. To make a solid filament, the blended solution was spun in a stainless-steel spinner and then protruded into a water coagulation tank. The fiber was taken out of the bath and washed twice with filtered water. Finally, the fiber was dried at 30°C before being cut into very small fragments (Rodchanasuripron et al., 2020). Characterization of ZCA Fiber The scanning electron microscopy (SEM) manufactured by the Hitachi model (S-4700) in Japan was used to study the morphology of ZCA fiber. ZCA fiber was immersed in a liquid nitrogen atmosphere to create a very clean cross section for scanning. The Hitachi S-4700 FE-SEM is a cold field emission high-resolution scanning electron microscope. This SEM permits ultrahigh resolution imaging of thin films and semiconductor materials on exceptionally clean specimens. It is also suitable for polymeric materials. S-4700 is configured to detect secondary and backscattered electrons as well as characteristic X-rays. The X-Ray diffraction analysis was done using XRD-Shimadzu XD-1 with monochromatized graphite Cu-K alpha (15,418) and a scanning speed of 20°/min. The Bruker Alpha-P spectrophotometer was used to collect the Fourier transform infrared (FTIR) fiber spectrum. FT-IR spectra were reported from 400 to 4,000 cm −1 with 32 scans on Nicolet NEXUS-470 FT-IR (America) apparatus and a resolution of 4 cm −1 . The Shimadzu UV absorption spectrum of the sample was tested using an 1800 UV-Vis spectrophotometer with UV probe software. 
The ERY concentration was measured quantitatively using a UV-Vis spectrophotometer (SHIMADZU, UV-1201). The absorbance of the ERY solution was estimated at 481.5 nm, the wavelength at which ERY has the greatest absorbance. CuK Al radiation was used for X-ray diffraction on the Panalytical X'Pert Pro diffractometer (1.5418 Å) from 2°to 70°( 2θ), with a scanning rate of 1°per minute. The water intrusion process was also used to determine membrane porosity (Wang et al., 2017;Bagaev et al., 2021). Thermogravimetric analysis was carried out on DTG 60H equipment (Shimadzu Co., Japan). Around 3.0 mg of adsorbents were heated from 25 to 700°C in the nitrogen atmosphere (50 ml/ min) at a temperature of 10 0°C/min. The compounds' decomposition temperatures were calculated using the first mass loss (percentage) vs. temperature derivative (DTGA) (Güler et al., 2013;Zhang et al., 2014). Adsorption Procedure (Import) Erythromycin [C37H67NO13] with molecular mass of 733.937 g mol −1 is an antibiotic used for the treatment of a variety of Fluka. The chemical structure is presented in Figure 1. To study the adsorption equilibrium experiments, a sample of 10.0 mg of ZCA fiber was used in most of the analysis. Following that, 100 ml of aqueous solutions with varying initial ERY concentrations (10-50 mg/L) were applied and shaken at 200 rpm in an orbital incubator (Gallenkamp,. To achieve adsorption equilibrium, the contact time was varied between 5 and 90 min. The other study was performed to see the effect of temperature on the adsorbent activity and efficiency at different temperatures and constant contact time of 30 min, and the temperatures were 25, 35, 45, and 55°C. In each study, a UV-Vis (Varian, model Cary 1E) spectrophotometer (λmax: 482 nm) was used to measure ERY equilibrium concentrations using a calibration curve of different concentrations (Jamshaid et al., 2020). The effect of pH was studied from 2 to 12, and both 0.1 M NaOH and 0.1 M HCl solutions were used to change the pH as required. At 293 K, 100 ml of ERY solution containing 20 mg/L was shook with 10.0 mg of ZCA fiber. The pH study was carried using a micro pH 2002 Crison pH meter. All equilibrium concentrations of the adsorbed ERY by ZCA were presented using different adsorption parameters; q e (e.g., in mg/g) was calculated using the following equations (Eqs 1, 2) (Abujaber et al., 2018): Frontiers in Chemistry | www.frontiersin.org July 2021 | Volume 9 | Article 709600 where q e is the amount (mg g −1 ) adsorbed, C o and C e are the ERY initial and equilibrium concentrations in solution (mg/L), respectively, W is the adsorbent dosage (g/L), and R percent is the adsorption efficiency coefficient. The kinetic study was done by taking several dosages of ZCA (50, 100, and 150 mg/L). This study's tested temperatures were 293, 303, and 313 K, with a maximum contact time of 60 min. Adsorption Kinetics Pseudo-first-order and second-order models have been used to model the kinetic effects of ERY adsorption on the surface of ZCA fibers to achieve the control rate structure of adsorption including chemical reactions and mass transfer. As seen in Eq. 
3, pseudo-first-order modeling is based on the premise that physical adsorption that occurs during the removal process is the rate-determining step (Azzaoui et al., 2017): where q e (mg/g) represents the equilibrium adsorbed ERY quantity, q t (mg/g) represents the equilibrium adsorbed ERY quantity at time t, and K1 (min −1 ) represents the pseudo-first-order modeling adsorption rate constant (Azzaoui et al., 2017). The modeling of the pseudo-secondorder, on the other hand, was based on the assumption that the ratedetermining process, as shown by Eq. 4, is chemi-sorption: where K 2 (g/mg/min) is used as the pseudo-second-order rate constant. The slope and intercept of a plot of t/q t vs. t are used to calculate the values of q e and K 2 , respectively. ERY was controlled for attachment to the ZCA surface through chemical bond forming in the chemical adsorption process. Adsorption Isotherm Adsorption isotherms usually have data on the distribution of adsorbed molecules in equilibrium between solid and liquid phases. Most experiments used the regression coefficient (R 2 ) to assess the best-fitting isotherms. Adsorption equilibrium results were discovered to be more appropriate for two types of Freundlich and Langmuir isothermal models. The most fundamental model is Langmuir, which assumes that all adsorption sites are equal and autonomous. The tendency of molecules to bind is separate from the neighboring populated sites (Radi et al., 2015). The isotherm of Langmuir can be given by the following equation: where Ce is the ERY equilibrium concentration (mg/L), qe is the sum of ERY adsorbed per gram of the three equilibrium adsorbents (mg/g), and qm is the full potential of monolayer coverage (mg/g) (Radi et al., 2015). The Langmuir isotherm (L/ mg) constant is K L . The Freundlich isotherm, on the other hand, demonstrates un-ideal and reversible adsorption. The best representation of heterogeneous structures is preferred. It is possible to approximate Freundlich isotherm by the following equation: where qe is the capacity of adsorption, Ce is the ERY concentration at equilibrium, and KF and n are constants. K F reflects the capacity of adsorption, whereas n reflects the deviation from linearity of adsorption. If n 1, the process of adsorption is linear; if n < 1 the process of chemical adsorption; and if n > 1, the process of adsorption is favorable. The Langmuir model is limited to monolayer adsorption. The Langmuir model is limited to monolayer adsorption systems, whereas in multilayer systems, the Freundlich model can be used. Adsorption Thermodynamics From the obtained kinetic data, the reaction rate and other thermodynamics parameters can be identified. Nonetheless, the response changes that will happen during the process of adsorption require the determination of the thermodynamic parameters, including entropy [(ΔS, kJ/mol), enthalpy, freeenergy Gibbs (ΔG, kJ/mol), and adsorption changes (ΔH, kJ/ mol). You can calculate the thermodynamic parameters from the van't Hoff Eq. 7]: where the gas constant is R (8.314 J/mol/T) and the temperature is T (K). Eq. 8 can be used to calculate the distribution coefficient (Kd) on the adsorbent surface. Gibbs free energy can be calculated by the following equation: Both ΔH and ΔS can be calculated using both slope and intercept from the van't Hoff plot of lnK vs. 1/T (Hanbali et al., 2020). 
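To make the bookkeeping behind Eqs. 1-4 and 7-9 concrete, the short Python sketch below implements the uptake/removal calculation, the linearised pseudo-first-order and pseudo-second-order fits, and the van't Hoff regression. The linear forms assumed here are ln(qe - qt) = ln(qe) - K1*t, t/qt = 1/(K2*qe^2) + t/qe, ln(Kd) = dS/R - dH/(R*T) and dG = -R*T*ln(Kd), and every numerical value in the example is an illustrative placeholder rather than data from this study.

```python
import numpy as np

R_GAS = 8.314  # J/(mol*K)

def uptake(c0, ce, dose_g_per_l):
    """Eqs. 1-2: equilibrium uptake qe (mg/g) and removal efficiency R (%)."""
    return (c0 - ce) / dose_g_per_l, 100.0 * (c0 - ce) / c0

# --- kinetics (Eqs. 3-4), hypothetical contact-time data ---
t  = np.array([5.0, 10.0, 20.0, 30.0, 45.0, 60.0, 90.0])   # min
qt = np.array([4.1, 6.0, 7.8, 8.6, 9.1, 9.3, 9.4])          # mg/g
qe_exp = 9.5                                                 # assumed plateau uptake, mg/g

# pseudo-first-order: ln(qe - qt) = ln(qe) - K1*t  (slope -K1, intercept ln qe)
mask = qt < qe_exp
s1, i1 = np.polyfit(t[mask], np.log(qe_exp - qt[mask]), 1)
k1, qe_pfo = -s1, float(np.exp(i1))

# pseudo-second-order: t/qt = 1/(K2*qe^2) + t/qe  (slope 1/qe, intercept 1/(K2*qe^2))
s2, i2 = np.polyfit(t, t / qt, 1)
qe_pso, k2 = 1.0 / s2, s2**2 / i2

# --- van't Hoff (Eqs. 7-9), hypothetical distribution coefficients Kd = qe/Ce ---
T  = np.array([293.0, 303.0, 313.0])   # K
kd = np.array([1.72, 1.48, 1.30])
sv, iv = np.polyfit(1.0 / T, np.log(kd), 1)
dH, dS = -sv * R_GAS, iv * R_GAS        # J/mol and J/(mol*K)
dG = -R_GAS * T * np.log(kd)            # J/mol at each temperature

print("qe (mg/g), R (%) for C0=20, Ce=0.4 mg/L, 0.1 g/L dose:", uptake(20.0, 0.4, 0.1))
print(f"PFO: K1={k1:.3f} 1/min, qe={qe_pfo:.2f} mg/g | PSO: K2={k2:.3f} g/(mg*min), qe={qe_pso:.2f} mg/g")
print(f"van't Hoff: dH={dH/1000:.2f} kJ/mol, dS={dS:.1f} J/(mol*K), dG(kJ/mol)={np.round(dG/1000, 2)}")
```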
Computational and Theoretical Study The geometry optimization of ERY was performed by G09W (frisch et al., 2009) with the Hartree-Fock (Roothaan, 1951;Pople and Nesbet, 1954) method and 6-311G** (Krishnan et al., 1980;McLean and Chandler, 1980) basis set in the gas phase. In theoretical predictions of the chemical reactivity, the Koopmans' theorem (Koopmans, 1934) is the first essential step to calculate the ionization energy (I) and electron affinity (A) values via the FMO energies. July 2021 | Volume 9 | Article 709600 A −E LUMO Moreover, the quantum chemical parameters (QCP) (Parr and Pearson, 1983;Pearson, 1986;Pearson, 1988;Parr et al., 1999), which are defined as χ "electronic chemical potential," η "global hardness," ω "electrophilicity index," and ΔN "fractional number of the electrons transferred" in case of B and C systems have contacted each other, and ΔN max "maximum charge transfer capability," have been also obtained from the I and A values using the following formula: . In addition, Gazquez and coworkers introduced two useful parameters to calculate the ω − "the electron-donating power" and ω + "the electron-accepting power" parameters (Gázquez et al., 2007) Also, the ΔE back-donation "back-donation energy" (Gómez et al., 2006) is a powerful value and defined as the following equation: In addition, the stabilization energy lowering obtained from the second-order perturbative energy analyses depending on the NBOs "Natural Bon Orbitals" (Foster and Weinhold, 1980;Reed and Weinhold, 1985;Reed et al., 1988) is defined as follows: For the molecular system, qi states the donor orbital occupancy, εi and εj are diagonal elements, and Fij is the offdiagonal NBO Fock matrix element where "i" and "j" are the filled and unfilled molecular orbitals. Regeneration of Adsorbent In the field of adsorption process applications, adsorbent regeneration is important. ZCA samples were pre-adsorbed for 12 h at 25°C with 10 ml of 50 mg/L ERY solution, then washed with methanol/acetic acid (v/v, 9:1) until no ERY was present in the eluent, and dried overnight at 50°C. Following that, regenerated materials were redistributed in 10 ml solutions containing an initial concentration of 50 mg/L. The effectiveness of ERY adsorption by regenerated materials has been studied after several adsorption-desorption processes. Adsorbent Characterization Results (BET) Nitrogen adsorption-desorption isotherm measurements were carried at 77 K using a Quantachrome Autosorb AS-1 instrument (United States). The BET specific surface area of ZCA was measured using the data of nitrogen adsorption isotherm at low temperature (Brunauer et al., 1938) and involving the adsorption data at P/P 0 of 0.05-0.2 and with 2.47 m 2 /g. The BJH model was used to measure the pore volume and the average pore size as other previous study (Barrett et al., 1951). The pore volume of ZCA sample was determined as 2.45 × 10 −2 cm 3 /g, and pore diameter was 3.5 nm. The ZCA pore diameter was considered as a mesoporous material as the classification by the Pure and Applied Chemistry International Union (IUPAC) (Foster and Weinhold, 1980). Frontiers in Chemistry | www.frontiersin.org July 2021 | Volume 9 | Article 709600 Characterization of ZCA Fiber Using SEM SEM was used to examine the morphology of ZCA fiber. The surface morphologies and cross-sectional configurations of the ZCA filament are shown in Figure 2. 
The surface of the ZCA fiber is relatively smooth, as seen in Figure 1, and the diameter of the as-prepared fiber is approximately 250 nm. As seen in the crosssection, the ZCA fiber has a sponge-like appearance. The ZCA fiber is composed of a homogeneous, highly porous material. The ZCA fiber network is embedded with zeolite crystals about 100°m in height. As seen in Figure 2, cellulose acetate serves as a matrix support, and the pore size of the fiber ranges between 5 and 10 m. ERY could rapidly disperse into the pores for contact with the adsorptive sites of the ZCA particles. The dispersion of zeolite attributed by the silica and aluminum shown in Figures 1B,C indicates that the zeolites were embedded in the cellulose acetate matrix. This is attributed to the interfacial interaction between zeolite and cellulose acetate. X-Ray Diffraction Analysis The diffractogram of the synthesized zeolite is identical to JCPDS No. PDF 0038-0241 for LTA form zeolite-A [Na96(AlO2) 96(SiO2)96.216H2O] as seen in Figure 3. Furthermore, diffractogram of CA, as shown in the figure, appropriates with a diffractogram reported by Fan et al. (2013), who stated that CA has distinctive angles at 2θ of 10°and 13.2°. These two typical angles were also recognized as the crystalline peaks of modified CTA II (Deus et al., 1991). Furthermore, Jayalakshmi et al. (2014) announced that the CA membrane diffractogram had a normal semicrystalline angle at 2 of 9.6°and two crystalline angles at diffraction angles of 20.1°and 26.8°. The diffractogram of CA membrane in this study was identified as a crystalline peak at 26.8°. Composite membrane also has a crystalline peak at 26.8°. Moreover, the composite membrane has also a weak peak at 10°and 13.2°, indicating the typical peak of CA in different intensities. It was caused by a decreasing crystallinity form in the membrane compared to CA solids. It was reviewed that the CA/ZA membrane has a peak at an angle of 10.3, 12.6, and 16.2, indicating the presence of zeolite-A. Based on the results of the composite membrane diffractograms, it was known that zeolite-A has better dispersity in the CA porous membrane as a filler. Figure 4 demonstrates the ZCA fiber FTIR spectrum before and after ERY adsorption. As can be seen, a peak of 600-800 cm −1 was observed, which is associated with T-O-T stretching and T-O zeolite bending (Armaroli et al., 2006). FT-IR Analysis A sharp peak in such regions indicates the presence of zeolite inside the membrane. In addition, the membrane showed a peak in the region of 1,000-1,200 cm −1 , indicating the interaction between Si-O-Si of zeolite and CA. Some peaks were also detected at 1,735-1,738 cm −1 assigned to carbonyl C O stretching of CA and broad peak at about 3,400 cm -1 assigned to O-H stretching. Furthermore, the absence of new peaks was observed on the membrane after the adsorption process. However, the peak was slightly shifted and the peak intensity decreased. This might be due to the presence of van der Waals force, indicating the physical adsorption between the metal ions and membrane. Frontiers in Chemistry | www.frontiersin.org July 2021 | Volume 9 | Article 709600 Thermogravimetric Analysis Thermogravimetric analysis for the three samples, namely, ZCA, cellulose acetate, and zeolite, is presented in Figure 5. 
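As a small numerical companion to the diffractograms discussed above, the sketch below converts the quoted 2-theta peak positions into d-spacings with Bragg's law (n*lambda = 2*d*sin(theta)), using the Cu-K alpha wavelength of 1.5418 Angstrom given in the methods; the peak list is simply the CA/composite angles mentioned in the text and the calculation is illustrative only.

```python
import math

CU_K_ALPHA_ANGSTROM = 1.5418  # wavelength used for the XRD scans (methods section)

def d_spacing(two_theta_deg, wavelength=CU_K_ALPHA_ANGSTROM, order=1):
    """Bragg's law n*lambda = 2*d*sin(theta); returns d in Angstrom."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength / (2.0 * math.sin(theta))

for two_theta in (10.0, 13.2, 26.8):   # CA / composite membrane peaks quoted above
    print(f"2-theta = {two_theta:5.1f} deg -> d = {d_spacing(two_theta):.2f} A")
```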
From the TGA thermogram obtained for cellulose acetate (CA), there is initially a minor weight loss of about 3% up to 200°C, which is caused from the loss of volatile compounds and the moisture of H 2 O that is bound to the hydrophilic (OH) groups that is bonded in the chain of cellulose acetate chains (Hong et al., 2020). There are two steps of thermal decomposition: the first phase (300-400°C) which refers to the major loss with a proximate weight loss of 75%, while the second one (400 and 600°C) having a weight loss of 15% is referred to the complete degradation and composition (Hong et al., 2020). Two levels of mass reduction have been found for zeolite. The first stage was between 30 and 230°C, with a weight loss of 40% which can be due to the loss of H 2 O adsorbed to the material and to the deterioration of certain aluminum and silicate fractions which did not decompose at 400°C during the pyrolysis process. The second stage of zeolite thermal decomposition, starting at 380°C, was caused from the removal of minerals and salts from the material, which has 35% of its initial mass which is considered as its high mineral residue content. The ZCA fiber thermogram showed three levels of thermal decomposition between 30 and 200°C, 215 and 380°C, and above 380°C. This thermogram showed an intermediate profile in comparison to the CA and zeolite thermograms; that is, for both of the temperature scales of the thermal events referred to above, their mass variations occurred roughly as the sum of the other two thermograms, because the fiber is made up of 50% of the weight of each part. The first process, with a weight loss of approximately 20%, can be attributed mainly to the release of H 2 O from the material due to the presence of zeolite, with the CA mass being practically constant in this temperature range. The second stage of decomposition is probably due to the degradation of the CA chain, with the zeolite mass remaining almost unchanged. The CA mass loss at this stage was 80%. The third and final stage can be due to complete fiber degradation, and part of the fiber has thermal stability lower than CA, with maximum CA losses at 335 and 360°C, respectively. Differential Scanning Calorimetry The DSC thermogram obtained for the ZCA is shown in Figure 6. The peaks were shown at different temperatures (180, 211, and 225°C). ZCA melting happened at a temperature lower than that of CA melting as indicated by other studies (Gómez et al., 2006). This may have been explained by the fact, that is, the strengthening as well as a lower amount of contacts between the CA chains. Also, the melting enthalpy was 3,600 kJ/g for ZCA. The higher energy involved during the ZCA melting process may be due to water volatilization, since TGA showed large mass loss in this temperature range. Effect of Contact Time The effect of equilibrium adsorption time on adsorption efficiency was studied at room temperature close to 25°C. To study that, an initial concentration of ERY of 20 mgL −1 and about 20 mg of ZCA adsorbent were used at different time intervals: 15, 30, 45, 60, 75, 90, and 120 min, as shown in Figure 7. The presence of large number of active sites made the adsorption of Frontiers in Chemistry | www.frontiersin.org July 2021 | Volume 9 | Article 709600 ERY to the surface of the adsorbents very easy and increased rapidly at an early stage. This process was followed by a slower rise in adsorption. 
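The decomposition temperatures quoted above follow from the derivative thermogram (DTGA) described in the methods; a minimal sketch of that step is shown below, differentiating a mass-loss curve numerically and reporting the temperature of fastest loss. The curve is a synthetic single-step example, not the recorded TGA trace of CA, zeolite, or ZCA.

```python
import numpy as np

def dtga_peak(temp_c, mass_pct):
    """Temperature (deg C) at which the mass-loss rate |d(mass)/dT| is largest."""
    dm_dT = np.gradient(mass_pct, temp_c)
    return float(temp_c[np.argmin(dm_dT)])   # most negative derivative = fastest loss

# synthetic single-step decomposition centred near 350 deg C, for illustration only
T = np.linspace(25, 700, 676)
mass = 100 - 75 / (1 + np.exp(-(T - 350) / 15))   # sigmoidal 75 % weight loss
print(f"main decomposition step at ~{dtga_peak(T, mass):.0f} deg C")
```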
This shows that the complex derivatives formed at the initial stage of adsorption are unstable, resulting in a rapid rate of adsorption. As a result of the presence of hydrogen protons emitted to the oxygen-containing solution on the adsorbent surface beside the presence of hydroxyl and carboxyl groups, this causes a slower adsorption speeds which could be due to a reduction in the driving force of the present adsorption sites. The various efficiencies of adsorption have shown that the absorbents do not show identical morphologies. Effect of Temperature Measurements of adsorption were carried out using an adsorbent weight of 20 mg, an initial concentration of 20 mg/L, and a time interval of 60 min. The removal of ERY, controlled by CA, zeolite, and ZCA tests, increased with a rise in temperature from 20 to 45°C, initially indicating an endothermic adsorption mechanism up to 30°C (Figure 8). This could lead to an improvement in the diffusion rate of ERY in the porous structure of the ZCA derivatives, raising the temperature. Due to high temperatures, the adsorption mechanism can include both physical and chemical adsorption, resulting in increased active sites due to bond breakup. The endothermic adsorption process can therefore be attributed to increased pore diameter. Nevertheless, increases in the removal of ERY were controlled with a rise in temperature from 20 to 45°C using CA, zeolite, and ZCA samples, showing a concentration equilibrium between ERY and adsorbents. Effect of ERY Initial Concentration Measurements of adsorption at room temperature (25°C) were carried out using separate initial ERY concentrations of 10, 20, 30, and 40 mg/L for 60 min and 20 mg of the three adsorbents, as shown in Figure 9. With an increase in the overall ERY content of up to 20 mg/L, the adsorption process improved and then started to decrease. The number of interchangeable sites in the adsorbent structures and the ratio of ERY to the three adsorbents were identified as the main factors for the decline in adsorption as initial ERY concentrations increased. The exchangeable sites on the adsorbents are saturated after increasing the ratio of ERY, resulting in a decrease in the efficiency of adsorption. It was observed that the adsorption capacity of adsorbents improved by 5% with an improvement in initial ERY concentrations from 10 to 20 mg/L. This may be the result of the substantial driving force transferred by the ERY concentration in order to defeat the resistance to mass movement between solid and liquid phases. As seen in Table 1, with reference to the previous studies, the innovation of this study can be summarized as using zeolite/ cellulose acetate blended fiber as the first example in the ERY removal literature. Kinetic Models and Adsorption Isotherms In this study, the modeling of adsorption kinetics was studied to help and describe the adsorption rate-controlling mechanism. We studied the adsorption kinetics of ERY using the three adsorbents at initial concentration of 30 mg/L and at 25°C. From this study, the obtained kinetic data were analyzed with the pseudo-first-order (Radi et al., 2015), pseudo-second-order (Hanbali et al., 2020), and intraparticle diffusion using Eqs 3, 4, 10 respectively. As seen in Figure 10, pseudo-second-order modeling showed an improved fit for adsorption calculations relative to pseudo-first-order modeling for all adsorbents. 
However, the results obtained in pseudo-first-order modeling were still adequate to define the sorption kinetics of ERY, showing that the surface showed both chemisorption and physi-sorption adsorption processes. The regression coefficient (R 2 ) of all adsorbents in the pseudo-second-order is very close to 1 more than the one for the pseudo-first-order. Also, the qe calculated for the three adsorbents in the pseudo-second-order is very close to the experimental one, as shown in Table 2. It has been shown that the pseudo-second-order modeling showed an acceptable match FIGURE 10 | Kinetic models of (A) pseudo-first-order, (B) pseudo-second-order processes, (C) the intraparticle diffusion for the adsorption of ERY by the three adsorbents at different time periods. to the adsorption compared to the pseudo-first-order modeling. The movement of ERY from aqueous solution to the adsorbents surfaces might be in different steps, that is, intraparticle diffusion, film diffusion, or both, and that is the rate determining step. The intraparticle diffusion model is shown. The constant Ci represents the boundary layer thickness, and Kpi is a constant. A plot between q t vs. t 1/2 showed straight line with an appropriate value of correlation coefficient (R 2 ) giving the applicability of the intraparticle diffusion model on all three forms of experimental data. For data that match the intraparticle diffusion model, one sees two distinct areas, meaning that two stages are involved in the diffusion process: the external transfer of mass or boundary diffusion of the layer and the intraparticle or micropore diffusion. A greater slope of the first step than the second step suggests a faster adsorption operation, which is due to the more accessible adsorption sites at the initial stage (Hong et al., 2020). Equilibrium Modeling Both Langmuir and Freundlich isotherms are the most widely used models for representing equilibrium data of adsorption of ERY onto three adsorbents that were investigated at 25°C for 30 min, with an adsorbent weight of 30 mg/L ( Figure 11). The equilibrium study was carried out in order to understand the mechanism of adsorption process, that is, Langmuir (Hanbali et al., 2020) and Freundlich (Brunauer et al., 1938), which assumes the adsorption of adsorbate as a function of equilibrium concentration. Langmuir isotherm best describes the monolayer adsorption of the solute from solution onto the adsorbent surface having a finite number of active sites present on it. The linear form of the Langmuir isotherm model is shown in Eq. 5. The results of the models are shown in Table 3. A dimensionless constant R L was calculated using Eq. 11. where Co is the original concentration of ERY (mg/L) and KL is the constant of Langmuir isotherm. The RL value represents adsorption mechanisms that are unfavorable (RL > 1), linear (RL 1), desirable (1 > RL > 0), or irreversible (RL 0) (Hong et al., 2020). The R L (0.106) values for ERY in the present study were <1 for the three adsorbents, which indicated favorable adsorption. Freundlich isotherm considers the heterogeneous surface and nonuniform distribution of heat of sorption. It is most favorably studied for description of the multilayer adsorption process (Eq. 6). In summary, the studied isotherms were best suited to Langmuir models, which is believed due to the high regression coefficient (R 2 ) value ( Table 3). 
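The isotherm fits summarised in Table 3 amount to two linear regressions plus the separation factor of Eq. 11; a hedged sketch is given below, assuming the linear forms Ce/qe = 1/(KL*qm) + Ce/qm and ln(qe) = ln(KF) + (1/n)*ln(Ce), together with RL = 1/(1 + KL*C0). The equilibrium points are placeholders, not the measured data.

```python
import numpy as np

ce = np.array([0.8, 1.9, 4.2, 8.5, 14.0])      # mg/L at equilibrium (hypothetical)
qe = np.array([22.0, 41.0, 63.0, 78.0, 86.0])   # mg/g (hypothetical)

# Langmuir, linear form Ce/qe = 1/(KL*qm) + Ce/qm
slope_l, intercept_l = np.polyfit(ce, ce / qe, 1)
qm = 1.0 / slope_l
kl = slope_l / intercept_l          # since intercept = 1/(KL*qm) = slope/KL

# Freundlich, linear form ln(qe) = ln(KF) + (1/n)*ln(Ce)
slope_f, intercept_f = np.polyfit(np.log(ce), np.log(qe), 1)
kf, n = float(np.exp(intercept_f)), 1.0 / slope_f

def separation_factor(kl_value, c0):
    """RL < 1 favourable, RL = 1 linear, RL > 1 unfavourable, RL = 0 irreversible."""
    return 1.0 / (1.0 + kl_value * c0)

print(f"Langmuir  : qm = {qm:.1f} mg/g, KL = {kl:.3f} L/mg, RL(20 mg/L) = {separation_factor(kl, 20):.3f}")
print(f"Freundlich: KF = {kf:.1f}, n = {n:.2f}")
```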
It can also be observed that the surfaces of all three adsorbents are homogeneous and that adsorption process occurred mainly in the monolayer system. Thermodynamic Study The adsorption thermodynamics for the adsorption process of ERY onto three adsorbents are displayed in Table 4, in order to understand the nature of ERY adsorption on the three adsorbents using Eqs. 7-9. The three adsorbents showed negative values of ΔH, and the values were −6,200, −8,500, and −9600 kJ/mol for zeolite, CA, and ZCA, respectively, and this shows that the adsorption is exothermic. The positive values of ΔS for ERY on the three adsorbents showed some orderliness on the surfaces of adsorbents. Meanwhile, spontaneous sorption nature of the reaction was depicted by negative values of ΔG, that is, −1.32, −0.1.56, and −1.9 kJ/mol, respectively. Quantum Chemical Studies The optimized geometry and calculated physical and quantum chemical quantities of ERY are given in Figure 1 and Table 1, respectively. Accordingly, the dipole moment (D), polarizability, (α), and first-order hyperpolarizability (β) values of ERY compound were determined as 4.421 D, 416.124 au, and 169.795 au, respectively. Also, the thermodynamic quantities ΔE, ΔH, and ΔG including the thermal correction were calculated at −2,467.184 au, −2,467.129 au, and −2,467.269 au, respectively. As known well, the vibrational freedom constitutes a remarkable part of the thermal energy as well as the entropy (S) and heat capacity (Cv) for the molecular systems (McQuarrie, 1973;Sandler, 2010;Herzberg, 2013). From Table 5, the thermal energy (ΔE) and vibrational movement contribution to the ΔE (ΔE vib. ) were predicted at 730.238 kcal/mol and 728.461 kcal/mol, respectively. In addition, the Cv and S values of ERY compound In addition, the QCPs are used successfully to assess the reactivity and its selectivity from the simple molecular systems (Serdaroglu, 2011a;Serdaroglu, 2011b;Serdaroglu and Ortiz, 2017;Serdaroglu et al., 2020) to complex systems (Jacob et al., 2020;Al-Otaibi et al., 2021;Junejo et al., 2021). In this work, the chemical reactivity tendency of ERY was assessed in light of the calculated QCPs and is displayed in Table 5. ΔE gap and µ (eV) were determined at 9.757 and −4.506 eV, respectively. As known well, the hardness value is a very helpful parameter to assess the chemical reactivity, especially in the evaluation of the adsorption processes. Hence, it has been the main subject of a series of theoretical investigations (Parr and Pearson, 1983;Berkowitz et al., 1985;Yang and Parr, 1985;Pearson, 1986;Zhou and Parr, 1989;Parr and Chattaraj, 1991;Parr and Gazquez, 1993;von Szentpaĺy et al., 2020;Chaudhary et al., 2021) to be able to calculate it by using the different atomic and/or molecular constants and/or quantities such as ionization energies and electronegativities of the atoms in a specific molecule (Kaya and Kaya, 2015a), and atomic charges (Kaya and Kaya, 2015b). In addition, the molecular hardness has been reported to be able to be used in the theoretical prediction of the lattice energies of the ionic crystals (Kaya and Kaya, 2015c;Kaya et al., 2016;Islam and Kaya, 2018). In this work, the η and Δε back−donat values of ERY were calculated at 4.879 and −1.220 eV, respectively. Furthermore, Table 5 displayed that the electron-donating power (0.182°au) of the ERY compound was calculated to be greater than the electron-accepting power (0.016°au), which affirmed that the ERY compound preferred the charge transfer to the metal surfaces. 
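For readers who wish to reproduce descriptors of this kind, the helper below evaluates the Koopmans-based quantities defined in the computational section from a HOMO/LUMO pair. The formulas are the commonly assumed ones (stated in the comments), and the orbital energies are placeholders chosen only to be roughly consistent with the reported 9.757 eV gap, not the actual HF/6-311G** output.

```python
# Assumed formulas: I = -E_HOMO, A = -E_LUMO, chi = (I+A)/2, mu = -chi,
# eta = (I-A)/2, omega = mu^2/(2*eta), dN_max = -mu/eta,
# omega(-) = (3I+A)^2/(16(I-A)), omega(+) = (I+3A)^2/(16(I-A)), dE_back = -eta/4.
EV_PER_HARTREE = 27.2114

def reactivity_descriptors(e_homo_ev, e_lumo_ev):
    I, A = -e_homo_ev, -e_lumo_ev
    chi = (I + A) / 2.0
    mu, eta = -chi, (I - A) / 2.0
    return {
        "gap (eV)": e_lumo_ev - e_homo_ev,
        "mu (eV)": mu,
        "eta (eV)": eta,
        "omega (eV)": mu**2 / (2.0 * eta),
        "dN_max": -mu / eta,
        "omega- (au)": (3*I + A)**2 / (16*(I - A)) / EV_PER_HARTREE,   # electron-donating power
        "omega+ (au)": (I + 3*A)**2 / (16*(I - A)) / EV_PER_HARTREE,   # electron-accepting power
        "dE_back-donation (eV)": -eta / 4.0,
    }

# placeholder HOMO/LUMO energies (eV), illustrative only
for name, value in reactivity_descriptors(-9.385, 0.372).items():
    print(f"{name:>24s}: {value: .3f}")
```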
In past, corrosion inhibition efficiency was reported to increase with an increase of the electron-donating ability in case ΔN < 3.6, and vice versa for ΔN > 3.6 (Lukovits et al., 2001). According to the ΔN (0.032 < 3.6) and electro-donating power values, the adsorption of ERY toward the studied adsorbents is easily noticed to be actualized via the charge transfer from the ERY compound to studied adsorbents. Moreover, the possible nucleophilic (HOMO) and electrophilic (LUMO) attack sites of ERY compound are shown in Figure 13. The HOMO density was mostly amplified over the dimethyl amin (-N(CH 3 ) 2 ) functional and partially be scattered on the oxacyclohexane ring. On the other side, the LUMO broadens on the surrounding of -(C O)functional group of ERY. In addition, the MEP graphs displayed the richness of the electron by red color (V < 0) and poorness by blue color (V > 0) fields of the ERY compound. As expected, the charge transfer zeolites are minerals that contain mainly aluminum and silicon compounds-C O groups were covered by red color to electrophilic attacks, and the H Atom of the -O-H group was marked by blue color for the nucleophilic attacks. Also, the saturated C-chain of ERY presented neutral attitude for both nucleophiles and electrophiles because it is covered by green color. The chemical reactivity of many kinds of molecular systems (Mustafa and Serdaroglu, 2017;Jacob et al., 2020;Serdaroglu, 2020;Serdaroglu et al., 2020;Al-Otaibi et al., 2021) has also been clarified by using the results of the second-order perturbative energy analysis. Table 6 summarized the lowering of the stabilization energy, possible interaction types, and the occupancies of both donor and acceptor orbitals. As expected, the mainly saturated structure of the ERY compound, the dominant interactions contributed to E (2) (62.33 kcal/mol) were sourced from the charge transfer to anti-bonding orbital U* O13-C39 (ED j 0.15098e) from nonbonding orbital LP (2) O6 (ED i 1.84800e). Also, the hyperconjugations due to the charge movement from each filled orbital σ C31-C43 and σ C31-H72 to unfilled orbital U* C12-O36 were calculated with the E (2) of 4.77 kcal/mol and 2.39 kcal/mol, respectively, even if they did not contribute much to the E (2) . From Table 6, the remaining interactions were due to the anomeric interactions, and the highest energy interactions among them were predicted as the interaction LP (2) O13 (ED i 1.88472e) → σ* O6-C39 (ED j 0.07615e) in 42.55 kcal/mol. Similarly, the charge movement from the lone pair of the oxygen atom known as the strong electron-donating of the -C O group to neighbor orbitals also had great responsibility of energy lowering. Namely, the LP (2) O12 → σ* C21-C36 (E (2) 24.58 kcal/mol), LP (2) O12→ σ* C31-C36 (E (2) 24.89 kcal/ mol), and LP (2) O13 → σ* C20-C39 (E (2) 24.31 kcal/mol) interactions also had significant roles in the lowering of the stabilization energy. As known well, the -NH2 group also has strong capability of electron-donating. From Table 6, the charge movement from the N atom of the -NH2 group to each of σ* C28-H67, σ* C48-H108, and σ* C49-H111 interactions was calculated with the energy of 10.72, 10.83, and 10.74 kcal/mol, respectively. Here, it can be considered that these interactions have significant responsibility of the possible intermolecular interactions due to the charge movement that existed in a molecular system, affecting the polarity distribution on the surface. 
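A small sketch of the second-order perturbative stabilization energy used throughout this NBO discussion is given below. The form assumed is E(2) = qi * Fij^2 / (eps_j - eps_i), with qi the donor-orbital occupancy and Fij the off-diagonal NBO Fock element, as described in the computational section; the sample orbital values are hypothetical, not entries from the reported NBO output.

```python
HARTREE_TO_KCAL = 627.509

def e2_stabilization(qi, fij_au, eps_i_au, eps_j_au):
    """Second-order donor(i) -> acceptor(j) stabilization energy in kcal/mol."""
    return qi * fij_au**2 / (eps_j_au - eps_i_au) * HARTREE_TO_KCAL

# e.g. a lone pair (occupancy ~1.85 e) donating into a sigma* orbital
# (all four numbers are placeholders chosen only to give a plausible magnitude)
print(f"E(2) ~ {e2_stabilization(1.85, 0.12, -0.52, 0.38):.1f} kcal/mol")
```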
Desorption Study The stability and reusability of the three adsorbents are especially critical for widespread applications. The adsorption-desorption recycling test, as shown in Figure 14, was used to investigate the adsorbents' stability further. The adsorbents were washed twice with ethanol after each run and then reused for the next stage of adsorption Gholamiyan et al., 2020). The findings show that there is no significant loss of adsorption site after three runs, showing that Frontiers in Chemistry | www.frontiersin.org July 2021 | Volume 9 | Article 709600 the three adsorbents are more reliable. After the first three regeneration cycles, the adsorption efficiency loss of the three adsorbents to ERY was only about 5.04%. The result was attributed to the reduction of the binding sites in imprinted polymer matrix during regeneration cycles (Barrett et al., 1951). Therefore, the three adsorbents can be reused at least three times without significantly decreasing their adsorption capacities. CONCLUSION With an increase in the number of studies and research on the fate of pharmaceuticals, personal care products, and their environmental effects on human beings, many researches have been published. As the population and economies grow, numerous antibiotics are increasingly being used in biomanufacturing, livestock farming, and pharmaceutical industries. The QCPs revealed that the adsorption of ERY toward the studied adsorbents actualize via the charge transfer from the ERY compound to studied adsorbents, because of the ΔN (0.032 < 3.6) and electro-donating power values. The MEP plots pointed out that the -C O groups were covered by red color to electrophilic attacks and the H Atom of the -O-H group was marked by blue color for the nucleophilic attacks. The NBO analysis of ERY indicated that the anomeric and hyper-conjugative interactions have chief responsibility of the possible intermolecular interactions because of the charge movement affecting the polarity distribution on the surface. The three adsorbents zeolite, cellulose acetate, and ZCA were used to study the removal of ERY from aqueous liquid prepared in the lab. Several characterizations were done on both the adsorbents and ERY including SEM, XRD, FTIR, and TGA. A brief summary of the results was shown in Abstract, and more details about the results were presented in the Results section. In summary, the three adsorbents showed very high removal efficiency and reached more than 98% using the fiber. Those adsorbents showed very high reusability, and this will save a lot of money and protect environment. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors. AUTHOR CONTRIBUTIONS SJ: wrote the manuscript, IE: experimental work, OH: editing the manuscript, YM: did most of the plots. GH and SS: financial aid, OD: explained some data. Soheila Gholamiyan: formal analysis and concept creation. Majid Hamzehloo: formal analysis, conceptualization, control, validation, and visualization. Abdolhadi Farokhnia's responsibilities include systematic analysis, writing-review and editing, conceptualization, supervision, confirmation, and visualization.
T1rho mapping of entire femoral cartilage using depth- and angle-dependent analysis Objectives To create and evaluate normalized T1rho profiles of the entire femoral cartilage in healthy subjects with three-dimensional (3D) angle- and depth-dependent analysis. Methods T1rho images of the knee from 20 healthy volunteers were acquired on a 3.0-T unit. Cartilage segmentation of the entire femur was performed slice-by-slice by a board-certified radiologist. The T1rho depth/angle-dependent profile was investigated by partitioning cartilage into superficial and deep layers, and angular segmentation in increments of 4° over the length of segmented cartilage. Average T1rho values were calculated with normalized T1rho profiles. Surface maps and 3D graphs were created. Results T1rho profiles have regional and depth variations, with no significant magic angle effect. Average T1rho values in the superficial layer of the femoral cartilage were higher than those in the deep layer in most locations (p < 0.05). T1rho values in the deep layer of the weight-bearing portions of the medial and lateral condyles were lower than those of the corresponding non-weight-bearing portions (p < 0.05). Surface maps and 3D graphs demonstrated that cartilage T1rho values were not homogeneous over the entire femur. Conclusions Normalized T1rho profiles from the entire femoral cartilage will be useful for diagnosing local or early T1rho abnormalities and osteoarthritis in clinical applications. Key Points • T1rho profiles are not homogeneous over the entire femur. • There is angle- and depth-dependent variation in T1rho profiles. • There is no influence of magic angle effect on T1rho profiles. • Maps/graphs might be useful if several difficulties are solved. Introduction Osteoarthritis (OA) is the most common type of arthritis and a leading cause of pain. In 2010 in the United States, it represented the 11th most common cause of disability, and was responsible for 2.7 % of all years lived with disability [1,2]. The social cost of OA can be between 0.25 % and 0.50 % of a country's gross domestic product (GDP) [2]. New and advanced therapeutic modalities are being developed for the treatment of OA, including new chondroprotective and chondro-regenerative drugs, mesenchymal stem cell therapy, osteochondral autograft transfer, and autologous chondrocyte implantation [3]. Therefore, it is important to detect early degenerative changes in cartilage in vivo and to understand its natural progression in order to treat OA. New MR techniques for cartilage evaluation have recently been developed, and enable us to assess proteoglycan content, collagen content and orientation, water mobility, and regional cartilage compressibility using T2 and T1rho mapping, sodium MRI, delayed gadolinium-enhanced MRI of cartilage (dGEMRIC), and diffusion tensor imaging [4][5][6][7][8]. Increases in T2 relaxation time of cartilage have been associated with loss of collagen matrix anisotropy, which is a result of increased permeability of the matrix in degenerated cartilage. In contrast, T1rho imaging of cartilage shows a strong correlation between T1rho values and cartilage proteoglycan content depletion, one of the earliest degenerative cartilage changes in OA [4,9,10]. T1rho mapping has been a more sensitive indicator for cartilage degeneration than T2 mapping, and has enabled early detection of cartilage degeneration in early OA patients before gross morphological change occurs [4,11]. 
However, what can be considered a normal range of T1rho values at specific locations of the knee is not well understood. There are also no available data about variations in T1rho measurements, especially regarding the depth and angle dependence of T1rho values over the entire femoral cartilage. This is in contrast to the well-known angle dependence that exists in T2 profiles [12]. Many reports have recently been published regarding T1rho values of healthy and damaged knee cartilage, although the methodology of segmentation and analyses varies among them (Table 1) [13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32]. The number of slices utilized in knee cartilage segmentation in most reports is typically only one to several slices, and not all of the slices from the entire knee. There are also no previous publications or reference standards describing the entire femoral T1rho map profile in normal subjects, with analysis of cartilage layer variations. This paucity of data, in turn, makes the clinical diagnosis of early OA with T1rho mapping difficult to achieve. In order to successfully apply T1rho mapping in clinical use, it is important to understand the normal T1rho profiles for the entire knee cartilage. Therefore, the purpose of this study was to create normalized T1rho profiles of the healthy entire femoral cartilage with surface maps and 3D graphs using angle-and depthdependent analysis, and to evaluate the usefulness of this approach. Subjects We recruited 23 healthy volunteers (mean age, 28.9 years; range, [19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38] for participation in this study. Inclusion criteria for all subjects included asymptomatic individuals between 18 and 40 years of age, with no prior history of knee injury or surgery. We excluded three subjects from the study, including one subject who had a large knee which could not fit the knee coil, and two subjects with claustrophobia. The study was approved by our institutional review board, and conformed to the tenets of the Declaration of Helsinki. Written informed consent was obtained from each subject. Image processing and cartilage segmentation of entire knee Images were transferred in DICOM (Digital Imaging and Communications in Medicine) format to a personal computer (PC; Windows 7), which was used to perform all postprocessing and analyses. For possible motion between scans, T1rho series were first realigned relative to the first TSL images using rigid-body transformation before being fitted to mono-exponential function on a pixel-by-pixel basis for generation of T1rho maps: S(TSL) = S0 × exp(−TSL/T1rho), where S0 is the signal intensity when TSL=0. The cartilage of the entire femur was extracted slice-by-slice by T.N., a board-certified radiologist of the Japanese College of Radiology, with 13 years of experience and subspecialization in musculoskeletal radiology. TSL=20 was specifically chosen for segmentation because the shortest spin-lock length has the highest signal-to-noise ratio. The shortest TSL was used for segmentation of cartilage in a prior study [20]. Once cartilage was segmented by manually placing vertices along the boundary, their x and y coordinates were used in a circle-fitting algorithm by assuming a circular cartilage shape around an imaginary centre position in the subchondral bone, whose coordinates were estimated in a least squares manner, and which the user could manually place instead if necessary. 
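A minimal sketch of the pixel-wise mono-exponential fit described above, S(TSL) = S0 * exp(-TSL/T1rho), is shown below. It uses a simple log-linear least-squares fit in Python rather than the nonlinear fit of the in-house MATLAB software, and the image stack is synthetic rather than a registered DICOM series.

```python
import numpy as np

def fit_t1rho(stack, tsl_ms):
    """stack: (n_tsl, H, W) signal images; returns an (H, W) map of T1rho in ms."""
    n, h, w = stack.shape
    y = np.log(np.clip(stack.reshape(n, -1), 1e-6, None))   # ln S per pixel
    A = np.stack([tsl_ms, np.ones_like(tsl_ms)], axis=1)    # design matrix [TSL, 1]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)             # slope = -1/T1rho, intercept = ln S0
    with np.errstate(divide="ignore"):
        t1rho = -1.0 / coef[0]
    return t1rho.reshape(h, w)

if __name__ == "__main__":
    tsl = np.array([1.0, 20.0, 40.0, 60.0, 80.0])            # spin-lock times (ms), illustrative
    true_map = np.full((4, 4), 45.0)                          # ms
    stack = 1000.0 * np.exp(-tsl[:, None, None] / true_map[None])
    print(np.round(fit_t1rho(stack, tsl), 1))                 # recovers ~45 ms everywhere
```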
Additional boundary vertices with finer spacing were then interpolated and used for computation of slope angles for the radial vectors from the centre position to each boundary vertex. For each slope angle (in 1°increments) the farthest (closest to the articular surface) and nearest (closest to the bone) boundary vertices were recorded, while the radial points between the boundary vertices were approximated by linear interpolation and recorded for subsequent depth/angle-based segmentation of cartilage. The T1rho depth/angle-dependent profile was investigated in this study by partitioning cartilage into two layers (deep, 51-100 %; superficial, 0-50 %) and angular segmentation in increments of 4°over the length of segmented cartilage (the angle 0 defined along B0) with positive/negative angles in a counterclockwise/clockwise rotation (Fig. 1). The method of partitioning of the cartilage into two halves was described in several recent studies [33,34]. All image processing described above was performed using in-house-developed and implemented software in MATLAB (The MathWorks, Inc., Natick, MA, USA). Bland-Altman plot-based investigation of the inter-and intra-operator agreement in manual cartilage segmentation of the same data set was carried out and published previously [35]. The measured mean difference in the size of individual angular segments and limits of agreement (95 %, expressed as±2 SD) in number of pixels were −1.4 ± 15.5 and 2.5 ± 19.4, respectively, for the inter-and intra-operator agreement, while those converted into area (mm 2 ) were −0.11± 1.16 and 0.19 ±1.45, respectively. 3D T1rho map of normalized femoral cartilage and analysis of T1 rho values We calculated the average T1rho values of cartilage over each anatomical landmark and the entire femur on each normalized knee in both the superficial and deep layer with 4°stepwise analysis. In addition, we compared the average T1rho values at representative angles of −54°, −30°, 0°, +30°, and +54°to evaluate angle dependent change, including the magic angle effect, as described in a prior paper [36]. The weight-bearing effect was evaluated by dividing the articular cartilage into weight-bearing and non-weight-bearing portions, with definition of the angle from −30°to +30°as weight-bearing, and angles smaller than −30°or greater than 30°as non-weightbearing [37]. Finally, we created 3D graphs and surface maps of the T1rho profiles using R version 3.0.2 for Windows software (R Development Core Team, Vienna, Austria) and MATLAB. In creating 3D graphs, we used the thin-plate spline method as an interpolant [38]. Statistical analysis The differences in T1rho values between superficial and deep layers, and between weight-bearing and non-weight-bearing areas, were statistically analysed using the Wilcoxon signedrank test. A p value <0.05 was considered to be statistically significant. Statistical analyses were performed using R version 3.0.2 for Windows software (R Development Core Team, Vienna, Austria). Results Angle-and depth dependence of T1rho profiles Figure 2 shows the T1rho profiles of each layer with angledependent analysis at the medial condyle, lateral condyle, and trochlea. In the medial and lateral condyles, T1rho values were higher in the superficial layer than in the deep layer in most locations, including the weight-bearing portion. 
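The depth/angle partitioning just described reduces, per pixel, to assigning a slope angle about the fitted centre and a normalised depth, then averaging within 4-degree segments split into superficial (0-50 %) and deep (51-100 %) halves. A rough sketch of that binning is given below with synthetic inputs; the angle and depth assignment itself is assumed to come from the segmentation step.

```python
import numpy as np

def bin_t1rho(t1rho, angles_deg, depth_frac, step_deg=4.0):
    """Return {segment start angle: mean T1rho} dicts for superficial and deep layers."""
    seg = np.floor(angles_deg / step_deg) * step_deg
    superficial, deep = {}, {}
    for s in np.unique(seg):
        in_seg = seg == s
        sup = in_seg & (depth_frac <= 0.5)     # 0-50 % of thickness (articular side)
        dp = in_seg & (depth_frac > 0.5)       # 51-100 % of thickness (bone side)
        if sup.any():
            superficial[float(s)] = float(t1rho[sup].mean())
        if dp.any():
            deep[float(s)] = float(t1rho[dp].mean())
    return superficial, deep

# toy example: 1000 cartilage pixels between -60 and +60 degrees
rng = np.random.default_rng(0)
ang = rng.uniform(-60, 60, 1000)
dep = rng.uniform(0, 1, 1000)
vals = 40 + 8 * (dep <= 0.5) + rng.normal(0, 2, 1000)   # superficial layer ~8 ms higher
sup, deep = bin_t1rho(vals, ang, dep)
print({k: round(v, 1) for k, v in list(sup.items())[:3]})
```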
In the trochlea, T1rho values were higher in the superficial layer than the deep layer above approximately −50°, and an inverse relationship appeared below −50°, with higher T1rho values in the deep layer. Throughout each anatomical landmark (MC, LC, T), T1rho values were not constant, even within the weight-bearing portion (−30 to +30°) of the medial and lateral condyles. Angular variations among the T1rho profiles in each layer demonstrated no influence of magic angle effect. Figure 3 shows the average cartilage T1rho values in each layer at the anatomical landmarks and the entire femur. Average T1rho values in the superficial layer of the femoral articular cartilage were higher than those in the deep layer over the entire femur, medial condyle, and lateral condyle, with significant difference (p< 0.05). The difference in T1rho values between the two layers at the trochlea, however, were not statistically significant. Table 2 shows the average cartilage T1rho values of the weight-bearing and non-weight-bearing portions of the entire femur, medial femoral condyle, and lateral femoral condyle. T1rho values of the deep layer of the weight-bearing portions of the medial and lateral condyles were lower than those of the non-weight-bearing portions, with a statistically significant difference (p<0.05). In contrast, there was no significant difference in T1rho values of the Fig. 1 Sagittal SPGR images in the section of the (a) medial condyle and the (b) trochlea from T1rho sequences after manual segmentation with post-processing. After manual segmentation, angular analysis with in step of 4°, and depth analysis of the superficial and deep layers was performed superficial layer between the weight-bearing and nonweight-bearing portions. Table 3 shows the average T1rho values at representative angles. There was no influence of a magic angle effect on T1rho values, although there was angular variation in each layer. Again, T1rho values in the deep layer were lower than those in the superficial layer at all representative angles. 3D graphs and surface maps of T1rho profile Figure 4 demonstrates 3D graphs of T1rho mapping. Figure 4a is a 3D coloured map of T1rho from the whole layer of the entire femoral condyle illustrating the differences among T1rho values. The remaining graphs reveal threedimensional T1rho values in the superficial, deep, and whole layers of the entire femoral condyle, viewed by different sections (Fig. 4b-d). Each layer is represented by a single colour tone. It is well demonstrated here that cartilage T1rho values were not homogeneous throughout the entire femoral condyle. Figure 5 shows projection maps of mean T1rho profiles of normalized femoral cartilage, with 23 slices in each layer. The colour bar indicates red for high, green for middle, and blue for low T1rho values. T1rho values tended to be lower within the deep layer of the weight-bearing portion, and were not homogeneous in each layer throughout the entire femoral condyle. There was a focal area of decreased T1rho values in the deep layers of the inferior trochlea and the anterior aspect of the lateral femoral condyle, as seen in Fig. 2b and c. Discussion In this study, we presented several new findings regarding the T1rho profile for the entire femoral cartilage, utilizing a depth/ angle-dependent analysis. First, our results demonstrated angle-and depth-dependent variations in the T1rho profile for each layer, with no influence of a magic angle effect. 
Second, we created normalized T1rho profiles of healthy femoral cartilage based on three-dimensional angle-and depthdependent analysis utilizing surface maps and 3D graphs. This analysis demonstrated that T1rho values were not homogeneous in each layer throughout the entire femoral condyle. With respect to comparisons between weight-bearing and non-weight-bearing portions, Goto et al. reported that T1rho values in normal asymptomatic volunteers were higher in weight-bearing regions of the medial femoral cartilage than in less-weight-bearing regions. On the other hand, in the lateral femoral condyle, they found no significant difference in T1rho values between weight-bearing and less-weightbearing regions [15]. In contrast, in a study of ACL injury patients and controls, Bolbos et al. found significantly higher T1rho values within the non-weight-bearing versus weightbearing portions of the femoral condyle [24]. Thus these two studies demonstrated opposite results. In our study, T1rho values were higher in the non-weight-bearing portion than in the weight-bearing portion over the medial and lateral condyles, which supports the results of the Bolbos et al. study. This finding is more significant in the deep layer. In other words, proteoglycan content was greater and T1rho values were lower in the weight-bearing portion, especially in the deep layer. In the analysis of depth and angle dependence, it is important to note that cartilage has an organized and layered structure, divided into four zones: the superficial zone, middle zone, deep zone, and zone of calcified cartilage. Chondrocytes in the superficial zone secrete relatively little proteoglycan [39,40]. In contrast, the deep zone contains the highest proteoglycan concentration. Proteoglycans resist compression, and generate swelling pressure due to their affinity for water. The deep zone consists of large-diameter collagen fibrils oriented perpendicular to the articular surface. This layer contains the highest proteoglycan and lowest water concentrations, and has the highest compressive modulus [41]. More resistance to various forces in knee activity is required in the weight-bearing portion. Therefore, it makes sense that cartilage in the weight-bearing portion needs more proteoglycan, which results in lower T1rho values in this region, as seen in the present study. Furthermore, we postulate that the difference in T1rho values between the two previously mentioned studies may be due to the methodology of segmentation and ROI placement. ROIs were manually drawn by Goto et al. in a portion of the cartilage within the medial and lateral condyles, and they did not utilize entire slices for femoral cartilage segmentation [15]. Our study demonstrates that T1rho values are not constant throughout the femoral condyle and trochlea, even within the weight-bearing portions (−30 to +30°) of the medial and lateral condyles. For example, there is a minimal peak of T1rho values in the deep layer at approximately −10 to −20°within the trochlea and −30°within the lateral condyle (Figs. 2b, c and 5c). These locations correspond to cartilage of the distal trochlea and anterior to the lateral femoral notch. Yoshioka et al. reported that 3D SPGR images showed nonuniform signal intensity within articular cartilage of the knee, and signal intensity was decreased in these locations [42]. 
Hypointense regions of the articular cartilage are known to correspond to collagenous tissue or extracellular matrix [43,44], with an associated decrease in T1rho values as collagen content increases [45]. There is also evidence that several factors other than proteoglycan depletion may contribute to variations in T1rho values. These include collagen fibre orientation/concentration and the concentration of other macromolecules [6,46]. The distal trochlea is likely non-weightbearing. Therefore, the minimal peak of cartilage T1rho in the distal trochlea seems to be mainly attributable to an increase in collagen content. In our study, cartilage T1rho values over the trochlea showed less difference between the superficial and deep layers. Those values above and below approximately −50°d emonstrated an inverse relationship. One reason for this is that the trochlea appears to be less weight-bearing, and thus less proteoglycan is necessary in the deep layer. The other reason is that T1rho values in the superficial layer of the trochlea were decreased below −50°in the area where the trochlea opposes the patella. During flexion and extension of the knee, the patella moves back and forth inside the trochlear groove. The increase in shear stress within the superficial layer of the trochlea may require more collagen, since type II collagen contributes to the shear and tensile properties of the tissue [41]. This finding may explain the decrease in T1rho of the superficial layer over the trochlea at the patellofemoral joint. Researchers have also reported observing a magic angle effect for T1rho relaxation time in the study of human osteoarthritic cartilage specimens [25]. However, our results showed no significant magic angle effect on T1rho profiles, although there were angle-dependent variations in both superficial and deep layers and a subtle nonspecific peak in the deep layer of the medial femoral condyle between +50°and +70°. Additionally, 3D graphs and 2D surface maps failed to demonstrate an apparent peak at 54°. We were able to compare T1rho values from throughout the entire femoral cartilage with previously reported results at specific locations [15,17,21,24,26]. Two different 3D graphs based on thin-plate spline, and two dimensional (2D) projection maps of T1rho values in normalized femoral cartilage with a colour-bar help us to visualize and understand T1rho variations within the entire femoral condylar cartilage. With a 3D graph, we are able to recognize the differences in T1rho values more stereographically. In laminar analysis, the 3D graph with all layers is more comprehensible ( Fig. 4b-d). However, the 3D graph has the disadvantage of overestimating or underestimating T1rho values because of the effect of interpolation [47]. A 2D surface map in this study showed two-dimensional projection of T1rho values, which will enable easier comparison of normalized volunteer data with patient data in clinical applications. We were able to analyse T1rho values three-dimensionally across the entire femoral condyle utilizing 3D graphs and surface maps from various points of view, including angle, layer, slice, and anatomic landmark. These improve upon previously reported 2D analyses with several slices from the knee. We believe that 3D graph and surface map analysis of the entire femoral condyle is one of the most promising tools for cartilage T1rho analysis. This method can also be easily applied to T2 map analysis. 
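As an illustration of the interpolation step behind the 3D graphs discussed here, the sketch below fits a thin-plate-spline surface to scattered (slice, angle) T1rho samples and evaluates it on a regular grid, in the spirit of (but not identical to) the R/MATLAB processing used in the study; the sample values are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(1, 23, 200),      # slice index (1-23)
                       rng.uniform(-90, 90, 200)])   # angular position (degrees)
t1rho = 42 + 5 * np.cos(np.deg2rad(pts[:, 1])) + rng.normal(0, 1.5, 200)   # ms, toy values

tps = RBFInterpolator(pts, t1rho, kernel="thin_plate_spline", smoothing=1.0)

# evaluate on a regular grid for display as a 3D surface or 2D projection map
slices, angles = np.meshgrid(np.arange(1, 24), np.arange(-90, 91, 4), indexing="ij")
grid = np.column_stack([slices.ravel(), angles.ravel()])
surface = tps(grid).reshape(slices.shape)
print(surface.shape, round(float(surface.mean()), 1))
```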
From a clinical perspective, it is important to understand that T1rho values have angle/depth-dependent variations among the various anatomical landmarks. In this study, these appeared to be influenced by water, collagen, and proteoglycan content in various locations and to various degrees, with little influence of collagen orientation. Therefore, detailed T1rho profiles of the standardized entire normal femoral cartilage could serve as a baseline for investigating pathological conditions of knee cartilage. These findings may be particularly helpful for the accurate diagnosis of early cartilage degeneration, i.e. early OA, and within a specific location and depth within the cartilage. Our study has several limitations. First, the study sample was small. According to a power analysis based on a twosample t test model of equal allocation and variance, and using previously published data [32], the sample size (n = 20) yielded 75 % power at a significance level of 0.05 (alpha= 0.05). In the future, we will need T1rho data from normalized entire knee cartilage for much larger samples in order to improve accuracy. Second, the superficial and deep layers we analysed do not correlate with the four physiologic zones of articular cartilage [41]. It is an inevitable technical limitation that the spatial resolution is not sufficient to distinguish four physiologic layers. Third, there is limitation in normalizing the entire femoral cartilage. A measurement error will become larger as the distance from the each landmark becomes greater, because the size of femoral cartilage is different among subjects. To fix this problem, we need to analyse larger samples, which will reduce the measurement error in normalization for future studies. Fourth, we must consider rotation effects in the longitudinal/transverse axis of subjects. Because each knee of the subject is placed in a slightly flexed and rotated position in the coil, obtained sagittal images were not completely matched in longitudinal/transverse axis. However, since sagittal images were obtained without oblique angulation, parallel to the B0, we believe we have minimized the difference of rotation effects, especially for the magic angle effect. Fifth, it was outside the scope of our study to assess the effect of varus/ valgus malalignment and deviation of the leg axis. Varus and valgus knee malalignment influences the distribution of load at the knee joint and has been shown to be a possible factor for OA [48]. Finally, it was time-consuming to extract the entire femoral cartilage data by manual segmentation and analyse T1rho values with 3D graphs and surface maps. Therefore, it is clinically and practically difficult to process knee samples from a large number of patients using our manual method. Further technical development of auto-segmentation and auto-3D analysis is needed in order to efficiently apply this form of analysis to patient data in clinical practice. In conclusion, T1rho values of the femoral cartilage demonstrate regional and depth variation, with lower T1rho values in the deep layer and no significant magic angle effect. T1rho values across the entire femoral condyle can be analysed three-dimensionally by utilizing 3D graphs and surface maps using different display parameters. It is important to understand the normal T1rho variations that occur throughout the entire knee cartilage in order to detect early T1rho abnormalities and early OA in clinical applications. Acknowledgments The authors thank Dr. 
Andrew Chang, M.D., for preparation of the manuscript. The scientific guarantor of this publication is Hiroshi Yoshioka. The authors of this manuscript declare no relationships with any companies whose products or services may be related to the subject matter of the article. This study has received funding by the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through Grant UL1 TR000153. One of the authors has significant statistical expertise. Institutional review board approval was obtained. Written informed consent was obtained from all subjects (patients) in this study. Methodology: prospective, experimental, performed at one institution. Open Access This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http:// creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Separate Your Domains: NIST PQC KEMs, Oracle Cloning and Read-Only Indifferentiability It is convenient and common for schemes in the random oracle model to assume access to multiple random oracles (ROs), leaving to implementations the task—we call it oracle cloning—of constructing them from a single RO. The first part of the paper is a case study of oracle cloning in KEM submissions to the NIST Post-Quantum Cryptography standardization process. We give key-recovery attacks on some submissions arising from mistakes in oracle cloning, and find other submissions using oracle cloning methods whose validity is unclear. Motivated by this, the second part of the paper gives a theoretical treatment of oracle cloning. We give a definition of what is an "oracle cloning method" and what it means for such a method to "work," in a framework we call read-only indifferentiability, a simple variant of classical indifferentiability that yields security not only for usage in single-stage games but also in multi-stage ones. We formalize domain separation, and specify and study many oracle cloning methods, including common domain-separating ones, giving some general results to justify (prove read-only indifferentiability of) certain classes of methods. We are not only able to validate the oracle cloning methods used in many of the unbroken NIST PQC KEMs, but also able to specify and validate oracle cloning methods that may be useful beyond that. Introduction Theoretical works giving, and proving secure, schemes in the random oracle (RO) model [11], often, for convenience, assume access to multiple, independent ROs. Implementations, however, like to implement them all via a single hash function like SHA256 that is assumed to be a RO. The transition from one RO to many is, in principle, easy. One can use a method suggested by BR [11] and usually called "domain separation." For example, to build three random oracles H 1 , H 2 , H 3 from a single one, H, define H i (x) = H(<i> || x) (1), where <i> is the representation of integer i as a bit-string of some fixed length, say one byte. One might ask if there is justifying theory: a proof that the above "works," and a definition of what "works" means. A likely response is that it is obvious it works, and theory would be pedantic. If it were merely a question of the specific domain-separation method of Eq. (1), we'd be inclined to agree. But we have found some good reasons to revisit the question and look into theoretical foundations. They arise from the NIST Post-Quantum Cryptography (PQC) standardization process [35]. We analyzed the KEM submissions. We found attacks, breaking some of them, that arise from incorrect ways of turning one random oracle into many, indicating that the process is error-prone. We found other KEMs where methods other than Eq. (1) were used and whether or not they work is unclear. In some submissions, instantiations for multiple ROs were left unspecified. In others, they differed between the specification and reference implementation. Domain separation as per Eq. (1) is a method, not a goal. We identify and name the underlying goal, calling it oracle cloning-given one RO, build many, independent ones. (More generally, given m ROs, build n > m ROs.) We give a definition of what is an "oracle cloning method" and what it means for such a method to "work," in a framework we call read-only indifferentiability, a simple variant of classical indifferentiability [29].
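As a concrete illustration of the domain-separation method of Eq. (1), here is a minimal Python sketch that builds several oracles from one hash function by prefixing a one-byte index. SHA-256 merely stands in for the single RO H, and the function names are ours for illustration, not taken from any submission.

```python
import hashlib

def make_cloned_oracles(n):
    """Build n oracles H_1..H_n from one hash by one-byte domain separation.

    Each H_i(x) = H(<i> || x), where <i> is a single byte encoding i, as in Eq. (1).
    """
    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def make_oracle(i: int):
        prefix = bytes([i])  # fixed-length (one-byte) encoding of the index i
        return lambda x: H(prefix + x)

    return [make_oracle(i) for i in range(1, n + 1)]

H1, H2, H3 = make_cloned_oracles(3)
# Distinct prefixes mean the underlying hash is never evaluated on the same input
# on behalf of two different oracles:
assert H1(b"msg") != H2(b"msg")
```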
We specify and study many oracle cloning methods, giving some general results to justify (prove read-only indifferentiability of) certain classes of them. The intent is not only to validate as many NIST PQC KEMs as possible (which we do) but to specify and validate methods that will be useful beyond that. Below we begin by discussing the NIST PQC KEMs and our findings on them, and then turn to our theoretical treatment and results. NIST PQC KEMs. In late 2016, NIST put out a call for post-quantum cryptographic algorithms [35]. In the first round they received 28 submissions targeting IND-CCA-secure KEMs, of which 17 remain in the second round [37]. Recall that in a KEM (Key Encapsulation Mechanism) KE, the encapsulation algorithm KE.E takes the public key pk (but no message) to return a symmetric key K and a ciphertext C * encapsulating it, (C * , K) ←$ KE.E(pk). Given an IND-CCA KEM, one can easily build an IND-CCA PKE scheme by hybrid encryption [18], explaining the focus of standardization on the KEMs. Most of the KEM submissions (23 in the first round, 15 in the second round) are constructed from a weak (OW-CPA, IND-CPA, ...) PKE scheme using either a method from Hofheinz, Hövelmanns and Kiltz (HHK) [24] or a related method from [21,27,40]. This results in a KEM KE 4 , the subscript to indicate that it uses up to four ROs that we'll denote H 1 , H 2 , H 3 , H 4 . Results of [21,24,27,40] imply that KE 4 is provably IND-CCA, assuming the ROs H 1 , H 2 , H 3 , H 4 are independent. Next, the step of interest for us, the oracle cloning: they build the multiple random oracles via a single RO H , replacing H i with an oracle F[H ](i, ·), where we refer to the construction F as a "cloning functor," and F[H ] means that F gets oracle access to H . This turns KE 4 into a KEM KE 1 that uses only a single RO H , allowing an implementation to instantiate the latter with a single NISTrecommended primitive like SHA3-512 or SHAKE256 [36]. (In some cases, KE 1 uses a number of ROs that is more than one but less than the number used by KE 4 , which is still oracle cloning, but we'll ignore this for now.) Often the oracle cloning method (cloning functor) is not specified in the submission document; we obtained it from the reference implementation. Our concern is the security of this method and the security of the final, single-ROusing KEM KE 1 . (As above we assume the starting KE 4 is secure if its four ROs are independent.) Oracle cloning in submissions. We surveyed the relevant (first-and secondround) NIST PQC KEM submissions, looking in particular at the reference code, to determine what choices of cloning functor F was made, and how it impacted security of KE 1 . Based on our findings, we classify the submissions into groups as follows. First is a group of successfully attacked submissions. We discover and specify attacks, enabled through erroneous RO cloning, on three (first-round) submissions: BIG QUAKE [8], DAGS [7] and Round2 [22]. (Throughout the paper, firstround submissions are in gray, second-round submissions in bold.) Our attacks on BIG QUAKE and Round2 recover the symmetric key K from the ciphertext C * and public key. Our attack on DAGS succeeds in partial key recovery, recovering 192 bits of the symmetric key. These attacks are very fast, taking at most about the same time as taken by the (secret-key equipped, prescribed) decryption algorithm to recover the key. None of our attacks needs access to a decryption oracle, meaning we violate much more than IND-CCA. 
Next is submissions with questionable oracle cloning. We put just one in this group, namely NewHope [2]. Here we do not have proof of security in the ROM for the final instantiated scheme KE 1 . We do show that the cloning methods used here do not achieve our formal notion of rd-indiff security, but this does not result in an attack on KE 1 , so we do not have a practical attack either. We recommend changes in the cloning methods that permit proofs. Next is a group of ten submissions that use ad-hoc oracle cloning methodsas opposed, say, to conventional domain separation as per Eq. (1)-but for which our results (to be discussed below) are able to prove security of the final single-RO scheme. In this group are BIKE [3], KCL [44], LAC [28], Lizard [16], LOCKER [4], Odd Manhattan [38], ROLLO-II [30], Round5 [6], SABER [19] and Titanium [43]. Still, the security of these oracle cloning methods remains brittle and prone to vulnerabilities under slight changes. This classification omits 14 KEM schemes that do not fit the above framework. (For example they do not target IND-CCA KEMs, do not use HHK-style transforms, or do not use multiple random oracles.) Lessons and response. We see that oracle cloning is error-prone, and that it is sometimes done in ad-hoc ways whose validity is not clear. We suggest that oracle cloning not be left to implementations. Rather, scheme designers should give proof-validated oracle cloning methods for their schemes. To enable this, we initiate a theoretical treatment of oracle cloning. We formalize oracle cloning methods, define what it means for one to be secure, and specify a library of proven-secure methods from which designers can draw. We are able to justify the oracle cloning methods of many of the unbroken NIST PQC KEMs. The framework of read-only indifferentiability we introduce and use for this purpose may be of independent interest. The NIST PQC KEMs we break are first-round candidates, not second-round ones, and in some cases other attacks on the same candidates exist, so one may say the breaks are no longer interesting. We suggest reasons they are. Their value is illustrative, showing not only that errors in oracle cloning occur in practice, but that they can be devastating for security. In particular, the extensive and long review process for the first-round NIST PQC submissions seems to have missed these simple attacks, perhaps due to lack of recognition of the importance of good oracle cloning. Indifferentiability background. Let SS, ES be sets of functions. (We will call them the starting and ending function spaces, respectively.) A functor F: SS → ES is a deterministic algorithm that, given as oracle a function s ∈ SS, defines a function F[s] ∈ ES. Indifferentiability of F is a way of defining what it means for F[s] to emulate e when s, e are randomly chosen from SS, ES, respectively. It permits a "composition theorem" saying that if F is indifferentiable then use of e in a scheme can be securely replaced by use of F[s]. Maurer, Renner and Holenstein (MRH) [29] gave the first definition of indifferentiability and corresponding composition theorem. However, Ristenpart, Shacham and Shrimpton (RSS) [39] pointed out a limitation, namely that it only applies to single-stage games. MRH-indiff fails to guarantee security in multistage games, a setting that includes many goals of interest including security under related-key attack, deterministic public-key encryption and encryption of key-dependent messages. 
Variants of MRH-indiff [17,20,33,39] tried to address this, with limited success. Rd-indiff. Indifferentiability is the natural way to treat oracle cloning. A cloning of one function into n functions (n = 4 above) can be captured as a functor (we call it a cloning functor) F that takes the single RO s and for each i ∈ [1..n] defines a function F[s](i, ·) that is meant to emulate a RO. We will specify many oracle cloning methods in this way. We define in Sect. 4 a variant of indifferentiability we call read-only indifferentiability (rd-indiff). The simulator-unlike for reset-indiff [39]-has access to a game-maintained state st, but-unlike MRH-indiff [29]-that state is readonly, meaning the simulator cannot alter it across invocations. Rd-indiff is a stronger requirement than MRH-indiff (if F is rd-indiff then it is MRH-indiff) but a weaker one than reset-indiff (if F is reset-indiff then it is rd-indiff). Despite the latter, rd-indiff, like reset-indiff, admits a composition theorem showing that an rd-indiff F may securely substitute a RO even in multi-stage games. (The proof of RSS [39] for reset-indiff extends to show this.) We do not use resetindiff because some of our cloning functors do not meet it, but they do meet rd-indiff, and the composition benefit is preserved. General results. In Sect. 4, we define translating functors. These are simply ones whose oracle queries are non-adaptive. (In more detail, a translating functor determines from its input W a list of queries, makes them to its oracle and, from the responses and W , determines its output.) We then define a condition on a translating functor F that we call invertibility and show that if F is an invertible translating functor then it is rd-indiff. This is done in two parts, Theorems 1 and 2, that differ in the degree of invertibility assumed. The first, assuming the greater degree of invertibility, allows a simpler proof with a simulator that does not need the read-only state allowed in rd-indiff. The second, assuming the lesser degree of invertibility, depends on a simulator that makes crucial use of the read-only state. It sets the latter to a key for a PRF that is then used to answer queries that fall outside the set of ones that can be trivially answered under the invertibility condition. This use of a computational primitive (a PRF) in the indifferentiability context may be novel and may seem odd, but it works. We apply this framework to analyze particular, practical cloning functors, showing that these are translating and invertible, and then deducing their rdindiff security. But the above-mentioned results are stronger and more general than we need for the application to oracle cloning. The intent is to enable further, future applications. Analysis of oracle cloning methods. We formalize oracle cloning as the task of designing a functor (we call it a cloning functor) F that takes as oracle a function s ∈ SS in the starting space and returns a two-input function e = F[s] ∈ ES, where e(i, ·) represents the i-th RO for i ∈ [1..n]. Section 5 presents the cloning functors corresponding to some popular and practical oracle cloning methods (in particular ones used in the NIST PQC KEMs), and shows that they are translating and invertible. Our above-mentioned results allow us to then deduce they are rd-indiff, which means they are safe to use in most applications, even ones involving multi-stage games. This gives formal justification for some common oracle cloning methods. 
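To make the notion of a translating functor concrete, the sketch below composes a query translator and an answer translator around one batch of non-adaptive oracle calls. The names QT, AT and s are generic placeholders of our own choosing; the paper's formal algorithms and figures are not reproduced here.

```python
def translating_functor(QT, AT, s):
    """Build e = TF_{QT,AT}[s]: a functor whose oracle queries are non-adaptive.

    Caller-supplied placeholders (hypothetical names, not from the paper's figures):
      QT(W)          -> list of starting-space points to query, chosen from W alone
      AT(W, answers) -> the final output, computed from W and the oracle responses
      s(u)           -> the single starting oracle
    """
    def e(W):
        queries = QT(W)                    # phase 1: pre-processing, no oracle access
        answers = [s(u) for u in queries]  # phase 2: the non-adaptive oracle calls
        return AT(W, answers)              # phase 3: post-processing
    return e

# The prefix cloning functor is one instance: QT((i, X)) = [p[i] + X] and AT((i, X), V) = V[0].
```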
We now discuss some specific cloning functors that we treat in this way. The prefix (cloning) functor F pf(p) is parameterized by a fixed, public vector p such that no entry of p is a prefix of any other entry of p. Some NIST PQC submissions use a method we call output splitting. The simplest case is that we want e(1, ·), . . . , e(n, ·) to all have the same output length L. We then define e(i, X) as bits (i − 1)L + 1 through iL of the given function s applied to X. That is, receiving function s as an oracle, the splitting (cloning) functor F spl returns function e = F spl [s] defined by e(i, X) = s(X)[(i − 1)L + 1..iL]. An interesting case, present in some NIST PQC submissions, is trivial cloning: just set e(i, X) = s(X) for all X. We formalize this as the identity (cloning) functor F id defined by F id [s](i, X) = s(X). Clearly, this is not always secure. It can be secure, however, for usages that restrict queries in some way. One such restriction, used in several NIST PQC KEMs, is length differentiation: e(i, ·) is queried only on inputs of some length l i , where l 1 , . . . , l n are chosen to be distinct. We are able to treat this in our framework using the concept of working domains that we discuss next, but we warn that this method is brittle and prone to misuse. Working domains. One could capture trivial cloning with length differentiation as a restriction on the domains of the ending functions, but this seems artificial and dangerous because the implementations do not enforce any such restriction; the functions there are defined on their full domains and it is, apparently, left up to applications to use the functions in a way that does not get them into trouble. The approach we take is to leave the functions defined on their full domains, but define and ask for security over a subdomain, which we called the working domain. A choice of working domain W accordingly parameterizes our definition of rd-indiff for a functor, and also the definition of invertibility of a translating functor. Our result says that the identity functor is rd-indiff for certain choices of working domains that include the length differentiation one. Making the working domain explicit will, hopefully, force the application designer to think about, and specify, what it is, increasing the possibility of staying out of trouble. Working domains also provide flexibility and versatility under which different applications can make different choices of the domain. Working domains not being present in prior indifferentiability formalizations, the comparisons, above, of rd-indiff with these prior formalizations assume the working domain is the full domain of the ending functions. Working domains alter the comparison picture; a cloning functor which is rd-indiff on a working domain may not be even MRH-indiff on its full domain. Application to KEMs. The framework above is broad, staying in the land of ROs and not speaking of the usage of these ROs in any particular cryptographic primitive or scheme. As such, it can be applied to analyze RO instantiation in many primitives and schemes. In the full version of this paper [10], we exemplify its application in the realm of KEMs as the target of the NIST PQC designs. This may seem redundant, since an indifferentiability composition theorem says exactly that once indifferentiability of a functor has been shown, "all" uses of it are secure. However, prior indifferentiability frameworks do not consider working domains, so the known composition theorems apply only when the working domain is the full one.
(Thus the reset-indiff composition theorem of [39] extends to rd-indiff so that we have security for applications whose security definitions are underlain by either single or multi-stage games, but only for full working domains.) To give a composition theorem that is conscious of working domains, we must first ask what they are, or mean, in the application. We give a definition of the working domain of a KEM KE. This is the set of all points that the scheme algorithms query to the ending functions in usage, captured by a certain game we give. (Queries of the adversary may fall outside the working domain.) Then we give a working-domain-conscious composition theorem for KEMs that says the following. Say we are given an IND-CCA KEM KE whose oracles are drawn from a function space KE.FS. Let F: SS → KE.FS be a functor, and let KE 1 be the KEM obtained by implementing the oracles of KE via F. (So the oracles of this second KEM are drawn from the function space KE 1 .FS = SS.) Let W be the working domain of KE, and assume F is rd-indiff over W. Then KE 1 is also IND-CCA. Combining this with our rd-indiff results on particular cloning functors justifies not only conventional domain separation as an instantiation technique for KEMs, but also more broadly the instantiations in some NIST PQC submissions that do not use domain separation, yet whose cloning functors are rd-indiff over the working domain of their KEMs. The most important example is the identity cloning functor used with length differentiation. A key definitional element of our treatment that allows the above is, following [9], to embellish the syntax of a scheme (here a KEM KE) by having it name a function space KE.FS from which it wants its oracles drawn. Thus, the scheme specification must say how many ROs it wants, and of what domains and ranges. In contrast, in the formal version of the ROM in [11], there is a single, scheme-independent RO that has some fixed domain and range, for example mapping {0, 1} * to {0, 1}. This leaves a gap, between the object a scheme wants and what the model provides, that can lead to error. We suggest that, to reduce such errors, schemes specified in standards include a specification of their function space. Oracle Cloning in NIST PQC Candidates Notation. A KEM scheme KE specifies an encapsulation KE.E that, on input a public encryption key pk, returns a session key K and a ciphertext C * encapsulating it, written (C * , K) ←$ KE.E(pk). A PKE scheme PKE specifies an encryption algorithm PKE.E that, on input pk, a message M ∈ {0, 1}^PKE.ml and randomness R, deterministically returns ciphertext C ← PKE.E(pk, M; R). For neither primitive will we, in this section, be concerned with the key generation or decapsulation/decryption algorithm. We might write KE[X 1 , X 2 , . . .] to indicate that the scheme has oracle access to functions X 1 , X 2 , . . ., and correspondingly then write KE.E[X 1 , X 2 , . . .], and similarly for PKE. Design Process The literature [21,24,27,40] provides many transforms that take a public-key encryption scheme PKE, assumed to meet some weaker-than-IND-CCA notion of security we denote S pke (for example, OW-CPA, OW-PCA or IND-CPA), and, with the aid of some number of random oracles, turn PKE into a KEM that is guaranteed (proven) to be IND-CCA assuming the ROs are independent. We'll refer to such transforms as sound.
Many (most) KEMs submitted to the NIST Post-Quantum Cryptography standardization process were accordingly designed as follows: (1) First, they specify a S pke -secure public-key encryption scheme PKE. (2) Second, they pick a sound transform T and obtain KEM KE 4 = T[PKE, H 2 , H 3 , H 4 ]. (The notation is from [24]. The transforms use up to three random oracles that we are denoting H 2 , H 3 , H 4 , reserving H 1 for possible use by the PKE scheme.) We refer to KE 4 (the subscript refers to its using 4 oracles) as the base KEM, and, as we will see, it differs across the transforms. (3) Finally-the under-the-radar step that is our concern-the ROs H 1 , H 2 , H 3 , H 4 are instantiated: each H i is implemented as a construction C i applied to one or more base functions F 1 , . . . , F m (typically a single standardized hash function), yielding the final, submitted KEM KE 1 . The question now is whether the final KE 1 is secure. We will show that, for some submissions, it is not. This is true for the choices of base functions F 1 , . . . , F m made in the submission, but also if these are assumed to be ROs. It is true despite the soundness of the transform, meaning insecurity arises from poor oracle cloning, meaning choices of the constructions C i . We will then consider submissions for which we have not found an attack. In the latter analysis, we are willing to assume (as the submissions implicitly do) that F 1 , . . . , F m are ROs, and we then ask whether the final functions are "close" to independent ROs. The Base KEM We need first to specify the base KE 4 (the result of the sound transform, from step (2) above). The NIST PQC submissions typically cite one of HHK [24], Dent [21], SXY [40] or JZCWM [27] for the sound transform they use, but our examinations show that the submissions have embellished, combined or modified the original transforms. The changes do not (to best of our knowledge) violate soundness (meaning the used transforms still yield an IND-CCA KE 4 if H 2 , H 3 , H 4 are independent ROs and PKE is S pke -secure) but they make a succinct exposition challenging. We address this with a framework to unify the designs via a single, but parameterized, transform, capturing the submission transforms by different parameter choices. Figure 1 (top) shows the encapsulation algorithm KE 4 .E of the KEM that our parameterized transform associates to PKE and H 1 , H 2 , H 3 , H 4 . The parameters are the variables X, Y, Z (they will be functions of other quantities in the algorithms), a boolean D, and an integer k * . When choices of these are made, one obtains a fully specified transform. The encapsulation algorithm at the top of Fig. 1 takes input a public key pk and has oracle access to functions H 1 , H 2 , H 3 , H 4 . At line 1, it picks a random seed M of length the message length of the given PKE scheme. Boolean D being true (as it is with just one exception) means PKE.E is randomized. In that case, line 2 applies H 2 to X (the latter, determined as per the table, depends on M and possibly also on pk) and parses the output to get coins R for PKE.E and possibly (if the parameter k * ≠ 0) an additional string K'. At line 3, a ciphertext C is produced by encrypting the seed M using PKE.E with public key pk and coins R. In some schemes, a second portion of the ciphertext, Y , often called the "confirmation", is derived from X or M , using H 3 , as shown in the table, and line 4 then defines C * . Finally, H 4 is used as a key derivation function to extract a symmetric key K from the parameter Z, which varies widely among transforms. In total, 26 of the 39 NIST PQC submissions which target KEMs in either the first or second round use transforms which fall into our framework.
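Since the figure itself is not part of the text above, the following Python sketch only illustrates the general shape of this parameterized encapsulation. It assumes the common case D = true (PKE.E randomized), and the choices of X, Z, the confirmation Y and the length of K' are stand-ins that differ per submission, so this is not the pseudocode of Fig. 1 or of any specific NIST candidate.

```python
import os

def encapsulate(pk, pke_encrypt, H2, H3, H4, msg_len, coin_len, kprime_len=0, with_confirmation=True):
    """Schematic KE_4.E for an HHK-style transform (illustrative parameter choices only)."""
    M = os.urandom(msg_len)                  # line 1: random seed M of the PKE's message length
    X = M                                    # stand-in: X depends on M (and possibly pk) per the table
    out = H2(X)                              # line 2: derive coins R (and possibly K') from H2(X)
    R, K_prime = out[:coin_len], out[coin_len:coin_len + kprime_len]
    # (K_prime, when present, is used differently across transforms and is ignored in this sketch)
    C = pke_encrypt(pk, M, R)                # line 3: encrypt the seed M under coins R
    Y = H3(X) if with_confirmation else b""  # optional "confirmation" component
    C_star = (C, Y)                          # line 4: the KEM ciphertext
    Z = M                                    # stand-in: Z varies widely among transforms
    K = H4(Z)                                # key derivation
    return C_star, K
```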
The remaining schemes do not use more than one random oracle, construct KEMs without transforming PKE schemes, or target security definitions other than IND-CCA. Submissions We Break We present attacks on BIG QUAKE [8], DAGS [7], and Round2 [22]. These attacks succeed in full or partial recovery of the encapsulated KEM key from a ciphertext, and are extremely fast. We have implemented the attacks to verify them. Although none of these schemes progressed to Round 2 of the competition without significant modification, to the best of our knowledge, none of the attacks we described were pointed out during the review process. Given the attacks' superficiality, this is surprising and suggests to us that more attention should be paid to oracle cloning methods and their vulnerabilities during review. Randomness-based decryption. The PKE schemes used by BIG QUAKE and Round2 have the property that given a ciphertext C ← PKE.E(pk, M; R) and also given the coins R, it is easy to recover M , even without knowledge of the secret key. We formalize this property, saying PKE allows randomness-based decryption, if there is an (efficient) algorithm PKE.DecR such that PKE.DecR(pk, PKE.E(pk, M; R), R) = M for any public key pk, coins R and message M . This will be used in our attacks. Attack on BIG QUAKE. BIG QUAKE clones its random oracles from a single base function F , with one of them instantiated as W [F ] for a certain function W (the rejection sampling algorithm) whose details will not matter for us. The notation W [F ] means that W has oracle access to F . The attack recovers the encapsulated KEM key K from the public key pk and ciphertext C * , using only an oracle for F . Since Y is in the ciphertext, the coins R can be recovered from it. The PKE scheme allows randomness-based decryption, so we can then recover the message M underlying C using algorithm PKE.DecR. But K = H 4 (M ) = F (M ), so K can now be recovered as well. In conclusion, the specific cloning method chosen by BIG QUAKE leads to complete recovery of the encapsulated key from the ciphertext. Attack on Round2. In Round2, the oracles H 2 and H 3 are defined differently in the specification and in the reference implementation. These differences arise from differences in the way the output of a certain function W [F ] is parsed. Our attack is on the reference-implementation version of the scheme. We also need to know that the scheme sets k * so that the quantity needed by the attack can be recovered from the ciphertext. Again exploiting the fact that the PKE scheme allows randomness-based decryption, we obtain an attack that recovers the encapsulated KEM key K from the ciphertext. This attack exploits the difference between the way H 2 , H 3 are defined across the specification and implementation, which may be a bug in the implementation with regard to the parsing of W [F ](x). However, the attack also exploits dependencies between H 2 and H 3 , which ought not to exist when instantiating what are required to be distinct random oracles. Round2 was incorporated into the second-round submission Round5, which specifies a different base function and cloning functor (the latter of which uses the secure method we call "output splitting") to instantiate oracles H 2 and H 3 . This attack therefore does not apply to Round5. Attack on DAGS. If x is a byte string we let x[i] be its i-th byte, and if x is a bit string we let x i be its i-th bit. We say that a function V is an extendable-output function if it takes input a string x and an integer ℓ to return an ℓ-byte output. For a byte b, let Z(b) be the byte obtained by zeroing out its first two bits, and if y is a string of bytes then let Z'(y) = Z(y[1]) . . . Z(y[|y|]). The attack itself exploits the way DAGS clones its oracles from such a function V ; since Y is in the ciphertext, it results in a partial encapsulated-key recovery. The attack reduces the effective length of K from 64 · 8 = 512 bits to 512 − 192 = 320 bits, meaning 37.5% of the encapsulated key is recovered. Also R = H 2 (M ), so Y , as part of the ciphertext, reveals 32 bytes of R, which does not seem desirable, even though it is not clear how to exploit it for an attack. Submissions with Unclear Security For the scheme NewHope [2], we can give neither an attack nor a proof of security. However, we can show that the final functions H 2 , H 3 , H 4 produced by the cloning functor F NewHope with oracle access to a single extendable-output function V are differentiable from independent random oracles. The cloning functor F NewHope sets H 1 (x) = V (x, 128) and H 4 (x) = V (x, 32). It computes H 2 and H 3 from V using the output-splitting cloning functor. Concretely, KE 2 parses V (x, 96) as H 2 (x) || H 3 (x), where H 2 has output length 64 bytes and H 3 has output length 32 bytes. Because V is an extendable-output function, H 4 (x) will be a prefix of H 2 (x) for any string x. We do not know how to exploit this correlation to attack the IND-CCA security of the final KEM scheme KE 2 [V ], and we conjecture that, due to the structure of T 10 , no efficient attack exists. We can, however, attack the rd-indiff security of functor F NewHope , showing that the security proof for the base KEM KE 1 [H 2 , H 3 , H 4 ] does not naturally transfer to KE 2 [V ]. Therefore, in order to generically extend the provable security results for KE 1 to KE 2 , it seems advisable to instead apply appropriate oracle cloning methods. Submissions with Provable Security but Ambiguous Specification In their reference implementations, these submissions use cloning functors which we can and do validate via our framework, providing provable security in the random oracle model for the final KEM schemes. However, the submission documents do not clearly specify a secure cloning functor, meaning that variant implementations or adaptations may unknowingly introduce weaknesses. The schemes BIKE [3], KCL [44], LAC [28], Lizard [16], LOCKER [4], Odd Manhattan [38], ROLLO-II [30], Round5 [6], SABER [19] and Titanium [43] fall into this group. Length differentiation. Many of these schemes use the "identity" functor in their reference implementations, meaning that they set each final function H i (X) = F (X) for a single base function F . Reference implementations typically enforce the needed separation between these oracles by fixing the input length of every call to F . Our formalism calls this query restriction "length differentiation" and proves its security as an oracle cloning method. We also generalize it to all methods which prevent the scheme from querying any two distinct random oracles on a single input. In the following, we discuss two schemes from the group, ROLLO-II and Lizard, where ambiguity about cloning methods between the specification and reference implementation jeopardizes the security of applications using these schemes. It will be important that, like BIG QUAKE and Round2, the PKE schemes defined by ROLLO-II and Lizard allow randomness-based decryption. The scheme ROLLO-II [30], as specified, clones all of its random oracles from a single function F in a way that would admit an attack of the kind described above. In the reference implementation of ROLLO-II, however, H 2 is instantiated using a second, independent function V instead of F , which prevents the above attack. Although the random oracles H 1 , H 3 and H 4 are instantiated using the identity functor, they are never queried on the same input thanks to length differentiation.
As a result, the reference implementation of ROLLO-II is provably secure, though alternate implementations could be both compliant with the submission document and completely insecure. The relevant portions of both the specification and the reference implementation were originally found in the corresponding first-round submission (LOCKER). Lizard [16] also follows transform T 9 to produce its base KEM, and its specification likewise leaves the cloning of its oracles open to an insecure instantiation. The reference implementation of the public-key encryption schemes prevents the attack by cloning H 3 and H 4 from G via a third cloning functor, this one using the output-splitting method. Yet, the inconsistency in the choice of cloning functors between the specification and both implementations underlines that ad-hoc cloning functors may easily "get lost" in modifications or adaptations of a scheme. Submissions with Clear Provable Security Here we place schemes which explicitly discuss their methods for domain separation and follow good practice in their implementations: Classic McEliece [13], CRYSTALS-Kyber [5], EMBLEM [41], FrodoKEM [34], HQC [32], LIMA [42], NTRU-HRSS-KEM [25], NTRU Prime [14], NTS-KEM [1], RQC [31], SIKE [26] and ThreeBears [23]. These schemes are careful to account for dependencies between random oracles that are considered to be independent in their security models. When choosing to clone multiple random oracles from a single primitive, the schemes in this group use padding bytes, deploy hash functions designed to accommodate domain separation, or impose restrictions on the length of the inputs which are codified in the specification. These explicit domain separation techniques can be cast in the formalism we develop in this work. HQC and RQC are unique among the PQC KEM schemes in that their specifications warn that the identity functor admits key-recovery attacks. As protection, they recommend that H 2 and H 3 be instantiated with unrelated primitives. Signatures. Although the main focus of this paper is on domain separation in KEMs, we wish to note that these issues are not unique to KEMs. At least one digital signature scheme in the second round of the NIST PQC competition, MQDSS [15], models multiple hash functions as independent random oracles in its security proof, then clones them from the same primitive without explicit domain separation. We have not analyzed the NIST PQC digital signature schemes' security to see whether more subtle domain separation is present, or whether oracle collisions admit the same vulnerabilities to signature forgery as they do to session key recovery. This does, however, highlight that the problem of random oracle cloning is pervasive among more types of cryptographic schemes. Preliminaries Basic notation. By [i..j] we abbreviate the set {i, . . . , j}, for integers i ≤ j. If x is a vector then |x| is its length (the number of its coordinates), x[i] is its i-th coordinate, and {x[i] : i ∈ [1..|x|]} is the set of its coordinates. The empty vector is denoted (). If S is a set, then S * is the set of vectors over S, meaning the set of vectors of any (finite) length with coordinates in S. Strings are identified with vectors over {0, 1}, so that if x ∈ {0, 1} * is a string then |x| is its length, x[i] is its i-th bit, and x[i..j] is the substring from its i-th to its j-th bit (inclusive), for i ≤ j. The empty string is ε. If x, y are strings then we write x ⊑ y to indicate that x is a prefix of y. If S is a finite set then |S| is its size (cardinality). We let y ←$ S denote picking an element of S uniformly at random and assigning it to y. We use the code-based game-playing framework of [12]. A game G (see Fig. 2 for an example) starts with an init procedure, followed by a non-negative number of additional procedures, and ends with a fin procedure. Procedures are also called oracles. Execution of adversary A with game G consists of running A with oracle access to the game procedures, with the restrictions that A's first call must be to init, its last call must be to fin, and it can call these two procedures at most once. The output of the execution is the output of fin. We write Pr[G(A)] to denote the probability that the execution of game G with adversary A results in the output being the boolean true. Note that our adversaries have no output. The role of what in other treatments is the adversary output is, for us, played by the query to fin. We adopt the convention that the running time of an adversary is the worst-case time to execute the game with the adversary, so the time taken by game procedures (oracles) to respond to queries is included. Functions. As usual g: D → R indicates that g is a function taking inputs in the domain set D and returning outputs in the range set R. We may denote these sets by Dom(g) and Rng(g), respectively. We say that g: Dom(g) → Rng(g) has output length ℓ if Rng(g) = {0, 1}^ℓ. We say that g is a single output-length (sol) function if there is some ℓ such that g has output length ℓ and also the set Dom(g) is length closed. We let SOL(D, ℓ) denote the set of all sol functions g: D → {0, 1}^ℓ. Read-Only Indifferentiability of Translating Functors We define read-only indifferentiability (rd-indiff) of functors. Then we define a class of functors called translating, and give general results about their rd-indiff security. Later we will apply this to analyze the security of cloning functors, but the treatment in this section is broader and, looking ahead to possible future applications, more general than we need for ours. Functors and Read-Only Indifferentiability A random oracle, formally, is a function drawn at random from a certain space of functions. A construction (functor) is a mapping from one such space to another. We start with definitions for these. Let SS be a function space that we call the starting space. Let ES be another function space that we call the ending space. We imagine that we are given a function s ∈ SS and want to construct a function e ∈ ES. We refer to the object doing this as a functor. Formally a functor is a deterministic algorithm F that, given as oracle a function s ∈ SS, returns a function F[s] ∈ ES. We write F: SS → ES to emphasize the starting and ending spaces of functor F. Rd-indiff. We want the ending function to "emulate" a random function from ES. Indifferentiability is a way of defining what this means. The original definition of MRH [29] has been followed by many variants [17,20,33,39]. Here we give ours, called read-only indifferentiability, which implies composition not just for single-stage games, but even for multi-stage ones [20,33,39]. Let ES and SS be function spaces, and let F: SS → ES be a functor. Our variant of indifferentiability mandates a particular, strong simulator, which can read, but not write, its (game-maintained) state, so that this state is a static quantity.
Formally a read-only simulator S for F specifies a setup algorithm S.Setup which outputs the state, and a deterministic evaluation algorithm S.Ev that, given as oracle a function e ∈ ES, and given a string st ∈ OUT(S.Setup) (the read-only state), defines a function S.Ev[e](st, ·): Dom(SS) → Rng(SS). Rd-indiff is then measured via a game in which a distinguisher, given oracles priv and pub, must decide whether it is interacting with the functor applied to a random starting function, or with a random ending function e 0 together with the simulator. The working domain W ⊆ Dom(ES), a parameter of the definition, is included as a way to allow the notion of read-only indifferentiability to provide results for oracle cloning methods like length differentiation whose security depends on domain restrictions. The S.Ev algorithm is given direct access to e 0 , rather than access to priv as in other definitions, to bypass the working domain restriction, meaning it may query e 0 at points in Dom(ES) that are outside the working domain. All invocations of S.Ev[e 0 ] are given the same (static, game-maintained) state st as input, but S.Ev[e 0 ] cannot modify this state, which is why it is called read-only. Note init does not return st, meaning the state is not given to the distinguisher. Discussion. To compare rd-indiff to other indiff notions, we set W = Dom(ES), because prior notions do not include working domains. Now, rd-indiff differs from prior indiff notions because it requires that the simulator state be just the immutable string chosen at the start of the game. In this regard, rd-indiff falls somewhere between the original MRH-indiff [29] and reset-indiff [39] in the sense that our simulator is more restricted than in the first and less than in the second. A construction (functor) that is reset-indiff is thus rd-indiff, but not necessarily vice-versa, and a construct that is rd-indiff is MRH-indiff, but not necessarily vice-versa. Put another way, the class of rd-indiff functors is larger than the class of reset-indiff ones, but smaller than the class of MRH-indiff ones. Now, RSS's proof [39] that reset-indiff implies security for multi-stage games extends to rd-indiff, so we get this for a potentially larger class of functors. This larger class includes some of the cloning functors we have described, which are not necessarily reset-indiff. Translating Functors Translating functors. We focus on a class of functors that we call translating. This class includes natural and existing oracle cloning methods, in particular all the effective methods used by NIST KEMs, and we will be able to prove general results for translating functors that can be applied to the cloning methods. A translating functor T: SS → ES is a functor that, with oracle access to s and on input W ∈ Dom(ES), non-adaptively calls s on a fixed number of inputs, and computes its output T[s](W ) from the responses and W . Its operation can be split into three phases which do not share state: (1) a pre-processing phase which chooses the inputs to s based on W alone, (2) a query phase in which s is called on those inputs, and (3) a post-processing phase which computes the output from W and the responses. A query translator QT implements phase (1) and an answer translator AT implements phase (3); the oracle calls of phase (2) are made in between, so that this implements a translating functor as we described. Formally we say that a functor F: SS → ES is translating if there exists a (SS, ES)-query translator QT and a (SS, ES)-answer translator AT such that F = TF QT,AT . Inverses. So far, query and answer translators may have just seemed an unduly complex way to say that a translating oracle construction is one that makes non-adaptive oracle queries. The purpose of making the query and answer translators explicit is to define invertibility, which determines rd-indiff security. Let SS and ES be function spaces.
Let QTI be a function (deterministic algorithm) that takes an input U ∈ Dom(SS) and returns a vector W over Dom(ES). We allow QTI to return the empty vector (), which is taken as an indication of failure to invert. Define the support of QTI, denoted sup(QTI), to be the set of all U ∈ Dom(SS) such that QTI(U ) ≠ (). Say that QTI has full support if sup(QTI) = Dom(SS), meaning there is no U ∈ Dom(SS) such that QTI(U ) = (). Let ATI be a function (deterministic algorithm) that takes U ∈ Dom(SS) and a vector Y over Rng(ES) to return an output in Rng(SS). Given a function e ∈ ES, we define the function P[e] QTI,ATI : Dom(SS) → Rng(SS) by P[e] QTI,ATI (U ) = ATI(U, Y), where Y is the vector over Rng(ES) whose j-th entry is e applied to the j-th entry of QTI(U ). We require that the function P[e] QTI,ATI belong to the starting space SS. Now let QT be a (SS, ES)-query translator and AT a (SS, ES)-answer translator. Let W ⊆ Dom(ES) be a working domain. We say that QTI, ATI are inverses of QT, AT over W if two conditions are true. The first is that for all e ∈ ES and all W ∈ W we have TF QT,AT [P[e] QTI,ATI ](W ) = e(W ). (2) This equation needs some parsing. Fix a function e ∈ ES in the ending space. Then s = P[e] QTI,ATI is in SS. Recall that the functor F = TF QT,AT takes a function s in the starting space as an oracle and defines a function F[s] in the ending space. Equation (2) is asking that F[s], for this particular s, is identical to the original function e, on the working domain W. The second condition (for invertibility) is that if U ∈ {QT(W )[i] : W ∈ W}-that is, U is an entry of the vector returned by QT on some input W -then QTI(U ) ≠ (). Note that if QTI has full support then this condition is already true, but otherwise it is an additional requirement. We say that (QT, AT) is invertible over W if there exist QTI, ATI such that QTI, ATI are inverses of QT, AT over W, and we say that a translating functor TF QT,AT is invertible over W if (QT, AT) is invertible over W. In the rd-indiff context, function P[e] QTI,ATI will be used by the simulator. Roughly, we try to set S.Ev[e](st, U ) = P[e] QTI,ATI (U ). But we will only be able to successfully do this for U ∈ sup(QTI). The state st is used by S.Ev to provide replies when U ∉ sup(QTI). Equation (2) is a correctness condition. There is also a security metric. Consider the translation indistinguishability game G ti SS,ES,QTI,ATI of Fig. 3. Define the ti-advantage of adversary B via Adv ti SS,ES,QTI,ATI (B) = 2 Pr[G ti SS,ES,QTI,ATI (B)] − 1. In reading the game, recall that () is the empty vector, whose return by QTI represents an inversion error. TI-security is thus asking that if e is randomly chosen from the ending space, then the output of P[e] QTI,ATI on an input U is distributed like the output on U of a random function in the starting space, but only as long as QTI(U ) was non-empty. We will see that the latter restriction creates some challenges in simulation whose resolution exploits using read-only state. We say that (QTI, ATI) provides perfect translation indistinguishability if Adv ti SS,ES,QTI,ATI (B) = 0 for all B, regardless of the running time of B. Additionally we of course ask that the functions QT, AT, QTI, ATI all be efficiently computable. In an asymptotic setting, this means they are polynomial time. In our concrete setting, they show up in the running-time of the simulator or constructed adversaries. (The latter, as per our conventions, being the time for the execution of the adversary with the overlying game.) Rd-Indiff of Translating Functors We now move on to showing that invertibility of a pair (QT, AT) implies rd-indifferentiability of the translating functor TF QT,AT . We start with the case that QTI has full support.
Let ℓ be the maximum output length of QT. If A makes q priv , q pub queries to its priv, pub oracles, respectively, then B makes ℓ · q priv + q pub queries to its pub oracle. Adversary B is playing game G ti SS,ES,QTI,ATI . Using its pub oracle, it presents the interface of G 0 and G 1 to A. In order to simulate the priv oracle, B runs the translating functor itself, answering the functor's oracle queries via pub. The simulator (Fig. 6) uses its read-only state to store a key st for G, then uses G(st, ·) to answer queries outside the support sup(QTI). We introduce this primitive because it allows multiple instantiations. The simplest is that it is a PRF, which happens when it does not use its oracle. In that case the simulator is using a computational primitive (a PRF) in the indifferentiability context, which seems novel. Another instantiation prefixes st to the input and then invokes e to return the output. This works for certain choices of ES, but not always. Note G is used only by the simulator and plays no role in the functor. The proof of the following is in [10]. Let ℓ be the maximum output length of QT and ℓ' the maximum output length of QTI. If A makes q priv , q pub queries to its priv, pub oracles, respectively, then B makes ℓ·q priv + q pub queries to its pub oracle and C makes at most ℓ·ℓ'·q priv + q pub queries to its RO oracle and at most q pub + ℓ · q priv queries to its FnO oracle. The running times of B, C are about that of A. Analysis of Cloning Functors Section 4 defined the rd-indiff metric of security for functors and gave a framework to prove rd-indiff of translating functors. We now apply this to derive security results about particular, practical cloning functors. Arity-n function spaces. The cloning functors apply to function spaces where a function specifies sub-functions, corresponding to the different random oracles we are trying to build. Formally, a function space FS is said to have arity n if its members are two-argument functions f whose first argument is an integer i ∈ [1..n]. This is the most common case for practical uses of ROs. To explain, access to n random oracles is modeled as access to a two-argument function f drawn at random from FS, written f ←$ FS. If FS has sol subspaces, then for each i, the function f i is a sol function, with a certain domain and output length depending only on i. All such functions are included. This ensures input independence as we defined it earlier. Thus if f ←$ FS, then for each i and any distinct inputs to f i , the outputs are independently distributed. Also functions f 1 , . . . , f n are independently distributed when f ←$ FS. Put another way, we can identify FS with FS 1 × · · · × FS n . Domain-separating functors. We can now formalize the domain separation method by seeing it as defining a certain type of (translating) functor. Let the ending space ES be an arity n function space. Let F: SS → ES be a translating functor and QT, AT be its query and answer translators, respectively. Assume QT returns a vector of length 1 and that AT((i, X), V ) simply returns V [1]. We say that F is domain separating if the following is true: QT(i 1 , X 1 ) ≠ QT(i 2 , X 2 ) for any (i 1 , X 1 ), (i 2 , X 2 ) ∈ Dom(ES) that satisfy i 1 ≠ i 2 . To explain, recall that the ending function is obtained as e ← F[s], and defines e i for i ∈ [1..n]. Function e i takes input X, lets (u) ← QT(i, X) and returns s(u). The domain separation requirement is that if (u i ) ← QT(i, X i ) and (u j ) ← QT(j, X j ), then i ≠ j implies u i ≠ u j , regardless of X i , X j . Thus if i ≠ j then the inputs to which s is applied are always different.
The domain of s has been "separated" into disjoint subsets, one for each i. Practical cloning functors. We show that many popular methods for oracle cloning in practice, including ones used in NIST KEM submissions, can be cast as translating functors. In the following, the starting space SS = SOL({0, 1} * , OL(SS)) is assumed to be a sol function space with domain {0, 1} * and an output length denoted OL(SS). The ending space ES is an arity n function space that has sol subspaces. Let p be a vector of strings. We require that it be prefix-free, by which we mean that i ≠ j implies that p[i] is not a prefix of p[j]. Entries of this vector will be used as prefixes to enforce domain separation. One example is that the entries of p are distinct strings all of the same length. Another is that p[i] = E(i) for some prefix-free code E like a Huffman code. The prefix functor F pf(p) : SS → ES returns the ending function e = F pf(p) [s], defining e i for i ∈ [1..n] as follows (this assumes OL i (ES) = OL(SS) for all i). Function e i takes input X, prefixes p[i] to X to get a string X', applies the starting function s to X' to get Y , and returns Y as the value of e i (X). We claim that F pf(p) is a translating functor that is also a domain-separating functor as per the definitions above. To see this, define query translator QT pf(p) by QT pf(p) (i, X) = (p[i] || X), the 1-vector whose sole entry is p[i] || X. The answer translator AT pf(p) , on input (i, X), V , returns V [1], meaning it ignores i, X and returns the sole entry in its 1-vector V . We proceed to the inverses, which are defined as follows: QTI pf(p) (U ) returns the 1-vector ((i, X)) if U can be parsed as p[i] || X for some i and some X (this i is unique because p is prefix-free), and returns () otherwise; ATI pf(p) (U, Y) returns Y[1]. The working domain is the full one: W = Dom(ES). We now verify Eq. (2). Let QT, QTI, AT, ATI be QT pf(p) , QTI pf(p) , AT pf(p) , ATI pf(p) , respectively. Then for all W = (i, X) ∈ Dom(ES), we have TF QT,AT [P[e] QTI,ATI ](i, X) = P[e] QTI,ATI (p[i] || X) = ATI(p[i] || X, (e(i, X))) = e(i, X). We observe that (QTI pf(p) , ATI pf(p) ) provides perfect translation indistinguishability. Since QTI pf(p) does not have full support, we can't use Theorem 1, but we can conclude rd-indiff via Theorem 2. Identity. Many NIST PQC submissions simply let e i (X) = s(X), meaning the ending functions are identical to the starting one. This is captured by the identity functor F id : SS → ES, defined by F id [s](i, X) = s(X). This again assumes OL i (ES) = OL(SS) for all i ∈ [1..n], meaning all ending functions have the same output length as the starting function. This functor is translating, via QT id (i, X) = (X) and AT id ((i, X), V ) = V [1]. It is however not, at least in general, domain separating. Clearly, this functor is not, in general, rd-indiff. To make secure use of it nonetheless, applications can restrict the inputs to the ending functions to enforce a virtual domain separation, meaning, for i ≠ j, the schemes never query e i and e j on the same input. One way to do this is length differentiation. Here, for i ∈ [1..n], the inputs to which e i is applied all have the same length l i , and l 1 , . . . , l n are distinct. Length differentiation is used in the following NIST PQC submissions: BIKE, EMBLEM, HQC, RQC, LAC, LOCKER, NTS-KEM, SABER, Round2, Round5, Titanium. There are, of course, many other similar ways to enforce the virtual domain separation. There are two ways one might capture this with regard to security. One is to restrict the domain Dom(ES) of the ending space. For example, for length differentiation, we would require that there exist distinct l 1 , . . . , l n such that for all (i, X) ∈ Dom(ES) we have |X| = l i . For such an ending space, the identity functor would provide security. The approach we take is different.
We don't restrict the domain of the ending space, but instead define security with respect to a subdomain, which we called the working domain, where the restriction is captured. This, we believe, is better suited for practice, for a few reasons. One is that a single implementation of the ending functions can be used securely in different applications that each have their own working domain. Another is that implementations of the ending functions do not appear to enforce any restrictions, leaving it up to applications to figure out how to securely use the functions. In this context, highlighting the working domain may help application designers think about what is the working domain in their application and make this explicit, which can reduce error. But we warn that the identity functor approach is more prone to misuse and in the end more dangerous and brittle than some others. As per the above, inverses can only be given for certain working domains. Let us say that W ⊆ Dom(ES) separates domains if for all (i 1 , X 1 ), (i 2 , X 2 ) ∈ W satisfying i 1 ≠ i 2 , we have X 1 ≠ X 2 . Put another way, for any (i, X) ∈ W there is at most one j such that X ∈ Dom j (ES). We assume an efficient inverter for W. This is a deterministic algorithm In W that on input X ∈ {0, 1} * returns the unique i such that (i, X) ∈ W if such an i exists, and otherwise returns ⊥. (The uniqueness is by the assumption that W separates domains.) As an example, for length differentiation, we pick some distinct integers l 1 , . . . , l n such that {0, 1}^l_i ⊆ Dom i (ES) for all i ∈ [1..n]. We then let W = {(i, X) ∈ Dom(ES) : |X| = l i }. This separates domains. Now we can define In W (X) to return the unique i such that |X| = l i if |X| ∈ {l 1 , . . . , l n }, otherwise returning ⊥. The inverses are then defined using In W , as follows, where U ∈ Dom(SS) = {0, 1} * : QTI id (U ) returns the 1-vector ((In W (U ), U )) if In W (U ) ≠ ⊥, and returns () otherwise; ATI id (U, Y) returns Y[1]. The correctness condition of Eq. (2) over W is met, and since In W (X) never returns ⊥ for X ∈ W, the second condition of invertibility is also met. (QTI id , ATI id ) provides perfect translation indistinguishability. Since QTI id does not have full support, we can't use Theorem 1, but we can conclude rd-indiff via Theorem 2. Output splitting. The splitting functor F spl is also translating, via QT spl (i, X) = (X) and AT spl ((i, X), V ) returning the i-th block of V [1], that is, bits (i − 1)L + 1 through iL. Its inverses can be taken to be QTI spl (U ) = ((1, U ), . . . , (n, U )) and ATI spl (U, Y) = Y[1] || · · · || Y[n]. The correctness condition of Eq. (2) over W = Dom(ES) is met, and (QTI spl , ATI spl ) provides perfect translation indistinguishability. Since QTI spl has full support, we can conclude rd-indiff via Theorem 1. Rd-indiff of NewHope. We next demonstrate how read-only indifferentiability can highlight subpar methods of oracle cloning, using the example of NewHope [2]. The base KEM KE 1 defined in the specification of NewHope relies on just two random oracles, G and H 4 . (The base scheme defined by transform T 10 , which uses 3 random oracles H 2 , H 3 , and H 4 , is equivalent to KE 1 and can be obtained by applying the output-splitting cloning functor to instantiate H 2 and H 3 with G. NewHope's security proof explicitly claims this equivalence [2].) The final KEM KE 2 instantiates these two functions through SHAKE256 without explicit domain separation, setting H 4 (X) = SHAKE256(X, 32) and G(X) = SHAKE256(X, 96). For consistency with our results, which focus on sol function spaces, we model SHAKE256 as a random member of a sol function space SS with some very large output length L, and assume that the adversary does not request more than L bits of output from SHAKE256 in a single call. We let ES be the arity-2 sol function space defining sub-functions G and H 4 .
In this setting, the cloning functor F NewHope : SS → ES used by NewHope is defined by F NewHope [s](1, X) = s(X)[1..256] and F NewHope [s](2, X) = s(X)[1..768]. We will show that this functor cannot achieve rd-indiff for the given oracle spaces and the working domain W = {0, 1} * . In Fig. 7, we give an adversary A which has high advantage in the rd-indiff game G rd-indiff FNewHope,SS,ES,W,S for any indifferentiability simulator S. When b = 1 in the game, priv and pub are implemented from a single starting function via F NewHope , so the value returned for (1, X) is a 256-bit prefix of the value returned for (2, X); the consistency check made by A therefore always succeeds, so adversary A will always call fin on the bit 1 and win. When b = 0 in game G rd-indiff FNewHope,SS,ES,W,S , the two strings y 1 = e 0 (1, X) and y 2 = e 0 (2, X) will have different 256-bit prefixes, except with probability ε = 2^−256 . Therefore, when A queries pub(0), the simulator's response y can share the prefix of at most one of the two strings y 1 and y 2 . Its response must be independent of d, which is not chosen until after the query to pub, so Pr[y[1..256] = y d [1..256]] ≤ 1/2 + ε, regardless of the behavior of S. Hence, A breaks the indifferentiability of F NewHope with probability roughly 1/2, rendering NewHope's random oracle functor differentiable. The implication of this result is that NewHope's implementation differs noticeably from the model in which its security claims are set, even when SHAKE256 is assumed to be a random oracle. This admits the possibility of hash function collisions and other sources of vulnerability that are not eliminated by the security proof. To claim provable security for NewHope's implementation, further justification is required to argue that these potential collisions are rare or unexploitable. We do not claim that an attack on read-only indifferentiability implies an attack on the IND-CCA security of NewHope, but it does highlight a gap that needs to be addressed. Read-only indifferentiability constitutes a useful tool for detecting such gaps and measuring the strength of various oracle cloning methods.
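The prefix correlation underlying this distinguisher is easy to observe directly, since SHAKE256 is an extendable-output function: a shorter output on a given input is a prefix of a longer output on the same input. The snippet below is only a sanity check of that property (and of how output splitting would avoid it), not an attack on the full KEM.

```python
import hashlib

X = b"some fixed input"
h4 = hashlib.shake_256(X).digest(32)   # 32-byte output, as in H_4(X) = SHAKE256(X, 32)
g  = hashlib.shake_256(X).digest(96)   # 96-byte output, as in G(X)  = SHAKE256(X, 96)

# Because SHAKE256 is an extendable-output function, the 32-byte value is a prefix of the
# 96-byte one, so H_4 and G are visibly correlated; two independent random oracles would
# agree on a 256-bit prefix only with probability about 2^-256.
assert g[:32] == h4

# Output splitting avoids this: carve disjoint slices of one long output instead of nested prefixes.
long_out = hashlib.shake_256(X).digest(128)
h4_split, g_split = long_out[:32], long_out[32:128]
```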
2020-03-19T20:15:54.031Z
2020-03-25T00:00:00.000
{ "year": 2020, "sha1": "c24035074ace8a2c03125d3813d8dc405483376f", "oa_license": null, "oa_url": "https://www.research-collection.ethz.ch/bitstream/20.500.11850/392433/2/main-llncs.pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "52c4bbbf0eba3ea71402352a036526aab6e3e5f8", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
14517376
pes2o/s2orc
v3-fos-license
Long-Distance Dispersal after the Last Glacial Maximum (LGM) Led to the Disjunctive Distribution of Pedicularis kansuensis (Orobanchaceae) between the Qinghai-Tibetan Plateau and Tianshan Region Quaternary climate fluctuations have profoundly affected the current distribution patterns and genetic structures of many plant and animal species in the Qinghai-Tibetan Plateau (QTP) and adjacent mountain ranges, e.g. Tianshan (TSR), Altay, etc. In this greater area disjunct distributions are prominent but have nevertheless received little attention with respect to the historical processes involved. Here, we focus on Pedicularis kansuensis to test whether the current QTP and TSR disjunction is the result of a recent Holocene range expansion involving dispersal across arid land bridge(s) or a Pleistocene range fragmentation involving persistence in refugia. Two chloroplast DNA spacers were sequenced for 319 individuals from 34 populations covering the entire distribution range of this species in China. We found a total of 17 haplotypes of which all occurred in the QTP, and only five in the TSR. Overall genetic diversity was high (HT = 0.882, HS = 0.559) and higher in the QTP than in the TSR. Genetic differentiation among regions and populations was relatively low (GST = 0.366) and little evidence for a phylogeographic pattern emerged. The divergence times for the four main lineages could be dated to the early Pleistocene. Surprisingly, the two ubiquitous haplotypes diverged just before or around the Last Glacial Maximum (LGM) and were found in different phylogenetic lineages. The Species Distribution Model suggested a disappearance of P. kansuensis from the TSR during the LGM in contrast to a relatively constant potential distribution in the QTP. We conclude that P. kansuensis colonized the TSR after the LGM. The improbable long-distance dispersal by wind or water across arid land seed flow may well have had birds or men as vector. Introduction Tectonic events and climate fluctuations have profoundly shaped the current distribution patterns and genetic structures of many plant and animal species in temperate zones of the Northern Hemisphere [1][2][3][4]. Since the early Cenozoic, the geology and topography of East Asia underwent dramatic changes. Most notably is the uplift of the Qinghai-Tibetan Plateau (QTP) and adjacent mountain ranges, e.g. Tianshan (TSR) and Altay Mts., which entailed pronounced climatic and environmental dynamics in both space and time [5][6][7] and a strong effect on landscape and vegetation [8,9]. One consequence is the intense aridification of the Tarim Basin in northwestern China [10][11][12][13], resulting in an arid area of about 6.00×10 5 km 2 between the QTP and the TSR [14]. The present day distribution of plant and animal species is strongly influenced by these historical processes which potentially and iteratively led to range shifts, range expansion, range contraction and/or range fragmentation. In this context, a disjunct distribution could either be the result of long-distance dispersal from a source area into a suitable new area [15][16][17][18] or the consequence of disruption of the previously continuous distribution range [19,20]. Phylogenetic relationship [16][17][18] and genetic diversity within a given species [21][22][23][24] are two aspects frequently considered in order to unravel the historical processes involved. 
Theoretical and empirical evidence suggests that, when the disjunction is due to recent long-distance dispersal, individuals from separated regions will cluster together in a phylogenetic tree [16][17][18]. Additionally, the regions are characterized by different levels of genetic diversity [23,25] with the newly colonized region harboring lower levels. By contrast, in the case of range fragmentation, individuals from different regions will cluster by region [18,23] while levels of genetic diversity remain comparable [19,20,24]. Obviously, the level and spatial distribution of genetic diversity within a species is also dependent on the combination of life-history traits, e.g. longevity, breeding system [26,27], which can mask the genetic imprint of historical processes. Although numerous phylogeographical studies have been carried out in either the QTP [4,28,29] or the greater Tianshan-Altay region [19,21,[30][31][32][33][34], investigations addressing the historical processes that led to disjunct distributions are scarce. The limited data available show that plant species had low genetic diversity in Tianshan-Altay region, indicating a rapid colonization from the QTP and strong founder effects in the Tianshan-Altay region [35][36][37]. However, these studies included either samples from only one Altay population, i.e. the congeneric Pedicularis longiflora [35], or plant species less representative of highland terrestrial plants, i.e. the fern Lepisorus clathratus and the aquatic Hippuris vulgaris [36,37]. Thus, it remains questionable whether the current QTP and TSR disjunctions are the result of a recent Holocene range expansion involving dispersal across arid land bridge(s) or a Pleistocene range fragmentation involving persistence in refugia. Here, we focus on Pedicularis kansuensis Maxim. (Orobanchaceae), a highland plant species widespread in western China and Nepal, with a disjunct distribution between the QTP and the TSR but not known from the Altay. This species was previously mis-identified as P. verticillata in the TSR [38-41], but clarified to be P. kansuensis based on morphological and molecular evidence [42]. It is an annual or facultative biennial hemiparasitic herb, occurring in moist gravelly ground or grassy slopes in subalpine zone at elevations between 1,800 and 4,600 m [43]. In nearly twenty years, P. kansuensis has been reported to rapidly expand in population sizes and become weedy in Bayanbulak Grassland of the Tianshan Mts., which has caused great loss of herbage yield and threatened the local livestock industry [39,44]. In the present study we aim to unravel the historical processes that led to the current disjunctive distribution of P. kansuensis. Given the great extent of today's arid Tarim Basin which separate the QTP and the TSR we propose two alternative scenarios: (a) P. kansuensis survived the LGM in situ or in refugia in the respective foothills. Here we would expect a strong phylogeographic signal, existence of unique regional haplotypes and comparable high levels of genetic diversity. (b) P. kansuensis colonized the TSR from the northern fringes of the QTP via long-distance seed dispersal after the LGM. Under this scenario we would expect to find a certain degree of genetic similarity between the source and sink regions, i.e. shared haplotypes, but not a strong phylogeographic signal. Also, the sink region would be characterized by lower levels of genetic diversity and evidence for rapid population expansion should be detectable. 
Ethics statement This study was conducted in accordance with the laws of the People's Republic of China. No specific permits were required for accessing the sampling locations. P. kansuensis is not an endangered or protected species. Plant sampling Leaf tissue of P. kansuensis was collected from 34 populations across the Qinghai-Tibetan Plateau (QTP) and the Tianshan region (TSR) in western China (Table 1). Three to 16 individuals growing at least 20 m apart were sampled in each georeferenced population rendering a total of 319 individuals. Fresh leaves were dried in silica gel and stored at room temperature until DNA extraction. For all populations voucher specimens were deposited at the Herbarium of the Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Xinjiang, China (XJBI)(S1 Table). [50] scripted in R ver. 3.2.3 [51]. No significant differences between the curtailed and the full data set were found so that we decided to proceed with analysis of the full dataset. SAMOVA ver. 1.0 was used to investigate the spatial component in the dataset by defining K groups of populations that are geographically homogeneous and genetically differentiated from each other (10,000 iterations; range of 2 K 10) [52]. The result file, a pairwise cpDNA F ST distance matrix, was imported into BARRIER [53] which incorporates Monmonier's maximum- difference algorithm [54] to visualize the geographic location of genetic breaks among (groups of) populations. Using this method, we divided the distribution range of P. kansuensis populations into two groups, Tianshan group (TSG) and QTP group (QTPG). Furthermore, isolation by distance (IBD) [55], the correlation between genetic and geographical distance was checked with a Mantel test [56] using ALLELES IN SPACE (AIS) [57]. Genetic structure was assessed with an Analysis of Molecular Variance (AMOVA) [58] in ARLEQUIN 3.5 [49] with significance tests based on 10,000 permutations. Parameters of within-population gene diversity (H S ), total gene diversity (H T ), and genetic differentiation (G ST , N ST ) were estimated according to Pons and Petit [59]. Significant phylogeographic structure was inferred by testing whether N ST was significantly greater than G ST using U-statistic. If N ST is significantly higher than G ST , closely related haplotypes occur more often in the same populations than less closely related haplotypes, indicating the presence of phylogeographical structure [59]. Phylogenetic relationship and divergence time Phylogenetic relationships among P. kansuensis cpDNA haplotypes were analyzed using Neighbor-joining (NJ), Maximum parsimony (MP) and Maximum Likelihood (ML) algorithms implemented in MEGA ver. 6.0 [60], with P. violascens and P. verticillata as outgroups. Gaps in sequences were treated as the fifth character state. We constructed MP trees using a heuristic search with 1,000 random additions of sequences and tree-bisection reconnection (TBR) branch swapping. The ML and MP trees were computed with 1,000 bootstrap replicates in Kimura's two-parameter model. Furthermore, NETWORK ver. 4.6 [61] was used to construct median-joining networks to detect genealogical relationships among the haplotypes of P. kansuensis. The gaps were treated as a single mutation event. Divergence times for different P. kansuensis lineages were estimated through a Bayesian approach implemented in BEAST ver. 1.8.1 [62]. In running MODELTEST ver. 
3.7 [63], the generalized time reversible (GTR) substitution model with Gamma-distributed site heterogeneity was selected as the best-fit nucleotide substitution model for our dataset of aligned sequences. Due to a lack of fossils of P. kansuensis or its congeneric relatives, substitution rates were used to approximate divergence times. For most angiosperms, the cpDNA substitution rates are estimated to vary between 1.0 and 3.0×10^-9 substitutions per site per year (s/s/y), while a rate of 8.24×10^-9 s/s/y has been reported for trnL-trnF [64]. Because P. kansuensis is an annual or biennial herb and trnL-trnF was used in this study, the values of 3.0×10^-9 and 8.24×10^-9 s/s/y were specified in BEAST, together with an uncorrelated lognormal relaxed molecular clock. The Markov chain Monte Carlo (MCMC) chains were run for 10,000,000 generations, sampling every 1,000 generations. The combined parameters were checked in TRACER ver. 1.5 [65]. The Bayesian trees were combined and annotated by TREE ANNOTATOR ver. 1.8.1 (part of the BEAST 1.8.1 package).

Population demographic analyses. To investigate whether populations or groups of populations experienced any population expansion, Tajima's D [66] and Fu & Li's D* [67] were calculated using ARLEQUIN 3.5 [49]. In addition, mismatch distribution analysis was also performed in ARLEQUIN with 1,000 parametric bootstrap replicates. The sum of squared deviations (SSD) between observed and expected mismatch distributions was computed, and P values were calculated as the proportion of simulations producing a larger SSD than the observed SSD. The raggedness index (HRag) and its significance were also calculated to quantify the smoothness of the observed mismatch distribution [68].

Species distribution modelling. Lastly, in order to estimate the current potential distribution range of P. kansuensis as well as that during the Last Glacial Maximum (LGM; 21 ka before present), a species distribution model (SDM) was computed using the maximum entropy algorithm implemented in MAXENT 3.3.1 [69]. Present-day climate data available from the WorldClim database (34 stations, 19 bioclimatic variables, 2.5 arcmin resolution) [70] (available at http://www.worldclim.org/download), along with the 34 georeferenced occurrence records collected by ourselves, were used to estimate the present potential distribution range. The community climate system model (CCSM) [71] was then employed to generate the potential distribution during the LGM. To test the reliability of the results, goodness of fit between the model and the training data was assessed by analyzing the area under the receiver operating characteristic curve (AUC). Finally, a jackknife test was performed to measure the relative importance of climatic variables on the occurrence prediction for every distribution model.

Chloroplast variation and haplotype distribution. The aligned sequences of trnL-trnF and rpl32-trnL were 830 bp and 770 bp in length, respectively, with a total length of the combined alignments of 1,600 bp. Variable sites comprised 40 substitutions and 15 indels. In total, 17 haplotypes (H1-H17) were identified (Fig 1, Table 1). Among these, H1 and H2 were widespread haplotypes, occurring in 23 (67.65%) and 16 (47.06%) populations, respectively. All 17 haplotypes were found in the QTP and only five in the TSR (H1-H5). Thus, no haplotype was exclusive to the TSR. Individual populations contained a maximum of five haplotypes and a minimum of one.
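Haplotype diversity (h), as reported per population in Table 1, is conventionally computed with Nei's unbiased estimator, h = n/(n-1) * (1 - Σ p_i²), where the p_i are the haplotype frequencies in a sample of n individuals. A minimal sketch of that quantity follows; the haplotype counts are invented, not data from this study.

```python
# Nei's unbiased haplotype (gene) diversity for one population:
# h = n/(n-1) * (1 - sum_i p_i^2), with p_i the haplotype frequencies.

def haplotype_diversity(counts):
    n = sum(counts)
    if n < 2:
        return 0.0
    freqs = [c / n for c in counts]
    return n / (n - 1) * (1.0 - sum(p * p for p in freqs))

# e.g. a population of 10 individuals carrying three haplotypes (5, 3 and 2 copies)
print(round(haplotype_diversity([5, 3, 2]), 3))   # 0.689
print(round(haplotype_diversity([10]), 3))        # 0.0 for a population fixed for one haplotype
```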
There was no significant correlation between the number of sampled individuals per population and the number of haplotypes (R = 0.3; P > 0.05).

Genetic diversity and structure. Haplotype diversity (h) ranged from 0.000 to 0.905, and the YJ (Yajiang) population in the Sichuan Hengduan Mts. contributed the highest value (Table 1, Fig 1). Nucleotide diversity (π) varied between 0.000 and 11.210×10^-3, with the maximum present in the CD2 (Changdu) population in SE Tibet (Table 1, Fig 1). Total genetic diversity based on haplotype variation across all populations was H_T = 0.882 and the average within-population diversity was H_S = 0.559 (Table 2). The permutation test showed that there was no significant difference between G_ST = 0.366 and N_ST = 0.376 (U = 0.11; P > 0.05). Thus, the hypothesis of a strong phylogeographic pattern was rejected. In the SAMOVA analyses, F_CT values decreased progressively as the value of K (the number of groups) increased from 2 to 10, with no unambiguous number of K supported. Here, too, the hypothesis of a phylogeographic pattern was rejected. Furthermore, the Mantel test revealed a significant correlation between genetic and geographical distances (R = 0.127, P < 0.001) over all populations. However, a genetic break (barrier) separating the TSR populations from those of the QTP was found with a robustness of 90% (Fig 2). This barrier corresponds to the arid land between the two disjunctive geographic regions. Hierarchical analysis of molecular variance (AMOVA) showed that little variation (2.52%) was partitioned between the two putative groups of populations, while 33.03% and 64.44% of the variation was partitioned among populations within groups and within populations, respectively (Table 3).

Phylogenetic and genealogical relationships of cpDNA haplotypes. The topology of the Neighbor-joining (NJ) tree calculated for the 17 haplotypes from 319 P. kansuensis individuals is shown in Fig 3. Four clades were strongly supported (≥ 94% bootstrap support). The haplotypes in clades I and II mainly occurred in populations from the SE of the QTP, with the exception of H2, a widespread haplotype present in 16 populations. Clade IV contained four haplotypes which all stem from the NE edge of the QTP and the TSR. Clade III was the most complicated one, containing seven haplotypes distributed in 33 populations. Among these haplotypes, H1 represented the most widespread haplotype in our study, occurring in 23 populations. H9-H11 were found in the SE of the QTP. H5 was found in the NE of the QTP and in the TSR. The median-joining network obtained with NETWORK ver. 4.6 [61] showed the same phylogenetic relationships as those revealed by the NJ tree (Fig 3). Also, the maximum parsimony (MP) and maximum likelihood (ML) trees were essentially identical to the NJ tree with respect to the major clades and are thus not shown here.

Lineage divergence time and population spatial expansion. Divergence times between the haplotypes ranged from 2.339 to 0.034 Mya when the rate of 3.0×10^-9 s/s/y was specified in BEAST, and from 0.885 to 0.012 Mya for 8.24×10^-9 s/s/y (Fig 3). The mismatch distributions of pairwise differences over all populations and over the two geographical groups were clearly multimodal (Fig 4), indicating that this species has not experienced a sudden expansion. This was corroborated by positive and non-significant Tajima's D and Fu & Li's D* tests (Table 4).

Species distribution modeling. The 'area under the curve' (AUC) values for the training and the test data of P.
kansuensis amounted to 0.998 and 0.995, respectively, indicating good performance of the present-day distribution model.

Genetic Diversity and Genetic Structure. In this study, we detected 17 haplotypes from 319 P. kansuensis individuals belonging to 34 populations in the QTP and the TSR. In comparison with P. longiflora, a congeneric species of P. kansuensis which shares many life-history traits (e.g. annual/biennial, insect-pollinated, outcrossing, mid-successional) and has an almost matching distribution range on the QTP with northernmost occurrences in either the Tianshan or the Altay Mts. [43,72,73], the haplotype diversity was slightly higher in P. kansuensis (H_T = 0.882 vs. H_T = 0.770), though the number of haplotypes found was lower than in P. longiflora (30 haplotypes, 41 populations, 910 individuals) [35]. When compared with less closely related plant species studied in both the QTP and the Tianshan region, the haplotype diversity of P. kansuensis was also higher (e.g. Aconitum gymnandrum [77]). This comparison still holds true when only the haplotype diversity of populations from the TSR is considered (Tables 1 and 2). Gene flow of cpDNA is only possible by means of seeds or clonal plant fragments [78,79]. In the absence of asexual means of reproduction in P. kansuensis, our results, i.e. high genetic diversity within populations and low population differentiation, suggest relatively frequent seed exchange among populations. The seeds of P. kansuensis, however, have no obvious morphological adaptations to wind, water or animal dispersal [80,81]. Nevertheless, water flow has been shown to be an effective way of seed dispersal for P. kansuensis [82], but the hydrography of the Tarim Basin makes this option improbable. Also, secondary wind dispersal across frozen land surfaces seems unlikely given the elevational gradients and northerly winter wind direction [83]. Animal activities, especially migratory birds, as well as transportation of contaminated herbage seeds may have played a role in the dispersal of seeds across the Tarim Basin [84][85][86], despite the fact that direct observations are lacking. These latter two options could well explain why we did not find a strong phylogeographic signal with a clear separation of populations from the QTP and the TSR. The SAMOVA results rendered no support for a distinct number of K groups of populations, the comparison of G_ST and N_ST values showed no significant difference (Table 2; U = 0.11; P > 0.05), and only 2.52% of the molecular variance could be attributed to differences between the two regions (Table 3). The genetic barrier is thus very weak despite being detected with high robustness (Fig 2) in BARRIER. As in the few available studies of species with a similar distribution pattern, we found no convincing evidence for genetic differentiation in P. kansuensis between the QTP and the TSR [35-37].

Extensive survival in the QTP through the Quaternary. The divergence of all P. kansuensis haplotypes could be dated back to 2.339 (0.850) Mya, a time window that coincides with the early or middle Pleistocene, suggesting that P. kansuensis withstood the extensive climate changes during the Quaternary. During this period, the QTP experienced four major glaciations [87] and several glacial and interglacial cycles [88]. Based on the estimated divergence times of the main lineages and of most of the haplotypes, we presume that the Quaternary climatic oscillations may have greatly shifted the distribution range of P.
kansuensis in the QTP, affected its divergence events, and shaped its phylogeographic structure, just as reported for other plant species [1-4,28,29,89]. Based on recent phylogeographical studies of the demographic history of plant species from the QTP, two main refugium hypotheses have been proposed. One hypothesis suggests that some species may have retreated to the eastern or south-eastern plateau edge (e.g. Hengduan Mts.) as refugia during the Quaternary glacial periods, and then recolonized the QTP and its surrounding regions during the interglacial phases or at the end of the Last Glacial Maximum (LGM) [35,90-95]. The other hypothesis suggests that some species may have survived in situ on the QTP and in its surrounding regions throughout the Quaternary [96,97]. Previous studies in NW China showed species survival in the East Tianshan Mountains [22] and the Ili (Yili) Valley [33] during the Quaternary. Refugia are usually correlated with high levels of genetic diversity and unique haplotypes [98]. In this study, H1 and H2 were widespread haplotypes, occurring in 23 (67.65%) and 16 (47.06%) populations, respectively. Some haplotypes (e.g. H8, H9, H10, H11, H15, H16, and H17) occurred in the SE QTP and the Hengduan-Himalayan Mts., while H3, H5, H13, and H14 occurred in the NE QTP (e.g. Qilian Mts.) and the Tianshan Mts. It therefore seems that P. kansuensis might have survived in known refugial areas in the SE QTP (e.g. Hengduan Mts.) and at the edge of the NE QTP (e.g. Qilian Mts.). However, the results of the mismatch analyses (Fig 3) and the Tajima's D and Fu & Li's D* tests (Table 3) indicated that a recent range expansion can be rejected, suggesting instead that P. kansuensis survived extensively on the plateau during the LGM. This was also confirmed by the SDM results, in which the LGM potential distribution did not show obvious shrinkage in the QTP in comparison with the current distribution (Fig 5).

Long-distance dispersal from the QTP to the TSR after the LGM. In this study, all 17 detected haplotypes were found in the QTP, while only five (H1-H5) occurred in the TSR (Fig 1). All five haplotypes (H1-H5) found in the TSR also occurred in the NE of the QTP, of which H1 and H2 were widespread over the entire distribution range (Fig 1). In the phylogenetic tree, neither the five (H1-H5) nor the three (H3-H5) haplotypes formed a single clade, but rather clustered with other haplotypes (Fig 3). The divergence time of H1-H5 corresponded to the divergence time of all haplotypes and was dated back to the early Pleistocene, at 2.339 (0.850) Mya, while that for H3-H5 dated to 0.620 (0.225) Mya, and that for H3 and H4 to 0.17 (0.062) Mya. Furthermore, a wide arid barrier between the TSR and the QTP had developed and aridification had begun by the early Pleistocene [99]. The divergence times of the shared haplotypes were thus later than the expansion of this aridification (Fig 3). This prediction was further supported by the low molecular variance between groups (2.52%, Table 3). Therefore, the disjunctive distribution of P. kansuensis was unlikely to be the result of range fragmentation, but was instead shaped by long-distance dispersal across the wide arid land. Generally, long-distance dispersal is characterized by movement from a region of high genetic diversity to a region of low genetic diversity [23,25]. The genetic diversity (H_T) of the QTPG is significantly higher than that of the TSG (H_T = 0.880 vs. H_T = 0.753) (Table 2).
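The two substitution rates bracket the node ages quoted above; the relationship can be seen with the classical approximation T = K/(2μ), where K is the pairwise divergence between haplotypes and μ the substitution rate. The study's dates come from a relaxed-clock BEAST analysis, so the sketch below, with an invented K, is only a back-of-the-envelope check of how the two rates translate into paired ages.

```python
# Rough molecular dating: T = K / (2 * mu), with K in substitutions/site and
# mu in substitutions/site/year. K here is illustrative, not an observed value.

def divergence_time_mya(K, mu):
    return K / (2.0 * mu) / 1e6   # convert years to Mya

K = 0.014  # hypothetical pairwise cpDNA divergence of 1.4%
for mu in (3.0e-9, 8.24e-9):
    print(f"mu = {mu:.2e} s/s/y -> {divergence_time_mya(K, mu):.3f} Mya")
# The slower rate gives the older age, which is why each node is reported as a
# pair such as 2.339 (0.850) Mya.
```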
By this token, long-distance dispersal across the arid land from the QTP, especially from the northeast of the QTP, to the TSR could be the reason for the disjunctive distribution of P. kansuensis. In P. longiflora, the single haplotype that genetically connects the Altay Mts. with the NE of the QTP was also estimated to have diverged around 0.138 Mya. Both P. longiflora and P. kansuensis show a lower level of genetic diversity in the Tianshan-Altay region than in the QTP. Given that the cradle of the genus Pedicularis most likely lies in the Hengduan-Himalayan Mts. at the SE of the QTP [100], the NW Chinese Tianshan and Altay Mts. were presumably colonized from the QTP at the earliest during the last interglacial of the late Pleistocene [87]. At that time the arid land barrier between the different mountain ranges as seen today must have been discontinuous to allow for seed flow. Evidence for this scenario is, however, lacking. The species distribution model (SDM) results show an absence of P. kansuensis from the TSR during the LGM (Fig 5). This indicates that colonization must have occurred after the LGM, hence rather recently, which corroborates the genetic findings. Nevertheless, the reliability of the SDM is to be taken with caution, as we failed to detect any range expansion or contraction in the QTP, which would be an intuitive expectation (Table 3, Fig 3).

Conclusion. Based on phylogeographical and species distribution modeling analyses, we propose that P. kansuensis survived on the QTP throughout the LGM. The present-day disjunct distribution in the Qinghai-Tibetan Plateau and the Tianshan Region is likely the result of multiple bird- or human-assisted long-distance seed dispersal events crossing the arid land of the Tarim Basin after the LGM, particularly from the northeastern fringes of the QTP to the Tianshan Mts.

Supporting Information S1
2018-04-03T06:00:08.377Z
2016-11-02T00:00:00.000
{ "year": 2016, "sha1": "f242eb3438ec9068e20c4009727a261c28ee7389", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0165700&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f242eb3438ec9068e20c4009727a261c28ee7389", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
5350868
pes2o/s2orc
v3-fos-license
Authors' Reply I have read with interest the observational study regarding risks of adverse outcomes associated with gabapentin and pregabalin use.1 The authors adeptly analyzed data in the US Renal Data System database, including Medicare Part D prescription drug events (i.e., claims), and identified positive associations of gabapentin and pregabalin exposure with adjusted hazards of first episodes of altered mental status, fall, and fracture requiring either an emergency room visit or hospitalization. The study suffers from one unappreciated limitation. The authors attempted to adjust for the influence of concomitant use of benzodiazepines. Evidently, <0.5% of all patients in the study cohort used benzodiazepines. Such low utilization is clinically implausible but can be explained by the design of the Medicare Part D benefit during the study era (i.e., in 2011). The Medicare Modernization Act excluded benzodiazepines from Part D coverage between 2006 and 2012.2 This exclusion was eliminated by the Patient Protection and Affordable Care Act. The fact that the authors identified any use of benzodiazepines during the study era reflects the provision of “enhanced alternative coverage,” which generally offers a higher monthly premium in exchange for added value (e.g., reduced deductible, coverage of drugs not ordinarily included in the Part D benefit, reduced cost sharing in the coverage gap, etc.). Benzodiazepine toxicity is positively associated with risks of altered mental status, fall, and fracture.3 Thus, because of inherent limitations of data ascertained from Medicare Part D, unmeasured confounding by benzodiazepine use cannot be discounted. Because of the study design, unmeasured confounding by opioid use also cannot be discounted. Ultimately, polypharmacy is common among patients on dialysis,4 and therefore, observational studies of the efficacy and safety of single drugs or single classes of drugs should be interpreted cautiously. Gabapentinoid toxicity is an observable event, but the magnitude of the associations reported by Ishida et al.1 may be biased.

Pharmaceutical companies throughout the world market their products aggressively through a variety of promotional campaigns [1]. In India, these marketing practices pose a greater problem because the restrictions on drug dispensing are very limited, with drugs often being dispensed without a prescription from a licensed physician. The companies take full advantage of this situation. As many patients in India are poor and illiterate, and lack information on health care, they often visit local pharmacists or quacks for medical advice. Pharmacists routinely dispense drugs illegally over the counter. We visited 40 local pharmacy stores for medical advice for a feigned medical ailment, and we found that all 40 pharmacists dispensed drugs, including expensive antibiotics [2]. Pharmaceutical promotional campaigns in India, unlike those in developed countries (where pharmacists have little influence on drug sales), are not only aimed at changing the prescribing habits of physicians but also at pharmacists and quacks. Pharmaceutical companies in India offer various schemes and incentives (including television sets, motorcycles, and the opportunity for higher profit margins) to lure pharmacists into buying more drugs than they would normally need. As a result, the pharmacists make every effort to sell these drugs to patients visiting them for medical advice.
They may also associate themselves with quacks or physicians in their efforts to shift their stock of the drugs. In developed countries, dubious pharmaceutical marketing practices would soon attract the attention of watchdog bodies and social activists, but in India they go undetected. We believe that this situation demands proactive action on the part of the medical profession and also of the government. The efforts of the pharmaceutical industry to medicalize human life should be resisted. We do not wish India to be in the same position as the countries of the West, where adverse drug reactions are responsible for a significant proportion of hospital admissions and require millions of outpatient visits and corrective measures. In the United States, for example, there are about 100,000 deaths due to medical errors every year, of which about 7,000 are attributed to drug reactions [3]. We believe it is important to assess current awareness about disease mongering among medical and pharmaceutical students, as pharmaceutical promotional campaigns are aimed at both professions. Assessing current awareness could provide a basis for further research, leading to the development of effective measures that will raise awareness levels and motivate students to participate in future campaigns that seek to combat disease mongering. Most medical and pharmaceutical students in India are not aware of the issue of disease mongering; neither do most of them know that recent audits have shown medical interventions and adverse drug reactions to be major causes of death and disability in the US [4]. Articles have been published warning the profession about disease mongering [5][6][7], but for the most part these warnings have not been heeded. One is reminded of Aristotle, who so rightly observed that "truth could influence only half a score of men in a century, while falsehood and mystery would drag millions by the nose." We prepared a 20-item questionnaire (Text S1) about disease mongering and the influence of the drug industry on clinical practice. The questionnaires were distributed among a random sample of 250 final-year medical and 250 final-year pharmaceutical students. The overall response rate was 406 out of 500 (81.2%), comprising 199 medical and 207 pharmaceutical students. Of the medical students, 30 out of 199 (15%) were able to explain disease mongering with relevant examples. Of the pharmaceutical students, 114 out of 207 (55%) were able to do so, suggesting that awareness of the problem was much greater among these students. Interestingly, however, 87 out of 114 pharmaceutical students believed the government, not the pharmaceutical industry, was responsible for the problem. All the students, both medical and pharmaceutical, said they had frequently seen drugs dispensed without prescription. They had also often seen patients visit local pharmacists for medical advice. They agreed that both practices were unethical. However, both the medical and the pharmaceutical students were unaware of the incentives offered by drug companies to pharmacists for buying their drugs, which leads to unethical dispensing. We believe that our small project, despite its inherent limitations, has thrown some light on the situation.
Pharmaceutical students, who are exposed to the drug industry to some extent during their studies, have some idea of the magnitude of the problem, while the majority of medical students have no idea that even their textbooks are written with the help of money that comes from drug companies [8]. We need to make a more concerted attempt to educate the student community of all the health-care professions, in order to counter this unfair tendency. The government should undertake major initiatives to ensure that drugs are only dispensed with a prescription from a licensed physician. Medical associations and medical college administrators should alert their members to cross-check the information provided in drug company literature. Medical students should be warned about disease mongering through the display of posters, and through the organization of essay competitions and interactive plays. Students can play a further role by conducting regional and national surveys of the awareness of the public concerning this serious issue. Supporting Information Text S1.

Syphilis: A Forgotten Priority Damian Walker, Godfrey Walker Peter Hotez and colleagues [1] provide a persuasive case for incorporating a rapid-impact package for "neglected tropical diseases" with programs for HIV/AIDS, tuberculosis, and malaria as part of a pro-poor strategy for improving health in the developing world. However, we believe there is a disease that has a high claim to be included in partnerships and initiatives devoted to what the authors term the "Big Three", and yet has been largely ignored. On the basis of the criteria identified by Hotez et al., the case for giving explicit priority to programs to control syphilis, and particularly congenital syphilis, is high [2]. In 2002, there were 157,000 deaths contributing to more than 4 million disability-adjusted life years (DALYs) (see annex tables 2 and 3 in [3]). These estimates exclude the burden attributable to maternal syphilis, which includes 460,000 abortions or stillbirths, 270,000 low-birth-weight babies, and 270,000 cases of congenital syphilis each year [4]. This burden is concentrated in Africa and exhibits considerable geographic overlap with HIV infection. Syphilis accounts for 20% of genital ulcer diseases and is a cofactor in the transmission of HIV, and both infections appear to progress more rapidly when they occur together [5]. Infection with syphilis is curable, and control is possible with existing drugs (specifically penicillin). However, little attention has been given to this in the context of the Big Three. Azithromycin is included in the chemotherapy package proposed by Hotez et al. for the control of trachoma, and there are clear synergies with syphilis control. A recent trial in Tanzania demonstrated that a single dose of oral azithromycin is as effective as injectable penicillin G benzathine in treating early and latent syphilis [6]. However, some caution is needed concerning the widespread use of azithromycin for syphilis in view of the recent emergence of azithromycin-resistant Treponema pallidum [7]. There are other possible synergies in having a strategy that includes syphilis control; e.g., during routine antenatal care, chemotherapy for soil-transmitted helminths could be provided at the same time as offering voluntary counselling and testing (VCT) for HIV infection and screening for maternal syphilis. The control of syphilis has been shown to be highly cost-effective.
If the control of syphilis was integrated into programs dealing with the four priority disease groups advocated by Hotez et al., then the cost-effectiveness of tackling not only syphilis but also the other four major public health priorities would improve. Furthermore, it would lessen the chance that a patient avoids death from one disease only to die from another [8]. While it might be hoped that the case for giving priority to syphilis would have been accepted and explicit emphasis given to programs to control this disease, this has not happened. Unfortunately, limited attention is given to syphilis control in the several partnerships devoted to the Big Three. Maybe this is because syphilis has historically carried a social stigma, and has therefore been neglected. Now is the time to change this as part of a pro-poor strategy to meet the Millennium Development Goals. We suggest it be explicitly included in the rapid-impact package for neglected tropical diseases.

[2,3], others had reported an association between the T790M mutation in exon 20 and resistance to TKIs [4,5]. As we learn more about the relationship between EGFR status (including gene copy number, mutation status, and mutation type) and drug sensitivity, decisions about treatment with TKIs for patients with non-small-cell lung cancer (NSCLC) become more complex. Previously, we reported HER2 mutational status, as well as EGFR and KRAS, in a large number of NSCLCs [6]. We found that 22% of tumors had EGFR kinase domain mutations (149 out of 671). Of these 149, 15 were insertion mutations in exon 20. The more common types of mutations, i.e., deletion in exon 19 (68 out of 149) and L858R in exon 21 (61 out of 149), were more frequent in women and in "never smokers" (with p values of less than 0.001), whereas the exon 20 insertion mutations showed no bias for sex (seven in males versus eight in females) or smoking status (seven in smokers versus eight in never smokers). We have no data regarding TKI sensitivity for the 15 patients with insertion mutations to date, but based on the results from others and our own in vitro data, they may not benefit from the conventional TKIs, despite the fact that their tumors have EGFR kinase domain mutations. To maximize benefit to patients, we should determine the exact type of mutation for an individual tumor and determine whether it conveys sensitivity or resistance prior to TKI therapy. The development and clinical application of novel agents overcoming resistance should yield a more effective targeted therapy for tumors with all types of EGFR mutations.

in those with the trait, i.e., a relative risk of 2.5. These are likely to be close to the actual annual numbers for risk of haemorrhage under anticoagulant treatment with and without the VKORC1 variant [1,2]. If we follow 10,000 people for one year (8,000 without the trait, of whom 80 will develop disease; 2,000 with the trait, of whom 50 will develop disease), then RR = (50/2,000)/(80/8,000) = 2.5000, and OR = (50/1,950)/(80/7,920) = 2.5385. When the OR is written out in full as [(50/2,000)/(1,950/2,000)]/[(80/8,000)/(7,920/8,000)], this can easily be reduced to the above. In a case-control study, all cases are included, but only a fraction of all non-cases (controls) is included. With a sampling fraction of 1/10, the case-control study sampled from this cohort would look like the following: 80 cases without and 50 cases with the trait, and 792 controls without and 195 controls with the trait (OR = [50/195]/[80/792] = 2.5385).
With a sampling fraction of 1/100, there would be 79.2 unexposed and 19.5 exposed controls, and the OR would still be 2.54. This demonstrates that the actual risk or odds of disease cannot be derived once only a sample of the individuals without disease is included, but that the ratio of exposed over unexposed controls (195/792) remains valid whatever the sampling fraction. This has been called the "exposure odds", and many prefer to write the OR as the exposure odds ratio: OR = (50/80)/(195/792) = 2.54. In a case-control study, because the number of controls is only a fraction of the actual number of individuals without disease in the cohort, absolute risks cannot be calculated, and a recalculation from OR to RR is not possible (unless there is external information on the absolute risks). This implies that it is not possible to calculate from our data how different the OR was from the RR, as Steinsmith tried. We can, however, in this particular case, make an estimate, since we know the risk of haemorrhage under anticoagulant treatment from previous studies to be around 1% per year. With a background risk of 1% per year, all the ORs mentioned in our paper are within 2% of the relative risk. The highest OR of 2.6 (2.5641) would relate to a relative risk of 2.5 (2.5246), a trivial difference. Steinsmith's further suggestions for analyses, i.e., to use likelihood ratios, are relevant to studies of diagnostic tests in which the aim is to evaluate the presence or absence of disease. This is not the analysis one would use in aetiologic studies such as ours. Generally, since most diseases are infrequent, ORs are good estimators of relative risks under this "rare disease assumption". For a disease with a frequency of 10%, which is high, the difference between OR and RR is still only 10%. On a higher theoretical level, one could argue that the parameter to estimate is not the relative risk, but the rate ratio, i.e., the ratio of two incidence rates. While a cumulative risk is a probability, an incidence has time^-1 as its unit, and lies between zero and infinity. Since the incidence rate is the basic measure of disease occurrence, the rate ratio is the prime comparator, to be preferred over relative risks (which, over time, will converge to unity, because, to quote John Maynard Keynes, "in the long run we are all dead"). It can be shown that under certain sampling conditions, i.e., when controls are sampled from a dynamic population, there is no need for the "rare disease assumption", and the OR is the exact equivalent of an incidence rate ratio [4].
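The cohort and case-control arithmetic above can be checked directly. The short script below only re-derives the numbers quoted in the reply (10,000 people, 2,000 with the trait, yearly risks of 2.5% and 1%):

```python
# Full cohort followed for one year
cases_exp, total_exp = 50, 2000        # with the trait
cases_unexp, total_unexp = 80, 8000    # without the trait

RR = (cases_exp / total_exp) / (cases_unexp / total_unexp)
OR = (cases_exp / (total_exp - cases_exp)) / (cases_unexp / (total_unexp - cases_unexp))
print(f"cohort: RR = {RR:.4f}, OR = {OR:.4f}")             # 2.5000 and 2.5385

# Case-control design: all cases, controls sampled with fraction 1/10
controls_exp = (total_exp - cases_exp) / 10                # 195
controls_unexp = (total_unexp - cases_unexp) / 10          # 792
exposure_OR = (cases_exp / cases_unexp) / (controls_exp / controls_unexp)
print(f"case-control: OR = {exposure_OR:.4f}")             # 2.5385, unchanged by the sampling fraction
```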
2016-05-12T22:15:10.714Z
2006-04-01T00:00:00.000
{ "year": 2006, "sha1": "38350ed3d4b63ff4410ee6b99be098ff1844c55b", "oa_license": null, "oa_url": "https://doi.org/10.5005/ijccm-15-3-196", "oa_status": "BRONZE", "pdf_src": "Anansi", "pdf_hash": "c7a96d2e78307ef6bd41efafaa1371fca6d3662f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
214058435
pes2o/s2orc
v3-fos-license
Electromechanical Properties and Temperature Stability of 1-3 Type PZT/Epoxy Piezoelectric Composite 1-3 type PZT5H/epoxy composites were fabricated via the dice-and-fill method, by which PZT5H ceramic blocks were processed into a series of rod arrays with kerf widths of 120 μm ~ 260 μm and pillar widths of 120 μm ~ 450 μm. A proper thickness was chosen to guarantee that the aspect ratio (height/width) is higher than 3. The piezoelectric properties of the 1-3 composites as a function of ceramic volume fraction and aspect ratio were investigated. The resultant composites show a high piezoelectric coefficient d33 > 500 pC/N, a low mechanical quality factor Qm = 2.9~7.7, an enhanced electromechanical coupling coefficient kt = 0.66~0.74 and a low acoustic impedance Z = 9~17 Mrayl, as well as good temperature stability in the investigated temperature range from -50°C to 150°C.

Introduction. The 1-3 type piezoceramic/polymer composites are ideal functional materials for fabricating ultrasonic transducers for applications in medical imaging and underwater detection, because of their merits of high sensitivity, wide bandwidth and narrow pulse [1,2,3]. This type of composite is formed by embedding inorganic piezoelectric ceramic rods in a three-dimensionally connected polymer matrix. Such composites maintain the high piezoelectric properties of the ceramics and provide the low acoustic impedances required for good matching with water and human tissue, in addition to enhanced electromechanical coupling in the thickness mode. Moreover, the piezoelectric properties of the 1-3 composites can also be tailored by varying the ceramic volume fraction, aspect ratio and periodicity to optimize the transducing performance. A large number of studies on the piezoelectric properties of 1-3 piezoelectric composites have been carried out in the past decades [4,5,6]. Nevertheless, studies on their temperature stability are still very limited. In this work, we systematically investigated the influences of ceramic volume fraction, aspect ratio and temperature on the piezoelectric properties of the 1-3 composites. Several 1-3 type piezoelectric composites with the characteristics of high d33, large kt, low Z, low Qm and good temperature stability were successfully fabricated by optimizing the ceramic volume fraction and aspect ratio.

Experimental procedure. Commercially available PZT5H ceramics (Zibo Yuhai Ceramic Co. Ltd, China) and epoxy resin (EpoThin 2 from BUEHLER Co. Ltd, USA) were used as the starting raw materials. The 1-3 type piezoceramic/epoxy composites were fabricated by the dice-and-fill method. Three dicing saws with different blade thicknesses (250 µm, 150 µm and 100 µm) were used to machine different PZT rod arrays. Electrodes were formed on the upper and bottom surfaces with silver. Mass density was measured using the Archimedes method. Surface morphology and cross-sectional microstructure were observed by optical microscopy. The piezoelectric coefficient d33 was measured with a quasistatic piezoelectric constant testing meter ZJ-3A (Institute of Acoustics, Chinese Academy of Sciences). The ceramic volume fraction (CVR) was calculated from the pillar widths and the kerf widths of the ceramic rod arrays. Piezoelectric characteristics were analysed with an Agilent 4294A impedance analyser. The electromechanical coupling coefficients and acoustic impedance were determined by a resonance and anti-resonance method performed on the basis of IEEE standards. The composite density ρ was calculated by Eq.
(1). The planar coupling factor kp was evaluated by using the IEEE standard curve of frequency separation Δf versus kp for thin discs, and Δf was calculated from Eq. (2). The thickness coupling factor kt, the longitudinal electromechanical coupling factor k33 and the mechanical quality factor Qm were determined from Eqs. (3), (4), and (5), respectively. The quantities vc, ρc and ρp are the ceramic volume fraction, ceramic density, and polymer density, respectively. fr is the frequency at which the electrical impedance shows a minimum in the thickness resonance mode. f+0.5 and f-0.5 are the upper and lower frequencies at which the conductance is half of that obtained at fr on the conductance versus frequency curve, respectively. fp and fs are the frequencies of a thin ceramic disc at which the electrical impedance shows a maximum and a minimum in the planar resonance mode, respectively.

Figure 1 is the X-ray diffraction (XRD) profile of an unpoled PZT5H ceramic recorded at room temperature. This XRD profile indicates that the ceramic is of pure perovskite structure without any secondary impurity phase. The peaks around 2θ = 45.5°, which correspond to {200} reflections, are partially enlarged and can be well fitted with three Lorentzian curves, as shown in the inset of Figure 1. Based on the peak intensities, we can judge that the PZT5H ceramic is in the R-T phase-coexistence state at room temperature, and the volume fractional ratio of R-phase to T-phase is approximately 1:3. Table 1 shows the various physical properties of the PZT5H ceramic. It possesses a high d33 of 740 pC/N, a large longitudinal electromechanical coupling factor k33 of 0.82 and a low dielectric loss tan δ of 0.02. As demonstrated in Figure 2, the temperature stabilities of the electromechanical coupling factors are quite good in the investigated temperature range from -50 °C to 150 °C. Thus, this commercial PZT is considered an ideal ceramic material for fabricating the 1-3 type piezoceramic/polymer composites. Figure 3 shows optical microscope images of the microstructure of the 1-3 type composites with CVRs of 25% and 41%, respectively. As can be seen, periodically ordered arrays with defect-free ceramic rods and pore-free epoxy were successfully fabricated. The aspect ratio (= thickness/pillar width) is calculated to be higher than 8, according to Figure 3(c). In addition, it can be seen that the ceramic rods and the epoxy matrix are well bonded. A series of 1-3 type PZT5H/epoxy composites with CVRs varied from 21% to 51% were prepared by varying the pillar width and kerf width in this study. The impedance and phase vs. frequency spectra of the PZT5H ceramic and of a 1-3 type PZT5H/epoxy composite with CVR = 31% are shown representatively in Figure 4(a) and 4(b), respectively. The PZT5H ceramic exhibits strong planar modes at lower frequencies. The second largest peak, near 1.0 MHz, is the thickness mode. In contrast, the 1-3 type PZT5H/epoxy composite reveals a clean thickness mode near 2.0 MHz and only trivial planar modes. This means that, compared with the pure PZT5H ceramic, kp of the 1-3 composite is dramatically decreased while kt is increased. Figure 5(a) shows the dielectric constant ε′ and the piezoelectric coefficient d33 of the 1-3 type PZT5H/epoxy composites as a function of CVR. Clearly, ε′ changes nearly linearly with increasing CVR. However, the change of d33 with CVR is somewhat more complicated. In the range of small CVR values, d33 increases rapidly with increasing CVR and reaches 425 pC/N at CVR = 20%.
Then, d33 increases slowly but continuously with a further increase of CVR, and reaches 625 pC/N at CVR = 48%. Thus, even when the CVR is only 25%, d33 can be as large as 500 pC/N. Figure 5(b) shows the longitudinal piezoelectric voltage coefficient g33 and the electromechanical coupling coefficient kt of the 1-3 type PZT5H/epoxy composites as a function of CVR. It is well known that d33 characterizes the ultrasonic beam transmission capability of a transducer, while its echo receiving sensitivity is directly related to its g33. Thus, we consider it very important to know how g33 changes with CVR. According to Figure 5(b), g33 and kt show the same trend, i.e., they first increase and then decrease with increasing CVR. The maxima of g33 = 0.091 Vm/N and kt = 0.71 are observed at CVR = 25% and 48%, respectively. A large g33 implies a high ability to detect low-intensity ultrasonic vibrations, leading to improved detection sensitivity of the transducers. As expected, kt can be enhanced effectively by using the 1-3 connectivity, as indicated in Figure 5(b). It is interesting that all of the 1-3 PZT5H/epoxy composites with different CVRs show a higher kt than the PZT5H ceramic. In particular, the one with CVR = 48% shows a kt maximum of 0.71, which is significantly higher than that of the PZT5H ceramic, whose kt is 0.56. Figure 6 presents the change of kt of the 1-3 type PZT5H/epoxy composites as a function of thickness. As can be seen from these figures, kt first increases and then decreases slowly with increasing thickness. For these two 1-3 type PZT5H/epoxy composites, the maximum kt is obtained at thicknesses of 0.60 mm and 0.80 mm, respectively. Interestingly, these thicknesses correspond to aspect ratios (thickness/pillar width) of about 3.0. Table 2 lists the various physical properties relevant to ultrasonic transducing for a series of 1-3 type PZT5H/epoxy composites with different CVRs. As can be seen from the table, all these composites possess a relatively low loss tan δ (≤0.051) and a low acoustic impedance Z (9.1~17.1 Mrayl), which is favorable for matching the impedance of the load (such as water or human tissue) to minimize reflection losses at the interface. Meanwhile, the low Qm (2.9~7.7) is good for transducers with wide bandwidth and narrow pulse. As shown in Figure 6, the performance of a 1-3 type composite is affected by its thickness. For the 1-3 type PZT5H/epoxy composite with CVR = 25% and kerf = 260 µm, the maximum kt of 0.71 is obtained at a thickness of 0.80 mm, which corresponds to an aspect ratio of 3.1. The physical properties of this 1-3 piezoelectric composite at the optimum thickness of 0.80 mm are summarized in Table 3. The temperature stability of its kt is shown in Figure 7. As can be seen, kt maintains a high level of 0.68~0.74 with a weak temperature dependence in the whole measured temperature range from -50 °C to 150 °C.

Conclusion. A series of 1-3 type PZT5H/epoxy piezoelectric composites were fabricated by the dice-and-fill method. The influences of ceramic volume fraction, aspect ratio and thickness on the physical properties relevant to ultrasonic transducing were systematically investigated. Compared to the PZT5H ceramic, the 1-3 type composites show generally increased kt but decreased kp. A high-performance 1-3 type PZT5H/epoxy piezoelectric composite was successfully prepared by optimizing the ceramic volume fraction and the aspect ratio.
This composite possesses excellent ultrasonic transducing properties of d33 = 515 pC/N, kt = 0.73, Z = 9.8 Mrayl and Qm = 3.7. Moreover, its kt shows very desirable temperature stability over the wide temperature range between -50 °C and 150 °C.
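The resonance relations invoked in the experimental section (Eqs. (1)-(5)) did not survive into this text, so the sketch below uses the corresponding textbook forms instead: the rule-of-mixtures density, the IEEE-style thickness-coupling expression from the series (fs) and parallel (fp) resonance frequencies, and a mechanical quality factor from the half-conductance bandwidth. The numerical inputs are illustrative only, not measured values from this study.

```python
import math

def composite_density(v_c, rho_c, rho_p):
    # Rule of mixtures: rho = v_c*rho_c + (1 - v_c)*rho_p
    return v_c * rho_c + (1.0 - v_c) * rho_p

def kt_from_resonance(f_s, f_p):
    # IEEE-style thickness coupling: kt^2 = (pi/2)(f_s/f_p) * tan[(pi/2)(f_p - f_s)/f_p]
    x = (math.pi / 2.0) * (f_s / f_p) * math.tan((math.pi / 2.0) * (f_p - f_s) / f_p)
    return math.sqrt(x)

def Qm_from_bandwidth(f_r, f_plus, f_minus):
    # Quality factor from the half-conductance bandwidth around resonance
    return f_r / (f_plus - f_minus)

# Illustrative values: PZT5H ~7500 kg/m^3, epoxy ~1100 kg/m^3, CVR = 25%
print(composite_density(0.25, 7500.0, 1100.0))          # 2700.0 kg/m^3
print(round(kt_from_resonance(1.8e6, 2.1e6), 2))         # ~0.55 for this made-up frequency pair
print(round(Qm_from_bandwidth(2.0e6, 2.3e6, 1.7e6), 1))  # ~3.3
```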
2019-12-05T09:43:43.360Z
2019-11-27T00:00:00.000
{ "year": 2017, "sha1": "cef3048a28b5d515db1cd069ae992c03dfc5ab97", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/678/1/012136", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "50b2a19679022f5b12b4af691f5b1296d800c82c", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
255684071
pes2o/s2orc
v3-fos-license
THSD7A Positivity Is Associated with High Expression of FAK in Prostate Cancer Prostate cancer is one of the most common malignancies, and there are a wide range of treatment options after diagnosis. Most prostate cancers behave in an indolent manner. However, a given sub-group has been shown to exhibit aggressive behavior; therefore, it is desirable to find novel prognostic and predictive (molecular) markers. THSD7A expression is significantly associated with unfavorable prognostic parameters in prostate cancer. FAK is overexpressed in several tumor types and is believed to play a role in tumor progression and metastasis. Furthermore, there is evidence that THSD7A might affect FAK-dependent signaling pathways. To examine whether THSD7A expression has an impact on the expression level of FAK in its unphosphorylated form, a total of 461 prostate cancers were analyzed by immunohistochemistry using tissue microarrays. THSD7A positivity and low FAK expression were associated with adverse pathological features. THSD7A positivity was significantly associated with high FAK expression. To our knowledge we are the first to show that THSD7A positivity is associated with high FAK expression in prostate cancer. This might be proof of the actual involvement of THSD7A in FAK-dependent signaling pathways. This is of special importance because THSD7A might also serve as a putative therapeutic target in cancer therapy. Introduction Prostate cancer is a common malignancy, having one of the highest incidences out of male cancers [1]. Though most prostate cancers behave in an indolent manner, a given sub-group exhibits an aggressive behavior, potentially leading to systemic disease and death [2,3]. For that reason, there are several treatment options for prostate cancer, ranging from watchful waiting and active surveillance to surgery, radiation, androgen deprivation therapy and chemotherapy [4]. Until now, prostate-specific antigen (PSA) levels in blood, the Gleason grade and tumor extension in biopsies have been the only established prognostic parameters in the preoperative setting. Given the wide range of therapeutic options and the concomitant side effects, novel predictive (molecular) markers are desirable. In a former study, we analyzed the role of thrombospondin type-1 domain containing 7A (THSD7A) as a potential tumor antigen in cancer [5]. Among other findings, we demonstrated that THSD7A overexpression is significantly associated with unfavorable prognostic parameters in prostate cancer, including the tumor stage, Gleason grade and lymph node metastasis as well as PSA recurrence. Several studies deal with the role of FAK in prostate cancer but only a few groups report the expression status of FAK in this cancer, and the results seem somewhat ambiguous [35][36][37][38]. The three major non-receptor tyrosine kinases, FAK, SRC and ETK, form the SRC tyrosine kinase complex, which in prostate cancer is suggested to play an important role in the aberrant activation of the androgen receptor (AR) by phosphorylation [39]. There is also evidence that FAK activation might be an important factor in androgen-independent progression to neuroendocrine carcinoma [40]. There is evidence that THSD7A might affect FAK-dependent signaling pathways [7,8]. The main objective of this study was to examine whether the THSD7A expression status has an impact on the expression level of FAK in its unphosphorylated form. 
The secondary objective was to examine the association of FAK in its unphosphorylated form and the common pathological parameters in prostate cancer. Therefore, a total of 461 prostate cancers were analyzed by immunohistochemistry (IHC) using tissue microarrays (TMAs). Tissue Samples Tissue samples were derived from primary surgically resected prostatectomy specimens (n = 461). All patients were treated at the Department of Urology at the Saarland University Medical Center, Saarland University, Homburg/Saar, Germany between 2012 and 2020. For all tumors, detailed histopathological data on Gleason grade and pT-status s were available; six patients lacked nodal status (Tables 1 and 2). Exclusion criteria for the cohort was having had neoadjuvant therapy. Tissue samples of the resected prostatectomy specimen were analyzed by immunohistochemistry (IHC) using tissue microarrays (TMAs). Tissue Microarrays TMA construction was performed using a manual tissue arrayer and according to the manufacturer's directions (Manual Tissue Arrayer, AlphaMetrix Biotech, Roedermark, Germany). Tissue cylinders each with a diameter of 0.6 mm were punched out of selected paraffin-embedded tumor tissue blocks and were brought into empty "recipient" paraffin blocks. Four µm sections of the TMA blocks were transferred to adhesion slides (Matsunami TOMO) and were used for immunohistochemistry (IHC). Bound antibody was then visualized using the ultraView Universal Alkaline Phosphatase Red Detection (Roche, Basel, Switzerland) according to the manufacturer's directions. Heat-induced antigen retrieval at 97 • C was performed with CC2 buffer (Ventana) for 56 min (THSD7A) and with CC1 buffer (Ventana) for 64 min, respectively. For evaluation of THSD7A expression, the percentage of positive cells was estimated, and the staining intensity was recorded semiquantitatively as 0, 1+, 2+ or 3+ for each tissue sample. The staining results were categorized into the following four groups for statistical analysis: tumors without any staining were considered negative; tumors with 1+ staining in ≤70% or with 2+ staining in ≤30% of cells were considered weakly positive; tumors with 1+ staining in >70% and with 2+ staining in >30% but ≤70% and with 3+ staining in <30% of cells were considered moderately positive; and tumors with 2+ staining in >70% and with 3+ staining in ≥30% of cells were considered strongly positive. To better define THSD7A expression, we dichotomized THSD7A expression as negative (no staining in any tumor cell) and positive (at least 1+ staining in at least a few tumor cells). For evaluation of FAK expression, the percentage of positive cells was estimated, and the staining intensity was recorded semiquantitatively as 0, 1+, 2+ or 3+ for each tissue sample. To better define FAK expression, we dichotomized FAK expression as low (tumors with 0 staining, 1+ staining, 2+ staining in ≤70% and 3+ staining in ≤30% of cells) or high (tumors with 2+ staining in >70% of cells and 3+ staining in >30% off cells). Statistics Statistical analysis was performed using R (R Corporation 2021, R Foundation for Statistical Computing). Fisher's exact test was used for testing the null hypothesis of independence. For cross tables larger than 2 × 2, pairwise Fisher's tests and p-value adjustment via Benjamini-Hochberg procedure were performed as post-hoc analysis. 
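The post-hoc testing strategy described above (pairwise Fisher's exact tests with Benjamini-Hochberg p-value adjustment) was carried out in R; the short Python sketch below only illustrates the same logic for readers who want to reproduce it outside R. The grade-group labels and the 2 × 2 counts are placeholders, not the study data.

```python
# Minimal sketch of the post-hoc strategy described above: pairwise Fisher's
# exact tests on 2 x 2 tables followed by Benjamini-Hochberg adjustment.
# The counts below are illustrative placeholders, not the study data.
from itertools import combinations

from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# Hypothetical (positive, negative) marker counts per Gleason grade group.
counts = {
    "GG1": (2, 90),
    "GG2": (5, 85),
    "GG3": (10, 100),
    "GG4": (15, 95),
    "GG5": (6, 12),
}

pairs, pvals = [], []
for g1, g2 in combinations(counts, 2):
    table = [list(counts[g1]), list(counts[g2])]
    _, p = fisher_exact(table)  # two-sided by default
    pairs.append((g1, g2))
    pvals.append(p)

# Benjamini-Hochberg correction across all pairwise comparisons.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

for (g1, g2), p, pa, r in zip(pairs, pvals, p_adj, reject):
    print(f"{g1} vs {g2}: p = {p:.4f}, BH-adjusted p = {pa:.4f}, significant: {r}")
```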
Regarding Gleason score, tumors were split up into the five distinct grade groups according to the World Health Organization (grade group 1: n = 96; grade group 2: n = 98; grade group 3: n = 124; grade group 4: n = 123; grade group 5: n = 20). Results A total of 397 (86.1%) tumors were analyzable for THSD7A-IHC. Sixty-four cases were not analyzable due to a lack of tissue in the TMA spot or due to a lack of unequivocal tumor tissue. A total of 41 (10.3%) tumors showed at least weak positivity with mainly cytoplasmic staining but also membranous staining in some cases. Less than 5% of non-tumor prostate tissue revealed very weak cytoplasmic THSD7A expression. Representative images are shown in Figure 1. THSD7A positivity was associated with advanced tumor stage (p < 0.001), positive nodal status (p < 0.001) and with a high Gleason score (p < 0.001). The results are shown in detail in Table 1. A total of 361 (78.3%) tumors were analyzable for FAK-IHC. One hundred cases were not analyzable due to a lack of tissue in the TMA spot or due to a lack of unequivocal tumor tissue. A total of 222 (61.4%) tumors showed low expression and a total of 139 (38.6%) tumors showed high expression with a cytoplasmic staining pattern, respectively. Non-tumor prostate tissue revealed low cytoplasmic FAK expression. Representative images are shown in Figure 2. Low FAK expression was associated with advanced tumor stage (p = 0.007) and positive nodal status (p = 0.005). While there seemed to be a significant association of FAK expression with the Gleason score (p = 0.002), the post-hoc analysis revealed a significant difference only between groups 2 and 3. The results are shown in detail in Table 2. A total of 345 (74.8%) tumors were analyzable for FAK-IHC as well as for THSD7A-IHC. THSD7A positivity was significantly associated with high FAK expression (p = 0.003). The results are shown in detail in Table 3. Figure 3 shows a tumor with high FAK expression and negativity for THSD7A. Figure 4 shows a tumor with high FAK expression and positivity for THSD7A. Discussion Prostate cancer is one of the most common malignancies. However, the vast majority of patients with diagnoses of prostate cancer will not die from the disease.
Due to the fact that there are a wide range of treatment options for prostate cancer and that some of them have concomitant side effects, it is desirable to find novel prognostic and predictive (molecular) markers. Several studies indicate that THSD7A might play a role at least in the prognosis of different tumor types [5,13,[15][16][17]23]. FAK is believed to play an important role in prostate cancer and is discussed as a potential therapeutic target, especially in advanced stages [34,[41][42][43][44][45]. Moreover, there is evidence that FAK-dependent signaling pathways might be affected by THSD7A [7,8]. For this reason, we wanted to examine whether THSD7A expression status has an impact on the expression level of FAK in its unphosphorylated form. In our analysis, as one could expect from previous investigations, THSD7A positivity was associated with adverse pathological features. Surprisingly, low FAK expression was associated with an advanced tumor stage and nodal metastasis. We did not expect this correlation, given the large number of studies which describe FAK overexpression in several tumor types and its potential role in tumor progression and metastasis. However, as FAK is a protein tyrosine kinase, it exists in a unphosphorylated/inactivated form and a phosphorylated/activated form. Most of the studies reporting on the above-mentioned correlation dealt with the phosphorylated/activated form of FAK. Though FAK is an established tumor marker and a potential therapeutic target, few data can be found on FAK's status in prostate cancer regarding its expression quantity, and the reported results are ambiguous. Rovin et al. [37] did not find an association of FAK expression or staining intensity with the tumor grade or tumor stage using immunohistochemistry when investigating human prostate specimens. Tremblay et al. [35] in contrast stated that an increase in FAK mRNA and protein correlates with progression and invasion in prostate cancer. However, this group investigated a rather small number (n < 100) of samples. Furthermore, the samples included cell lines and human tissue obtained from patients undergoing transurethral resection or from autopsies. Slack JK et al. [36] also report that an increased metastatic potential in prostate cancer correlates with increased FAK expression, though this assumption was solely a result of cell line analysis. Since there is evidence that FAK-dependent signaling pathways might be affected by THSD7A, we first focused on FAK in its unphosphorylated form. It is all the more impressive that THSD7A positivity was significantly associated with high FAK expression, given the inverse correlation of THSD7A and FAK expression levels with common pathological features. To our knowledge, we are the first to show that THSD7A positivity is associated with high FAK expression in prostate cancer. There is evidence that FAK might be activated by THSD7A. Our findings show that FAK expression levels might be raised by THSD7A and may be proof of the actual involvement of THSD7A in FAK-dependent signaling pathways. This is of special importance because THSD7A, as a membrane-associated protein, might serve as a putative therapeutic target in cancer therapy. The limitations of this study were that we only used immunohistochemistry to determine expression levels. For that reason, we do not have reliable information on what caused the determined expression levels of the investigated markers (for example, genetic alterations). 
We examined the expression level of FAK in its unphosphorylated form. Finally, it is not possible to make a valid statement whether THSD7A effectively raises FAK expression levels or plays a role in the activation/phosphorylation of FAK. Further studies are necessary to clarify the described connection. Conclusions To our knowledge, we are the first to show that THSD7A positivity is associated with high FAK expression in prostate cancer. THSD7A is a membrane-associated protein. Given the potential involvement of THSD7A in FAK-dependent signaling pathways, THSD7A should be discussed as a therapeutic target in prostate cancer. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The datasets used and analyzed in this paper are available from the corresponding author on reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
2023-01-12T16:39:08.724Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "bdbd1c9901437e175a5db0106df6fe9d8790889e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4418/13/2/221/pdf?version=1673077569", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "05ebf67d7bdfa554c101242e6b5c2e53f45acd0b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
8650107
pes2o/s2orc
v3-fos-license
Enzyme systems of detoxication – overview of recent approaches Enzyme systems of detoxication — overview of recent approaches The most important enzymes of detoxication are cytochromes P450 of Phase I and UDPglucuronosyltransferases of Phase II of drug metabolism. The conventional division of drug, or xenobiotic, metabolism to two phases has survived almost fifty years and although not perfect, still is rather informative and practical. The recent addition of the next, third phase to the first two (Phase I as yielding a molecule with free functional groups ready for conjugation with another molecule or its part in Phase II) stressed the importance of drug transport across the membranes even if it is not a pure metabolic process (i.e. a process changing the polarity and structure of a compound). T he most important enzymes of detoxication are cytochromes P450 of Phase I and UDPglucuronosyltransferases of Phase II of drug metabolism. The conventional division of drug, or xenobiotic, metabolism to two phases has survived almost fifty years and although not perfect, still is rather informative and practical. The recent addition of the next, third phase to the first two (Phase I as yielding a molecule with free functional groups ready for conjugation with another molecule or its part in Phase II) stressed the importance of drug transport across the membranes even if it is not a pure metabolic process (i.e. a process changing the polarity and structure of a compound). The importance of testing the toxicity of newly invented drugs and drug candidates is also given by the simple fact that it is the toxicity and unwanted pharmacokinetics incl. interactions which is the reason for declination of one half of new or promising compounds. All other reasons as low efficacy, adverse effects, marketing and commercial reasons, and other reasons are responsible for the second half of the rejections ( Van de Waterbeemd & Gifford, 2003). According to the recent view of the regulation authorities, the in vitro testing of drug metabolism of any new chemical entity should be realized together with an evaluation of its interaction with the system of drug transporters, mainly with the ABC membrane based protein pumps which often decide on the drug bioavailability. The majority of known cases of drug toxicity are however ascribed to reactions in which the catalytic processes involving cytochromes P450 are involved. The death toll caused by malignant arrhythmias due to increased level of terfenadine or astemizole is well documented and is a part of each modern textbook of pharmacology and toxicology. Also, the recent case of mibefradil which has been withdrawn from the market by the company two years after successful introduction is also well known. In all cases, increased levels of the drug were caused by interactions with other drugs, competing for the same form of cytochrome P450. Cytochromes P450 are also known for their two-sided behavior often compared with Dr. Jekyll and Mr. Hyde of the R. L. Stevenson´s novel: The same form is responsible for detoxication processess as well as by activation of a carcinogen or a toxicant, in principle, by the same type of chemical reaction, i.e. by activation of molecular oxygen. It is of course not the fault of the cytochromes P450, but our own -as the enzyme is simply performing the same role independently on our wish. 
The ways and approaches used for in vitro studies of processes in which cytochromes P450 take part can be divided according to the degree of organization of the system studied. From the most simple to the most complex, the first system should comprise only the molecules of the enzyme with the necessary components assuring the proper assembly and function. Reconstituted Systems, Bactosomes et al. These so-called "reconstituted systems" with cytochromes P450 should at first include the studied form of cytochrome P450, the NADPH:cytochrome P450 oxidoreductase, and NADPH as the source of electrons (or a system generating the NADPH, i.e. NADP with isocitrate and isocitrate dehydrogenase, or alternatively NADP with glucose-6-phosphate together with glucose-6-phosphate dehydrogenase). Also, a phospholipid is necessary to form a moiety similar to the membrane of the endoplasmic reticulum. Similarly, reconstituted systems are prepared with specified forms of UDP-glucuronosyltransferase, or with sulfotransferase or other enzymes, to study the second phase of xenobiotic metabolism (Shimada & Yamazaki, 1998; Green & Tephly, 1998). An alternative to these reconstituted systems is the use of membrane preparations of mostly bacterial origin expressing only a single form of cytochrome P450, but together with the NADPH:cytochrome P450 oxidoreductase supplying the necessary electrons. As with the reconstituted systems, a surplus of NADPH or an NADPH-generating system must be added. An advantage of these recombinant systems is certainly the fact that the enzymes are embedded in the membrane and that no addition of a phospholipid, nor preparation of a vesicular system by ultrasound, is needed (see e.g. the web sites www.cypex.co.uk (bactosomes), www.bdbiosciences.com (supersomes), www.invitrogen.com (baculosomes)). Studies with reconstituted systems and with membranous systems containing a single form of cytochrome P450 are complex, as the composition and even the mode of preparation (e.g. the sequence of components added) of the reaction mixture can significantly affect the efficacy of the system. On the other hand, these systems are the only way to confirm that a particular, selected enzyme is involved in the biotransformation of the substance studied and that it forms the desired products (either more or less toxic or with unchanged toxicity). Microsomes Microsomes, i.e. the microsomal fraction of a cell homogenate (most frequently liver microsomes formed from hepatocytes), are artificial systems formed during cell disruption and obtained by differential centrifugation (Schenkman & Jansson, 1998). The most important point is that the "microsomes" maintain the orientation of the endoplasmic reticulum from which they are formed. Studies with microsomes need addition of NADPH or of an NADPH-generating system, as the amount of this coenzyme in the microsomal preparation is quickly depleted during the reaction. Liver microsomal enzymes are not only cytochromes P450, but also e.g. flavin monooxygenases, xanthine oxidase, quinone oxidoreductase, as well as several enzymes of the conjugation phase of xenobiotic metabolism such as UDP-glucuronosyltransferases (families UGT1A, UGT2B) and others. Hence, the reaction performed in this system yields products of several enzymes and the analysis is more complex.
Hence, an inhibition of specific activities of selected enzymes should be used to find the enzyme system or a particular form of enzyme (e.g. of the cytochrome P450) which is involved in an interaction with the compound studied. This way, the "suspected" enzyme or enzyme form can be selected. Also, a reverse approach can be used: when a metabolite is formed, the addition of a specific inhibitor of a particular enzyme can influence its formation (Jančová et al., 2007). The microsomes are nevertheless still the most widely used system in studies of xenobiotic metabolism and toxicity. Hepatocytes Cellular systems, such as liver hepatocytes, are more and more frequently used. The commercial availability of hepatocytes from defined sources makes their use rather easy. In this respect, freshly thawed cells used according to the supplier's recommendation are the best solution for laboratories which do not have access to a cell culture laboratory. Alternatively, a freshly prepared suspension of hepatocytes can be used for a limited time (not exceeding two hours) after disruption of the organ. Hepatocytes themselves represent an even more complex system than microsomes, one which, however, mimics well the situation in the organism (Ferrini et al., 1998). Most of the prime selection and pilot toxicological studies usually stop at this degree of complexity. For human use, a toxicity study on a model experimental animal is still necessary. The realization of such an experiment and the selection of an appropriate model are however beyond the scope of this introduction.
2019-03-22T16:12:01.651Z
2008-06-01T00:00:00.000
{ "year": 2008, "sha1": "9fb0d503e5bbbf7c6aa2862f2eb32254f2b62ebb", "oa_license": "CCBY", "oa_url": "https://content.sciendo.com/downloadpdf/journals/intox/1/1/article-p6.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "26f8cdd14a3922357010da46cf102629f1c2bdb1", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
268360884
pes2o/s2orc
v3-fos-license
Autozygome-guided exome-first study in a consanguineous cohort with early-onset retinal disease uncovers an isolated RIMS2 phenotype and a retina-enriched RIMS2 isoform Leber congenital amaurosis (LCA) and early-onset retinal degeneration (EORD) are inherited retinal diseases (IRD) characterized by early-onset vision impairment. Herein, we studied 15 Saudi families by whole exome sequencing (WES) and run-of-homozygosity (ROH) detection via AutoMap in 12/15 consanguineous families. This revealed (likely) pathogenic variants in 11/15 families (73%). A potential founder variant was found in RPGRIP1. Homozygous pathogenic variants were identified in known IRD genes (ATF6, CRB1, CABP4, RDH12, RIMS2, RPGRIP1, SPATA7). We established genotype-driven clinical reclassifications for ATF6, CABP4, and RIMS2. Specifically, we observed isolated IRD in the individual with the novel RIMS2 variant, and we found a retina-enriched RIMS2 isoform conserved but not annotated in mouse. The latter illustrates potential different phenotypic consequences of pathogenic variants depending on the particular tissue/cell-type specific isoforms they affect. Lastly, a compound heterozygous genotype in GUCY2D in one non-consanguineous family was demonstrated, and homozygous variants in novel candidate genes ATG2B and RUFY3 were found in the two remaining consanguineous families. Reporting these genes will allow them to be validated in other IRD cohorts. Finally, the missing heritability of the two unsolved IRD cases may be attributed to variants in non-coding regions or structural variants that remained undetected, warranting future WGS studies. is not explained after routine clinical testing of all known genes. 1,2 The use of whole exome sequencing (WES) beyond clinical IRD gene panels makes it possible to reveal new gene-disease associations. 3 The latter, however, is hampered by sporadic cases with only one affected individual or by lack of replication of the findings in a second unrelated family. For these reasons, genomic studies in consanguineous families have taken advantage of runs of homozygosity (ROH) in the search for pathogenic variants causing disease, 4,5 which has accelerated the discovery of novel candidate genes. 6 In the field of IRD, several studies have identified novel disease genes in unsolved families using this approach, for instance from Saudi Arabia where a high degree of consanguinity is observed. 3,7,8 In these studies, autozygosity mapping has been performed using high-density SNP arrays and different algorithms developed for WES data. 6,9 A more recent high-performance homozygosity mapping tool is AutoMap, which uses WES or WGS data as input to generate ROH data of high specificity and sensitivity. 9 Another advantage of the use of WES in the study of heterogeneous diseases such as IRD is the simultaneous analysis of genes associated with non-syndromic and syndromic IRD, including overlapping groups of disease varying from photoreceptor degeneration to involvement of extra-ocular systems or organs. 10 Examples are the most severe forms of IRD, namely Leber Congenital Amaurosis (LCA) and early-onset retinal degeneration (EORD), characterized by a profound visual impairment or blindness from birth or before the age of 5 years, respectively, with an estimated prevalence of 1 in 80 000 children worldwide. 11 At least 25 genes have been associated with LCA including RPE65, being a target for FDA-approved gene therapy.
12To add to this complexity, also syndromic conditions such as Senior-Løken or Joubert syndrome can display LCA as part of the syndrome. 13re we present an exome-first study of 15 LCA/EORD families from a Saudi cohort, 12 of which have reported consanguinity.Apart from causative variants in known LCA genes and clinical reclassifications including non-syndromic RIMS2-IRD, we identified a retinaenriched RIMS2 isoform and propose two novel candidate IRD genes. | Subjects and samples Nineteen patients (8 females and 11 males) from 15 IRD families, ranging between 2 and 9 years, were recruited for this study.Fourteen families were recruited from Dhahran Eye Specialist Hospital (DESH), a tertiary care ophthalmic hospital in the eastern province of Saudi-Arabia in addition to one family that was recruited from the ophthalmic genetics clinic at Department of Ophthalmology, College of Medicine, King Saud University (KSU), Riyadh, Saudi Arabia.Twelve of these families originate from this eastern province, while three of them come from the central region of Riyadh.A detailed report including family history and pedigree assessments was obtained from all the families.The study received approval from KSU Institutional Review Board, IRB Project NO.E-23-7817.Written informed consent was obtained from the parents of all participating children before inclusion in this study. | Genomic testing Whole exome sequencing was performed in the 15 index cases of each family. WES data analysis was performed using our in-house developed analysis platform Seqplorer, a graphical web interface that executes SQLite Gemini queries on an underlying database through straightforward dropdown menus and presents the results in a clear manner (https://github.ugent.be/cmgg/seqplorer),and evaluating gnomAD v4.0 population frequencies and in silico missense predictions (REVEL, 14 CADD, 15 MetaRNN 16 ) and splicing predictions (SpliceAI 17 ) in 14 and 1 index cases, respectively.A customized panel comprising 289 IRD genes (version 5 of the in-house RetNet panel, available at https://www.cmgg.be/assets/bestanden/GENPANEL-RETNET.pdf) was used for variant filtering.Only variants with a minor allele frequency (MAF) <0.005 were selected for the analysis.Splicing variants were further analyzed using Alamut Visual Plus Software (v.1.4)(SOPHiA GENETICS, Lausanne, Switzerland).Copy number variant (CNV) detection was performed using the ExomeDepth algorithm. 18l candidate (likely) pathogenic variants were confirmed by Sanger sequencing using the BigDye Terminator v.3.1 kit (Life Technologies) and underwent segregation analysis in all available family members. Variants were classified according to the criteria of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG/AMP). 19 | Run of homozygosity (ROH) detection and variant prioritization All 15 index cases underwent homozygosity mapping using AutoMap version 1.2 9 via the command-line package and using default parameters and hg38 as reference genome.The files generated included the predicted ROHs which were used afterwards for variant prioritization in consanguineous cases (n = 12). 
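Restricting the search to rare homozygous variants that fall inside the AutoMap ROHs is essentially an interval-filtering step. The sketch below illustrates that prioritization logic only; the file names, column names and genotype encoding are assumptions made for the example and do not correspond to the actual Seqplorer/AutoMap outputs.

```python
# Illustrative prioritization step: keep rare homozygous variants that fall
# inside the predicted runs of homozygosity (ROH). File and column names are
# assumptions for this sketch, not the real pipeline output.
import pandas as pd

variants = pd.read_csv("index_case_variants.tsv", sep="\t")  # chrom, pos, gene, gnomad_af, genotype
rohs = pd.read_csv("index_case_automap_roh.tsv", sep="\t")   # chrom, start, end

MAX_AF = 0.005  # minor allele frequency threshold used in the study


def in_roh(row, roh_df):
    """True if the variant position lies within any ROH on the same chromosome."""
    same_chrom = roh_df[roh_df["chrom"] == row["chrom"]]
    return bool(((same_chrom["start"] <= row["pos"]) & (row["pos"] <= same_chrom["end"])).any())


candidates = variants[
    (variants["gnomad_af"] < MAX_AF)
    & (variants["genotype"] == "hom")
    & variants.apply(in_roh, axis=1, roh_df=rohs)
]

print(candidates[["chrom", "pos", "gene", "gnomad_af"]])
```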
| Haplotype analysis For two cases sharing the same RPGRIP1 variant, haplotype analysis using the WES data was performed.Homozygous alternate and heterozygous variants located in the ROH involving RPGRIP1 were retained for further analysis and compared between the two index cases.The maximal shared region between both cases was calculated using the genomic coordinates of the two closest variants to the mutation in which the genotype of both cases was different. | In silico analyses of novel candidate IRD genes For the two unsolved consanguineous cases additional analysis was performed to identify potential candidate IRD genes.Variants located in the largest ROHs and with a MAF < 0.005 were selected and the associated genes were evaluated thoroughly through a literature search and various in silico tools for missense and splicing predictions described above.Tissue and single-cell expression of candidate genes was investigated using the Human Protein Atlas (HPA) (https://www.proteinatlas.org/) and retinal single-cell datasets such as Spectacle 20 and the IOB Retina Atlas. 21nctional studies in animal models were assessed in the Mouse Genome Database (MGD; http://www.informatics.jax.org). 22Only variants found in genes for which the retina was found to have a top ranked expression level in retina, according to the consensus dataset of HPA, consisting of normalized expression levels (TPM) for 55 tissues, with reported functional evidence and/or segregating with the disease, were proposed as potential causative variants. | Transcriptional study of retina-enriched RIMS2 isoform Paired-end FASTQ files (GSE115828) with transcriptome data derived from postmortem retina samples (Ratnapriya et al.) were retrieved. 23ly samples derived from donor retinas showing no features of agerelated macular degeneration were evaluated (n = 102).Transcripts were quantified through pseudoalignment by Kallisto (v.0.46.1) 24 using default parameters.Maximum likelihood expression estimates for all annotated (Ensembl human release 106) RIMS2 isoforms were retrieved in transcripts per million (TPM).To further assess the retinaspecificity of these isoforms, we inspected the whole locus using an integration of multiple publicly available multi-omics datasets derived from human retina (Table S1).Potential functional conservation in mouse was evaluated based on PhyloP conservation scores and mining of transcriptomic and epigenomic datasets generated from murine photoreceptors. | Variants in known IRD genes underlie the clinical diagnosis in 73% of the cohort WES in the index cases of 15 IRD families, followed by AutoMap analysis in all patients of consanguineous origin (12/15 families), revealed likely causative variants in 11 families (Table 1).This corresponds to a global detection rate of 73% (11/15) in this cohort.Most of the variants met ACMG criteria for (likely) pathogenicity except two variants classified as variants of uncertain significance (VUS) (Table 1). After investigating IRD genes located in the ROHs in the 12 consanguineous families, 8 homozygous variants were found in 10/12 families (83%).The total size of identified ROHs per patient ranged from 79.73 to 504.53 Mb (Table 1).Segregation analysis allowed us to confirm the homozygous state of the identified variants in 4 additional affected members from whom a DNA sample was available. 
Three different variants in CRB1 were found in 3 families, one of which c.2969T>C; p.(Leu990Ser) being novel.In 2 families we identified the same RPGRIP1 variant c.1107del; p.(Glu370Asnfs*5), with a common origin as confirmed via haplotype analysis.A maximal shared region of 23 kb between the SNPs rs746359185 and rs3748357 was observed in the WES data.The other homozygous causal variants were identified in the ATF6, CABP4, RDH12, SPATA7, and RIMS2 genes (Table 1). In the remaining 3 non-consanguineous families we prioritized rare variants located in known IRD genes.In one family, we identified two variants in the GUCY2D gene in compound heterozygous state, for which c.3053T>C; p.(Ile1018Thr) variant is reported here for the first time.For one of the two unsolved non-consanguineous families, the total size of ROH was 158.2 Mb, suggestive of relatedness.However, no potential candidate variants were found within those regions.upstream (Figure 2E).This first exon, despite not being annotated, is syntenically conserved in mouse and exhibits retinal, particularly photoreceptor expression, thus suggesting functional conservation (Figure S1).We compared this retina-enriched isoform with the MANE isoform (ENST00000696799.1,RefSeq NM_001348484) and the isoform displayed in HGMD (ENST00000507740.6,Refseq NM_014677.5)(Figure 3 and Table S2).As shown in Figure 3, the exons expressed in retina correspond to the ones of the retinaenriched isoform.The evidence of functionality of this retina-specific isoform together with the isolated retinal phenotype of this patient point towards a specific effect of the variant in the retina.We analyzed the potential splicing defect of this variant due to its location in the first nucleotide of a donor splice site in the MANE isoform of Ensembl (ENST00000696799.).This variant is absent in reference population databases and in silico predictors classified it as pathogenic (Table 3).Segregation analysis confirmed the heterozygous state of this variant in both parents (Figure 4A).According to the HPA, ATG2B has its highest expression in retina (11.8 TPM), after cerebellum (12.1 TPM), and single-cell expression data revealed a predominance in horizontal, rod and cone photoreceptors and bipolar cells (Figure 4B and 4C).Available phenotypic data of a null/knockout mice in MGD showed an abnormal retinal morphology. | Clinical reevaluation and genotype-driven reclassification In the case of F13, a candidate pathogenic variant was located within the second largest ROH with a size of 49.59 Mb (99.55% homozygosity), within a total ROH content of 332.87 Mb.A homozygous missense variant was identified in RUFY3 (chr4:70773568G>C; c.754G>C; p.(Glu252Gln)), a gene with highest expression in retina (116.7 TPM), and expression in most retinal cell types, according to single-cell RNA-seq data (Figure 4B and 4C).The variant is only present once in heterozygous state in the Turkish Variome 25 database and once in gnomAD v4.0.In silico predictions for this variant pointing towards a benign consequence, CADD excepted (Table 3).Segregation analysis in the parents and the unaffected sister revealed that both parents are heterozygous carriers while the variant is absent in the latter (Figure 4A). 
| DISCUSSION 0][31][32][33] In this study, 15 families of Saudi Arabia were clinically diagnosed with LCA/EORD and 73% (11/15) received a genetic diagnosis after a WES-based analysis.Of these solved families, 90% (10/11) display consanguinity and harbor homozygous pathogenic variants.Consanguineous pedigrees have been widely studied for the discovery of disease-causing variants in rare diseases 6 and in this study, pathogenic variants in IRD genes were found in 83% (10/12) of the pedigrees with reported consanguinity using autozygositydirected WES.Of the three non-consanguineous families however, only one has received a genetic diagnosis based on compound heterozygous variants in the LCA-associated gene GUCY2D.Other genomic studies of Saudi IRD cohorts have reported a diagnostic yield of 63% but when solely considering autosomal recessive cases this percentage steeply increases to 93%, with almost exclusively homozygous pathogenic variants being identified. 8st genes implicated in the current cohort are known LCA/EORD genes.Although the most prevalent LCA gene, CEP290, has not been found to be mutated in this cohort, other variants in genes with an approximate frequency of 3%-10% such as CRB1, RPGRIP1, RDH12 and SPATA7 34 were identified in 7/15 (46%) of our cases.First, three different homozygous CRB1 variants explained the disease in three families, one of which, p.(Leu990Ser), is novel.Second, a previously reported Saudi founder variant in RPGRIP1, p. (Glu370Asnfs*5) 35 was identified in two a priori unrelated families that shared a common ROH region of 23 kb surrounding this variant. Apart from variants in known LCA/EORD genes, variants were found in IRD genes not previously or rarely associated with LCA/EORD.First, in the patient harboring a homozygous ATF6 variant, a clinical reevaluation directed the clinical diagnosis towards achromatopsia.Second, the CABP4 gene has generally been associated with congenital stationary night blindness (CSNB) 36 and congenital cone-rod synaptic disorder (CRSD) 37 although the same variant found here has previously been reported in four siblings with an LCAlike phenotype. 38Although the oculodigital sign that has been observed in F10 has so far not been reported in patients with CABP4associated CRSD, it has been reported by us in RIMS2-associated CRSD, also known as incomplete congenital stationary night blindness (iCSNB). 28So it cannot be excluded that the LCA-like features observed here align with CABP4-CRSD.A new clinical reassessment of F10 was not possible, however. Finally, a novel homozygous variant in the RIMS2 gene, recently associated with CRSD, was identified in a 9-years old child with a non-syndromic form of IRD.Up to date, only five RIMS2 variants have been reported in seven CRSD patients presenting neurodevelopmental disease or abnormal glucose homeostasis apart from their IRD. However, intrafamilial phenotypic variability has been observed in patients with the same genotype. 28Only the eldest individual from a reported family with 3 clinically evaluated affected relatives presented insulin resistance, while the two others, a mother and son displayed neurological features.Given that RIMS2 has a role in the regulation of synaptic membrane exocytosis in pancreas, it is possible that patients with biallelic RIMS2 variants could develop pancreatic symptoms later in life, as previously suggested. 
28Regarding the neurological phenotype, while the mother displayed only autistic behavior, her son was severely affected with marked neurodevelopmental delay at 1 year old. 28Our patient F1 was diagnosed at 3 months of age with LCA due to the presence of nystagmus and photophobia.Ophthalmological pancreas, this variant should always result in a truncated protein, similar to the other five pathogenic RIMS2 variants described before. 28 observed that exons present in the MANE isoform are not expressed in retina while exons expressed in retina are not present in the HGMD isoform, which highlights the importance of isoform-aware variant interpretation. In addition, variants in novel genes-candidate IRD genes-can be causative but are missed when filtering for known IRD genes.Two F I G U R E 3 Overview of the RIMS2 isoforms analyzed in this study.RNA-seq data showed that the isoform expressed in retina corresponds to the retina-enriched isoform (yellow) identified in this study.reported, supporting AGBL5 as candidate IRD gene. 45,46Here, we aimed to identify candidate IRD genes in the two remaining unsolved consanguineous families focusing on variants located in ROH and found homozygous variants in two novel genes with remarkable expression in retina, ATG2B and RUFY3.Apart from its retinal expression, functional evidence of retinal degeneration has been reported in mice lacking Atg2b in the Mouse Genome Informatics (Atg2bem1 (IMPC)J, MGI:6275189, C57BL/6NJ-Atg2bem1(IMPC)J/Mmjax mouse strain).8][49] In the case of RUFY3 there is no reported functional evidence for its role and expression in the retina.Hence, both ATG2B and RUFY3 are proposed as potential IRD candidate genes to be assessed in other IRD cohort to further validate them as IRD genes. Finally, in the two remaining unsolved non-consanguineous families without candidate genetic variants, non-coding variants affecting splicing or SVs might explain part of the unsolved cases, as has been reported in other IRD studies. 50,51 conclusion, we took advantage of performing autozygomedriven WES in a consanguineous IRD cohort to identify causative variants in known LCA/EORD genes (CRB1, GUCY2D, RDH12, RPGRIP1, SPATA7), and to establish genotype-driven clinical reclassifications (ATF6, CABP4, RIMS2).We identified an isolated, non-syndromic RIMS2-IRD and revealed a retina-enriched RIMS2 isoform.Although the relationship of the latter with the non-syndromic RIMS2-IRD is to be proven, it may suggest potential different phenotypic consequences of pathogenic variants depending on the particular tissue/ cell-type specific isoforms they affect.We report two candidate IRD genes, which is important to be able to assess them in other IRD cohorts.Indeed, their validation as IRD gene is needed before they can be included in IRD genetic testing panels in a clinical setting. Finally, despite the high overall diagnostic yield of 73%, the missing heritability of the unsolved IRD cases may be attributed to variants in non-coding regions or SVs that remained undetected by the WES approach, warranting WGS approaches in future studies. 
All affected patients were diagnosed through an ophthalmological examination including best-corrected visual acuity (BCVA) by projected Snellen charts, slit-lamp biomicroscopy, and fundus examination.All patients underwent retinal imaging using a VX-10 Alpha Combination Mydriatic/Non-Mydriatic retinal camera (Kowa American Corporation, California, USA).Color vision was reevaluated in one patient after the genetic diagnosis (ATF6-genotype) using the Ishihara 24-plate.Full-field Electroretinography (ffERG) (Diagnosys LLC, Lowell, Massachusetts, USA) was performed in all patients.Light-and dark-adapted responses were measured in accordance with the International Society for Clinical Electrophysiology of Vision (ISCEV) extended protocol. 14Macular spectral-domain optical-coherence tomography (SD-OCT) (Heidelberg SPECTRALIS, Heidelberg, Baden-Wurttemberg, Germany), SD-OCT of the retinal nerve fiber layer thickness (RNFL-SD-OCT) was performed in one patient. 2. 3 | Genetic and genomic studies 2.3.1 | DNA extraction EDTA blood was drawn from the 19 patients and available family members for DNA extraction and downstream molecular studies using the ReliaPrep Large Volume HT gDNA Isolation System (Promega, Leiden, The Netherlands) according to the manufacturer's protocols. A Abbreviations: ACMG/AMP, American College of Medical Genetics and Genomics and the Association for Molecular Pathology; het, heterozygous state; hom, homozygous state; LP, likely pathogenic; Mb, Megabase; P, pathogenic; ROHs, runs of homozygosity; VUS, variant of uncertain significance. 3. 3 | Novel phenotypic association of RIMS2-IRD and retina-enriched RIMS2 isoform After assessing known IRD genes in the proband of consanguineous pedigree F1 we identified a novel homozygous variant in a canonical splice donor site of RIMS2.As this gene has been implicated in congenital cone-rod synaptic disorder with neurodevelopmental disease and occasional anomalies of glucose homeostasis, 28 a further systemic evaluation was performed in this patient.The pediatric neurological examination revealed no signs of neurodevelopmental problems and normal glucose homeostasis.Ophthalmological examination including color wide-field fundus photographs showed bilateral optic disc pallor with mild vessels attenuation and general RPE mottling without peripheral pigmentary migration in both eyes at age 9 (Figure2).BCVA was 20/100 in both eyes and apart from horizontal nystagmus no other eye abnormalities were observed.Full field ERG revealed absent response in light and dark conditions in both eyes.Macular SD-OCT showed retinal thinning at the level of inner retina, while SD-OCT of the retinal RNFL-SD-OCT showed bilateral temporal RNFL atrophy in both eyes (Figure2).Due to the exceptional non-syndromic and retina-specific phenotype observed in F1 with the identified novel RIMS2 variant, we performed data mining of RIMS2 expression in human.Isoform ENST00000436393.6,not annotated in RefSeq, was found to be retina-specific as supported by exclusive retinal expression of the first exon and binding of retinal specific transcription factors (CRX, OTX2) PhotophobiaNystagmusPupillary reflexStrabismus ODS Fundus appearance examination revealed a phenotype similar to the one reported by Mechaussier et al., including absent pigmentary migration and thinning of the inner retina, although in F1 the ERG was flat instead of electronegative.Given the isolated retinal phenotype of F1, we performed further analyses to assess a potential effect on splicing of the 
identified variant.Interestingly, we observed a unique RIMS2 transcript with a retina-specific first exon, among all the different RIMS2 transcripts.Although this retina-specific exon is also conserved in mouse, the corresponding retina-enriched mouse isoform seems unannotated in public databases.The novel RIMS2 splice variant identified in this study, located in the first donor site of exon 9, 10, or 14 of the HGMD, retina-enriched or MANE isoforms, respectively, is predicted to induce skipping of those exons, resulting in truncated proteins for all isoforms.Unless tissue-specific or cryptic splice sites would be used in these and other RIMS2 isoforms expressed in the other tissues associated with RIMS2-pathogenicity, such as brain and F I G U R E 1 Pedigree and fundus image of the patient homozygous for the novel ATF6 c.493_494del; p.(Leu165Aspfs*64) variant identified in this study.(A) Pedigree of F9. (B) Fundus (left eye) of 8-year-old girl (F9), displaying retinal vessels attenuation and welldefined, oval-shaped retinal pigment epithelial defects at the foveal area.V1, variant allele; wt, wild-type allele.studies of Saudi cohorts, Abu-Safieh et al. and Patel et al., proposed several candidate IRD genes. 3,8In the first study, 6 genes were presented as candidate IRD genes, and since their publication in 2013, only additional IRD cases with pathogenic variants in C21orf2, KIAA1549, and ACBD5 have been reported. 9-44This is similar to the findings of Patel et al., after which only AGBL5 variants have been F I G U R E 2 Novel RIMS2 variant in a non-syndromic patient and a retina-enriched RIMS2 isoform.(A) Pedigree of family F1.(B) Color widefield fundus photograph.(C) Macular spectral-domain optical-coherence tomography (SD-OCT).(D) SD-OCT of the retinal nerve fiber layer thickness (RNFL-SD-OCT).Only images of the right eye are shown.(E) We identified a retina-enriched RIMS2 isoform: the first blue highlight corresponds to the transcription start site (TSS) of the MANE RIMS2 isoform ENST00000696799.1.We observed that the first exon of RIMS2 isoform ENST00000436393.6 is highly conserved (second blue highlight) and expressed in retina, but not in other relevant tissues such as brain.Active transcription is supported by bulk RNA-seq, Nuc-seq and CAGE-seq derived from human retina.The retinal specificity of this isoform is further confirmed by signatures of open chromatin, H3K4me2 and retinal transcription factor binding sites (CRX and OTX2).V, variant allele; wt, wild-type allele.
2024-03-13T06:17:56.434Z
2024-03-11T00:00:00.000
{ "year": 2024, "sha1": "35c2ae71e2ae919f212e159f99217d916bafc213", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cge.14517", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "20fd4e28101c6ab39041c5b3bd7d1cc3b2d52b99", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
236291632
pes2o/s2orc
v3-fos-license
Rapid Assessment of Anthocyanins Content of Onion Waste through Visible-Near-Short-Wave and Mid-Infrared Spectroscopy Combined with Machine Learning Techniques A sustainable process for valorization of onion waste would need to entail preliminary sorting out of exhausted or suboptimal material as part of decision-making. In the present study, an approach for monitoring red onion skin (OS) phenolic composition was investigated through Visible Near-Short-Wave infrared (VNIR-SWIR) (350–2500 nm) and Fourier-Transform-Mid-Infrared (FT-MIR) (4000–600 cm−1) spectral analyses and Machine-Learning (ML) methods. Our stepwise approach consisted of: (i) chemical analyses to obtain reference values for Total Phenolic Content (TPC) and Total Monomeric Anthocyanin Content (TAC); (ii) spectroscopic analysis and creation of OS spectral libraries; (iii) generation of calibration and validation datasets; (iv) spectral exploratory analysis and regression modeling via several ML algorithms; and (v) model performance evaluation. Among all, the k-nearest neighbors model from 1st derivative VNIR-SWIR spectra at 350–2500 nm resulted promising for the prediction of TAC (R2 = 0.82, RMSE = 0.52 and RPIQ = 3.56). The 2nd derivative FT-MIR spectral fingerprint among 600–900 and 1500–1600 cm−1 proved more informative about the inherent phenolic composition of OS. Overall, the diagnostic value and predictive accuracy of our spectral data support the perspective of employing non-destructive spectroscopic tools in real-time quality control of onion waste. Introduction Huge amounts of onion (Allium cepa L) waste, consisting mainly of the skin and inedible outer scales of the bulb, are generated throughout their supply chain from the farm to retail stores and the households. In 2000, more than 450 000 tonnes of onion solid waste (OSW) were produced in Europe [1]; the tonnage is expected to be much higher today with increasing production [2]. OSW can be considered as an environmental problem because it is not suitable for use as organic fertilizer due to the rapid development of phytopathogenic agents, or as a fodder because of its aroma [3]. A possible solution could be the development of a sustainable process to convert this food waste into a raw material for the food industry [1,4]. Regardless of the season, cultivar, or ripening stage, OSW can be a potential source of fibers, fructooligosaccharides, the alk(en)yl cysteine sulfoxides and certain health-promoting phenolic compounds, especially flavonoids [3]. Onion flavonoids are found especially in the skin and outer layers [5], mainly in the form of quercetin aglycone. At least two major types (including quercetin-2 of 20 3,4'-O-diglucoside (3,4'-Qdg) and quercetin-4'-O-glucoside (4'-Qmg) [3]) conjugate with glucose. OSW from red-skinned onions is also rich in anthocyanins [6]. Even though anthocyanins comprise a small percentage of onion bulb flavonoids, they are heavily concentrated in the skin and in a single layer of cells in the epidermal tissue, mainly in the form of cyanidin glucosides, esterified with malonic acid [6]. Red onion dry outer layers could, therefore, be a source of natural colorants that can be extracted with green extraction techniques and used as a replacement of synthetic counterparts like carmine [7]. Several novel technological solutions have been proposed to recover these valuable ingredients from OSW [8]. 
Many studies focus on improving the incorporation of the extracts into novel, biofunctional food products or in the formulation of food supplements [9,10]. The overall cost/sustainability of the biorefinery processes depend on various factors [1,11] that must comply with rational strategies for waste management, such as stabilization and quality. Standards for preliminary sorting to exclude exhausted or sub-optimal waste would be of great value. Onsite diagnostic assessment of the onion waste content in phenolic compounds is quite challenging. Complexity of supply chains, multi-scaled production, and heterogeneity of the waste composition hinder large-scale operational investigations. Electrochemical sensors are already used in the food industry as sensitive tools for monitoring polyphenol content in certain commodities [12]. Studies show that non-destructive spectroscopic techniques in the visible-near-short-wave and mid-infrared regions combined with powerful chemometric methods may offer cost-effective, rapid, and versatile tools for monitoring the chemical composition of foods. In this context, the use of VNIR-SWIR and FT-MIR spectrometers at all relevant stages across the onion supply chains along with implementation for quality control of thousands of OSW samples generated would inform decision-making about further waste management processes (re-use, re-cycle, valorization etc.). Whether such technology is mature enough for application to routine analysis of OSW is open to question. Artificial intelligence through machine learning (ML) algorithms has revolutionized the predictive performance of current chemometric methods used in the food sector [13]. In the case of onions, a partial least square regression (PLSR) model for the assessment of total phenolic content and total antioxidant activity of phenolic-rich extracts of onion bulbs has been reported to fit well with data extracted from FT-IR spectral features [14]. However, systematic exploitation of spectral analysis and ML algorithms in OSW lags greatly behind. Some years ago, Vincke et al. [15] utilized NIR spectroscopy along with a Partial Least-Squares Discriminant Analysis (PLS-DA) to automatically sort different parts of onion bulbs produced during specific industrial processes. At that time, Wang and Gitaitis [16] highlighted that light-scattering properties of different onion parts in the VNIR region may be further exploited for non-destructive inspection of diseased onions, but they did not employ chemometric tools in their investigation. The overarching objective of this study was to examine whether chemometric modelling of VNIR-SWIR spectral data of red OS powder, according to their actual content in monomeric anthocyanins, can feasibly be used for predictive purposes. Reference Ultraviolet-visible (UV-Vis) based chemical assays were employed as a first step to assess the content in total phenols and monomeric anthocyanins. Having specified how the spectral signatures in the VNIR-SWIR were to be recorded under standard acquisition protocols, a series of state-of-the-art ML algorithms for regression analysis were deployed using the whole or sub-regions of the VNIR-SWIR spectra as variable inputs. Whether the diagnostic spectral region for anthocyanins might be extended to the mid-infrared spectrum by employing FT-MIR spectroscopy (in attenuated total reflectance (ATR) mode) was also investigated. 
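To make the regression-modeling step concrete, the sketch below mirrors one of the configurations evaluated later in the paper: Savitzky-Golay first-derivative preprocessing of the VNIR-SWIR spectra followed by a k-nearest-neighbors regressor, scored with R2, RMSE and RPIQ. The array shapes, file names, derivative window and value of k are illustrative assumptions rather than the tuned settings of the study.

```python
# Sketch of one evaluated configuration: Savitzky-Golay 1st-derivative
# preprocessing of VNIR-SWIR reflectance spectra, k-NN regression of TAC,
# and the R2 / RMSE / RPIQ metrics. Shapes, window length and k are assumptions.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

# X: reflectance spectra (n_samples x n_wavelengths, 350-2500 nm); y: TAC reference values.
X = np.load("os_vnir_swir_spectra.npy")
y = np.load("os_tac_reference.npy")

# First derivative along the wavelength axis (window and polynomial order are assumptions).
X_d1 = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(X_d1, y, test_size=0.3, random_state=42)

model = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)
y_hat = model.predict(X_te)

rmse = float(np.sqrt(mean_squared_error(y_te, y_hat)))
r2 = r2_score(y_te, y_hat)
iqr = np.percentile(y_te, 75) - np.percentile(y_te, 25)
rpiq = iqr / rmse  # ratio of performance to interquartile range

print(f"R2 = {r2:.2f}, RMSE = {rmse:.2f}, RPIQ = {rpiq:.2f}")
```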
In that case, an unsupervised ML-based exploratory approach was used to search for interpretable patterns among the OS spectra. The focus of this study was to document the steps, driven by the current findings, to turn infrared spectroscopy into an operational tool for the assessment of the anthocyanins present in OSW. Materials and Methods The methodological approach consists of five discrete steps: (i) chemical analysis to obtain reference values for Total Phenolic Content (TPC) and Total Monomeric Anthocyanin Content (TAC); (ii) spectroscopic analysis (VNIR-SWIR/ATR-FT-MIR), which includes the creation of the dry OS spectral libraries; (iii) generation of calibration and validation datasets; (iv) spectral exploratory analysis and regression modeling of VNIR-SWIR spectra, where several ML algorithms were evaluated to predict the content of anthocyanins; and (v) evaluation of the performance metrics obtained by the ML algorithms. The overall data processing and analysis workflow is illustrated in Figure 1 and detailed descriptions of the different steps are provided in the sections below. Onion Samples (OS) The bulbs of Allium cepa L. were supplied from various local markets in Greece during the autumn-winter season of 2019 to represent three different retail product lines: one originating from the Netherlands (n = 25) and two from major producing regions in Greece (Thiva, Evritania, n = 13). The onion skin (OS) test samples were prepared as follows. The bulbs of the red onions were peeled using a sharp blade to remove the outer dry layers and the apical trimmings, which are considered as waste material. 
Different layers of each sample (dry outer, first inner) were separated. The dry outer and first inner layer skins of each bulb were washed twice in deionized water. The resulting materials were dried at 65 °C for 48 h, ground using a domestic blender (KENWOOD, Havant, UK), powdered in a laboratory mill and then sieved through a 0.5 mm mesh. The material was then mixed to represent 38 test samples of distinct origin. Random combinations of 6 out of the 38 samples were produced (n = 8) to enhance color variance. The inner and outer layers of 15 individual bulbs from the Netherlands batch were also treated separately (n = 26). In total, 72 OS test samples were used in this study. All solvents and analytical standards, such as Folin-Ciocalteu reagent, gallic acid, and sodium carbonate (Na2CO3), were purchased from Sigma-Aldrich Chemie GmbH (Taufkirchen, Germany). Preparation of OS Extract OS powder was mixed with solvent (liquid-to-solid ratio of 10 mL/g), composed of (70% v/v) ethanol in water, at pH = 1. The material was subjected to extraction at 25 °C for 15 min in an Ultrasons-H ultrasonic bath (J.P. Selecta, Barcelona, Spain). Following extraction, the samples were filtered through 0.45 µm nylon membrane filters (BGB, USA). The clear supernatant was stored at −20 °C until used for further analysis. Determination of Total Phenolic Content (TPC) In brief, 30 µL of each dissolved extract was mixed, separately, with 2370 µL of deionized water and 150 µL of undiluted Folin-Ciocalteu reagent. After one minute, 450 µL of Na2CO3 (20%, w/v) was added. The mixture was incubated for 120 min and the absorbance of the resulting mixture was measured spectrophotometrically at 750 nm. Gallic acid was used as a reference standard and the results were expressed as milligram gallic acid equivalents (mg GAE)/g of extract. Determination of Total Monomeric Anthocyanin Content (TAC) TAC was determined according to [17] using the pH-differential method. Briefly, absorbance readings at 510 nm and 700 nm were made after dilution of the extract in buffer solutions with pH values of 1.0 and 4.5, against distilled water. The calculation was based on Equations (1) and (2), respectively: A = (A_λmax − A_700)_pH 1.0 − (A_λmax − A_700)_pH 4.5 (1), where A_λmax is the absorbance of the sample extract at 510 nm and A_700 the absorbance at 700 nm; Total monomeric anthocyanins (mg cyanidin-3-glucoside/L) = (A × M_w × Df × 10^3)/(ε × l) (2), where M_w (molecular weight) = 449.2 g/mole for cyanidin-3-glucoside; Df = dilution factor; l = pathlength in cm; ε = 26,900, the molar extinction coefficient in L × mole−1 × cm−1 for cyanidin-3-glucoside; and 10^3 = conversion of g to mg. The results are expressed as mg cyanidin-3-glucoside per 100 g of onion dry matter. All analyses were performed in triplicate and the median values were calculated. The summary statistics of the chemical analyses are presented in Table 1. In the table below, Q1, Q2, and Q3 denote the quartiles: Q1 corresponds to the lowest 25% of the values, Q2 to the range between 25.1% and 50% (up to the median), and Q3 to the range between 50.1% and 75% (above the median). Spectroscopic Characterization of the Dry OS In this section, we briefly present the methodological steps to develop an onion spectral library that would be used for assessing the total anthocyanin content. The building of a database for OS anthocyanins that utilizes their unique spectral signatures (combinations of infrared bands) in specific spectral regions is described. The VNIR-SWIR measurements of dry red OS powdered samples were performed using a PSR+3500 spectrometer (Spectral Evolution Inc., Lawrence, Massachusetts, USA) operating in the range 350 to 2500 nm. 
The measurements were performed using a contact probe to eliminate the effects of light scattering. Five spectra per powdered sample were recorded and averaged to obtain the corresponding reflectance spectral signatures. A Spectralon® panel with 99% reflectance was used to calibrate the spectrometer before the measurements. Spectral Preprocessing Techniques Widely employed scatter-corrective and spectral-derivatization preprocessing techniques were applied to the VNIR-SWIR dataset to remove irrelevant information. In brief, (i) the reflectance spectra (REF) were converted into (ii) pseudo-absorbance spectra [log10(1/R)] (ABS), and (iii) transformed into a continuum removal method domain (CR). The Standard Normal Variate (SNV) was then applied to both the REF and ABS values, resulting in the (iv) REF-SNV and (v) ABS-SNV datasets, respectively. The Savitzky-Golay method was applied to remove unwanted background noise from the spectra (vi) by calculating the first derivative (SG1), (vii) in that case also combining it with the SNV transformation (SG1-SNV), and (viii) by calculating the second derivative (SG2), using an 11-data-point interval in each case. Lastly, (ix) the detrend (DET) preprocessing method was used before data modelling. An overview of these techniques is presented by Rinnan et al. [18]. In total, nine different spectral datasets were produced. Machine Learning Modeling The Conditioned Latin hypercube method (cLHS) [19] was used to split the onion VNIR-SWIR spectral data into calibration and validation datasets. The cLHS algorithm searches the data based on heuristic rules combined with an annealing schedule and is considered an effective way of replicating the distribution of the variables compared to a random sampling approach. The percentage of onion samples allocated to the calibration dataset was set to 75% of the total dataset (54 out of 72), while the rest (18) were included in the validation set. Each dataset of preprocessed spectra was modelled against the TAC values using the following linear or non-linear regression algorithms: (i) partial least squares regression (PLS); (ii) Random Forest (RF); (iii) Cubist; (iv) elastic net (ENET); (v) k-nearest neighbors (k-NN); and (vi) support vector machines for regression (SVM). For every method, a set of hyperparameters was selected as follows. The classical PLS algorithm [20], widely applied for multiple purposes in spectroscopic analysis, transforms the input factors' matrix into a series of latent variables (LVs) to maximize the covariance between the predictors and the dependent variables. The number of optimum LVs was selected to range from 10 to 30. RF is an ensemble learning method [21] with good performance metrics in various spectroscopy studies. Tuning of its hyperparameters included first the selection of the number of variables that can be sampled in each split of the tree analysis (6, 24) and then the value of the tree parameter (100, 250, 500, 1000, 1500). The rule-based Cubist algorithm [22] reduces a set of rules derived from a decision tree to define a linear regression model. Multiple rule-based models are then combined (committees) and the final predictions are adjusted using known errors on the training set with a small number of neighbors for each unknown sample. Thus, predefined values for the number of committees (1, 10, 50, 100) and of neighbors (0, 1, 5, 9) were selected. 
The ENET method was also evaluated as an extension of linear regression that adds regularization penalties to the loss function during training. Next, the k-NN algorithm was used. This is an instance-based learning algorithm that utilizes a distance metric to the calibration dataset and predicts a testing pattern depending on a preselected number, k, of nearest neighbors. In our study, this value was optimized over a range from 0 to 25 closest neighbors. Lastly, SVM for regression, as introduced by Drucker et al. [23], was evaluated. SVM is a non-parametric technique employing a kernel function to map the initial predictors into a higher dimensional space. In this study, a radial basis function was utilized, while the C parameter values were optimized among (0.001, 0.01, 1, 10) to control the penalization of the residual errors. A grid search on a five-fold cross-validation experiment for each analysis enabled the selection of the optimal hyperparameter values for model consistency. Table A1 in Appendix A shows the optimal hyperparameter values for each ML algorithm. In total, 54 calibration models were produced. In order to assess their performance for the prediction of TAC in dry red OS, the root-mean-square error (RMSE, Equation (3)), the coefficient of determination (R2, Equation (4)), and the Ratio of Performance to Interquartile Range (RPIQ, Equation (5)) values were compared. The equations used were as follows: RMSE = sqrt((1/N) Σ_i (y_i − ŷ_i)^2) (3); R2 = 1 − Σ_i (y_i − ŷ_i)^2 / Σ_i (y_i − ȳ)^2 (4); RPIQ = IQ/RMSE (5); where y_i is the observed value and ŷ_i is the predicted value of sample i, N is the number of observations (Equation (3)), ȳ is the mean of the observed values (Equation (4)), and IQ is the interquartile range (IQ = Q3 − Q1) of the observed values (Equation (5)). Q1 and Q3 denote the first and third quartile, respectively. ATR-FT-MIR Analysis ATR-FT-MIR spectra were acquired using a 6700 IR (Jasco, Essex, UK) spectrometer equipped with a DLaTGS detector and a high-throughput Single Reflection ATR with diamond crystal, complemented by the Spectra Manager software (Jasco, Essex, UK). For each spectrum, eight scans were accumulated in the absorbance mode and recorded at 4 cm−1 resolution, covering a range from 4000 to 600 cm−1. The spectrum was collected against a background obtained with a dry and clean cell and corrected by the ATR correction option of the software. Three spectra per powdered sample were recorded and averaged to obtain the corresponding spectrum before further preprocessing. Spectral artifacts due to noise, baseline offset, and slope or light scattering were removed by the multiplicative signal correction method (MSC) and second-order derivatization with the Savitzky-Golay method (11-data-point interval) [24]. The spectral data were mean-centered and further processed via Principal Component Analysis (PCA). PCA is an unsupervised technique that transforms a set of variables into a new set of composite variables, the principal components (PCs). PCA attempts to simplify the distribution of samples and identify the underlying factors that explain possible patterns of variable and sample correlations. For exploratory purposes, only principal components with eigenvalue > 1.0 were considered useful, according to the Kaiser criterion [25]. Implementation The statistical and regression analyses of the VNIR-SWIR datasets were performed utilizing the R programming language [26] with the caret package [27]. The commercial SIMCA 16.02 software (Umetrics, Sweden) was used for FT-MIR spectral analysis. 
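As a concrete illustration of the workflow above, the following minimal Python sketch strings together an SG1/SNV pre-treatment, a k-NN regression model tuned by grid search, and the metrics of Equations (3)-(5). It is not the authors' code: the study used R with caret and a cLHS calibration/validation split, whereas the synthetic data, the random split, and all names below are hypothetical stand-ins.

```python
# Illustrative sketch only (the paper used R/caret and a cLHS split).
import numpy as np
from scipy.signal import savgol_filter
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

def snv(spectra):
    """Standard Normal Variate: centre and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

def sg1(spectra, window=11, poly=2):
    """Savitzky-Golay first derivative with an 11-point window."""
    return savgol_filter(spectra, window_length=window, polyorder=poly, deriv=1, axis=1)

# Hypothetical data: 72 spectra x 2151 wavelengths (350-2500 nm) with TAC reference values.
rng = np.random.default_rng(0)
X = rng.random((72, 2151))
y = rng.random(72) * 3.8                      # mg cyanidin-3-glucoside per g (synthetic)

X_pre = snv(sg1(X))                           # SG1-SNV pre-treatment
X_cal, X_val, y_cal, y_val = train_test_split(X_pre, y, test_size=18, random_state=0)

knn = GridSearchCV(KNeighborsRegressor(), {"n_neighbors": list(range(1, 26))}, cv=5)
knn.fit(X_cal, y_cal)
y_hat = knn.predict(X_val)

rmse = np.sqrt(np.mean((y_val - y_hat) ** 2))                                   # Eq. (3)
r2 = 1 - np.sum((y_val - y_hat) ** 2) / np.sum((y_val - y_val.mean()) ** 2)     # Eq. (4)
q1, q3 = np.percentile(y_val, [25, 75])
rpiq = (q3 - q1) / rmse                                                          # Eq. (5)
print(rmse, r2, rpiq)
```

With real spectra in place of the random arrays, the same three metrics would be directly comparable to the values reported in the Results section.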
Chemical Analyses Data The phenolic constituents of red onion skin are expected to exist primarily in bound form [5]. Nevertheless, the TPC values of the red OS samples, which represent mainly free soluble forms of phenolic compounds, were found to range between 13 and 79 mg GAE/g. These values fall within the typical ranges for the outer layers of brown-skin onion bulbs that have previously been reported in the literature [5,28], regardless of the geographical origin of the bulb or the extraction method. The soluble phenolic extracts of the OS samples were found to be rich in anthocyanins. In particular, the TAC values varied between 0.13 and 3.82 mg cyanidin per g. This result agrees with the findings reported in [5] and its references. However, it was observed that the samples originating from the Netherlands were far richer in monomeric anthocyanins (114.8-369.1 mg/100 g DW) than those from domestic sources (13.3-146.0 mg/100 g DW). A clear trend relating to geographical origin/retail chain was observed in the reference TAC values but not in the TPC values. An intriguing question is whether the VNIR-SWIR and/or ATR-FT-MIR spectroscopic characterization of the samples would expose the same trend. VNIR-SWIR Exploratory Approach It is accepted that bands at 1415-1512 nm, 1650-1750 nm, and 1955-2035 nm are mainly due to phenolic structure, according to the findings of Dykes et al. [29]. Similarly, in a research study on total anthocyanins in grape juice using NIR spectroscopy, it was found that the spectral range for these phenolic compounds was 1000-1183 nm [30]. Such bands, along with those at around 1450 nm and 1930 nm, corresponding possibly to the O-H stretch and O-H band combination and the H-O-H deformation combination overtones of hydroxyl groups (e.g., due to water or starch) [16,31], were also evident in the near-infrared spectra of the dry OS samples under study. As a general rule, the choice of an optimal preprocessing method depends on the characteristics of the dataset and the goal of the analysis [24]. In our study, the VNIR-SWIR spectra of the OS samples, as the original REF spectral values or as preprocessed spectra, are illustrated in Figure 2. Visual assessment of the spectral signatures revealed no significant variation among the dry onion samples. Application of the various preprocessing methods resulted in new feature spectral spaces by pronouncing different regions in the VNIR-SWIR spectrum and eliminating different effects. SG1 and SG2 emphasize the differences in the visible range and become more prominent towards the SWIR region, possibly because of greater overlapping of the bands. Similarly, the ABS (including also the SNV values) indicated larger variations than the REF at the short-wavelength edge of the spectrum in the visible region (350-750 nm). In a first exploratory approach to identify diagnostic patterns of sample distribution in the VNIR-SWIR, the first derivative of the initial reflectance spectra (SG1) was analyzed via PCA. The analysis resulted in three PCs, the first of which (PC1) accounted for 82.8% of the total variance, the second (PC2) for 10.3% and the third (PC3) for 9.6%. The corresponding two-dimensional scoreplots verified that OS samples from domestic sources tended to be clustered separately from those originating from the Netherlands, mainly because of the higher PC2 score values of the latter (Appendix A, Figure A1). This result is quite promising for further modelling of TAC values given our previous observations. 
No other pattern could be recognized in the sample distribution among the 3-D scoreplot of the PCA model. Performance of the ML Models We first assessed six different ML models in different spectral datasets derived from the various pre-treatments to highlight the impact of the ML techniques in spectroscopic modelling. Overall, the proposed models have a valuable predictive performance (R2 > 0.80 and RPIQ > 3). These findings are further illustrated in Figure 3. The results showed that spectral pre-treatments have increased the performance for most of the ML models, with the exception of the DET technique. Notably, modelling of the VNIR-SWIR onion spectral library with the pre-treatment of SG1-SNV and SG2 allowed more accurate predictions of TAC than other preprocessing techniques. A detailed comparison of the model performance obtained with various preprocessing techniques is also provided in Appendix B (Table A2). We also tested the effectiveness of six ML models by comparing their performance metrics, as shown in Figure 4. In general, better predictive performance was achieved with more complex and supervised algorithms. The k-NN and RF algorithms were found to attain the best performance across all properties. They have enabled more robust predictions (R2 > 0.80, RMSE < 0.53 and RPIQ ≥ 3.50). The results of the various models are reported graphically in Figure 4, in which we can visually compare their performances. The difference with the PLSR algorithm, one of the most commonly applied algorithms in food spectroscopy, is noticeable (R2 = 0.81 and RPIQ = 3.48), the latter having a larger RMSE (0.53). The Cubist and SVM results show lower predictive performance. It should be highlighted that the selection of the ML model for regression analysis affects the prediction potential of VNIR-SWIR spectral data (Figures 3 and 4). 
The fact that k-NN and RF models presented better performance with smaller prediction errors than well-studied models in the domain of food spectroscopy (e.g., PLSR and SVM) may be a result of the efficiency of those algorithms to generate subsets with similar characteristics derived by different rules or the distance of the closest neighborhoods. Moreover, it was clearly demonstrated that the various spectral preprocessing techniques result in complementary information that enhances the predictive performance of the ML models compared to those produced with the raw reflectance recordings. Therefore, smoothing (SG1) and/or normalization of the dataset (SNV) should be prioritized in preprocessing steps. An important aspect of the current study is the interpretability of the underlying models. By visualizing the relative importance of each band across all model-preprocessing combinations, it is possible to recognize those VNIR-SWIR spectral wavelengths that are more prominent in model construction (Figure 5). It is clear that two discrete spectral regions are important; the first one ranges from 400-1000 nm (VNIR), while the second one is in the range 1200-2500 nm (SWIR). Across nearly all models, VNIR has roughly two main spectral regions: one around 620-650 nm and the other at the beginning of the spectrum (380-390 nm), depicting, respectively, the characteristic red and purple color of the onion's layers due to the presence of anthocyanins. The finding that the upper SWIR part at 2100-2300 nm is practically related to aromatic C-H bonds provides valuable information. 
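To indicate how such per-wavelength importances can be derived in practice, a hedged sketch follows. The figures in the paper come from the caret variable-importance output; the snippet below merely illustrates the idea with scikit-learn's permutation importance applied to a fitted k-NN model, using hypothetical data and names.

```python
# Hedged sketch (not the authors' caret-based code): per-wavelength permutation
# importance for a k-NN regressor, aggregated into 50 nm bands for inspection.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
wavelengths = np.arange(350, 2501)             # 350-2500 nm grid (hypothetical)
X = rng.random((72, wavelengths.size))         # hypothetical preprocessed spectra
y = rng.random(72)                             # hypothetical TAC reference values

model = KNeighborsRegressor(n_neighbors=5).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=5, random_state=1)

# Average the per-wavelength importances over 50 nm wide bands.
band_lower = np.arange(350, 2500, 50)
band_mean = np.array([
    result.importances_mean[(wavelengths >= lo) & (wavelengths < lo + 50)].mean()
    for lo in band_lower
])
top_bands = band_lower[np.argsort(band_mean)[::-1][:5]]
print("Most influential ~50 nm bands start at (nm):", top_bands)
```

On real spectra, plotting band_mean against band_lower would reproduce the kind of importance profile summarized in Figure 5.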
Exploring Shorter Diagnostic Regions in the VNIR-SWIR New low-cost spectral devices available in the marketplace have prompted interest among both researchers and end users in exploring the potential of shorter spectral regions for anthocyanin content estimation. However, it is unclear if spectrometers operating solely in the VNIR or the SWIR could provide sufficient prediction accuracy. Therefore, focus was given to the spectral regions between 400-1000 nm and 1350-2500 nm, because they reflect more clearly the diagnostic regions derived from the variable importance analysis (see Figure 5). The k-NN model performance (see Figure 4) was tested in these two shorter spectral ranges. Then, new rounds of modelling analysis were performed on two sub-sets of the SG1-SNV spectral dataset that corresponded to the selected regions. The results are shown in Table 2. It is clear that the accuracy of the VNIR-based prediction at 400-1000 nm (R2 = 0.70, RMSE = 0.66, and RPIQ = 2.80) was better than that of the SWIR-based prediction at 1350-2500 nm (R2 = 0.55, RMSE = 0.75 and RPIQ = 2.49), but not as high as that corresponding to the combined spectral region (full spectrum). Chemometric analysis of the VNIR-SWIR spectra (350 to 2500 nm) resulted in satisfactory and selective predictive performance for the total anthocyanin content. The most important features for this purpose were a series of characteristic bands in the visible region of the spectra, mainly at 550-600 nm (at which these compounds mainly absorb), and in the range close to SWIR (2000-2300 nm) (Figure 5). Future studies could focus on the employment of shorter-range spectroscopic sensors for total anthocyanins in OSW to enable fast, low-cost analyses. This was also proposed recently regarding the evaluation of Micro-Electro-Mechanical Systems (MEMS) spectral sensors for soil properties [32]. To computationally enhance the accuracy of prediction, more advanced chemometric approaches can also be employed: instead of relying solely on the best single model, predictions from single ML models developed using bootstrapped samples, from the proposed ML algorithms developed via genetic stacking algorithms, or even from the various spectral datasets after pre-processing may be combined through novel multi-input deep learning algorithms [33]. Identification of Phenolic-Group Diagnostic Bands in the MIR The original FT-IR spectra and the accompanying spectral transformations (MSC, second derivative) of OS samples are shown in Figure 6. In our study of dry OS powder, the characteristic amide-stretching bands of proteins (1550 and 1650 cm−1) were not clearly evidenced in the FT-IR spectra. This finding is in line with earlier reports [34]. A weak valley at around 1560 cm−1 revealed in the preprocessed spectra signified a possible contribution from nucleic acid bases. Plant cell wall polysaccharides and other types of carbohydrates that are abundant in OS samples (e.g., fructooligosaccharides and pectic oligosaccharides) could be distinguished from some characteristic bands in the region between 1200 and 950 cm−1 [35]. 
In addition, characteristic bands in the region 1800-1500 cm−1 that are often assigned to esterified and non-esterified carboxyl groups of pectin molecules [34,35] were evidenced in the low frequency region. Table 3 provides an overview of the FT-IR spectral bands that were visually observed as peaks in the original spectra (0th order, after MSC) or as corresponding valleys in the 2nd derivative spectra (2nd order), respectively. The original peaks are clearly much better resolved after 2nd order derivatization of the spectra, revealing a number of hidden bands that may carry diagnostic information. This was particularly evidenced in the region below 1000 cm−1 but also between 1400 and 1600 cm−1. Table 3. Major bands shown as peaks in the 0th order and valleys in the 2nd order derivative FT-MIR spectra of dry OS samples and possible assignment. For phenolic compounds, the two aromatic ring-related bands at around 1600 and 1640 cm−1 were distinct in the spectra of dry OS powder [38]. These two bands were more clearly defined than those at 1185 and 965 cm−1 arising possibly from C-O-C and C-OH vibrations of phenols [38]. Given the co-presence of polysaccharides and oligosaccharides in the test sample, straightforward assignment of the signals in the latter region is not possible. In a recent study about the potential of FT-IR and PCA to identify individual classes of phenols like flavonols, anthocyanins, and phenolic acids [39], the spectral bands between 1755 and 1400 cm−1 and between 1000 and 870 cm−1 were highlighted as the most important. Based on our data, we suggest that the stretching vibrations of carboxylic groups at around 1735 cm−1 are more likely assigned to hydroxy-benzoic acid moieties that are present in OS as the major autoxidation products of quercetin. It has been reported that protocatechuic acid and 2-(3,4-dihydroxybenzoyl)-2,4,6-trihydroxybenzofuran-3(2H)-one are formed during storage of the onions; during that period, enzymatic hydrolysis of quercetin glucosides to release the aglycone form proceeds in parallel with quercetin decomposition reactions [40]. Other conjugates formed due to auto-oxidation may also exist in relatively high amounts [41]. To examine further whether these tagged regions of the spectra have diagnostic value related to the total phenolic compound content of OS samples, the spectra of a group of 26 OS samples in total, representing the skin and the 1st inner layer of individual onion bulbs, were acquired and imported into the original dataset. Principal Component Analysis of the data from the 0th and 2nd order derivative spectra in the region 600-1800 cm−1 resulted in 13 and 6 PCs, respectively. These PCs explained 99.2% and 75.7% of the total variance in each case. The analysis of the 0th order data extracted eight PCs with very low eigenvalues (< 1) that explained almost 8% of the total variance. 
This result verifies that a considerable amount of variance in the 0th order data is unique or not systematic and is omitted upon 2nd order derivatization. Both rounds of PCA showed that the two groups of samples could be distinguished on the PC1-PC3 scoreplot (Figure 7) on the basis of the t3 score values. The loading plots of the first and third PCs extracted from each round of PCA are shown in Figure 8. Spectral bands in the lower frequency region, e.g., at 613, 832, 948, and 980 cm−1 (possibly due to the phenolic ring structure) and at 1012-1050 cm−1 (sugars), along with those in the regions 1462-1472 and 1500-1520 cm−1 and at 1734 cm−1, contributed more heavily to the t1-t3 score distribution (p > ±0.6) of these OS samples. Special attention was given to the observed variance between 600 and 1000 cm−1 and 1400-1800 cm−1 because it is expected to reflect more clearly differences in the composition of flavonoids and phenolic acid constituents [39]. New rounds of PCA on spectral data that corresponded only to shorter infrared regions revealed that the abundance of carboxylic acid groups remained a distinctive feature of the dry skin and first inner onion bulb layers (Figure 8). 
Further exclusion of variables between 900 and 1000 cm−1 resulted in similar performance of the PCA model and verified (through the corresponding loading plots) that the observed sample allocation is significantly affected by vibrations beyond that region (e.g., ether bonds in carbohydrates). Moreover, it made it possible to highlight that flavonoid ring-related bands around 600-650 cm−1 and 1500-1560 cm−1 are also important for the observed pattern among OS samples. Closer inspection of the FT-MIR spectral curves after 2nd order derivatization revealed clear differences in the shape of the bands between 600 and 900 cm−1 that might be partially attributed to skeletal vibrations of different flavylium ring substitution patterns. Considering that anthocyanins constitute a minor percentage of total flavonoids in the skin and outer layers of red onions [6] and the fingerprint region of the spectrum is dominated by highly overlapped signals, it is expected that these shorter-range FT-MIR bands do not assist in quantitative analyses. The FT-MIR data in the specific regions are promising for further exploratory evaluation considering mainly the potential for monitoring oxidation phenomena or other major sources of variance in the phenolic composition, but that is beyond the scope of this study. Conclusions Given the heterogeneity of OSW, which affects its chemical composition, the creation of reference databases is the most essential condition for building robust predictive models. Even though a relatively small OS sample set was used in the current study, the variance in the total phenol and total anthocyanin contents of these samples spanned the low-to-high ranges that are typically reported in the literature. In the current study, we showed that VNIR-SWIR and FT-IR spectroscopic techniques could be deployed in routine quality control analyses of onion waste, especially in the evaluation of phenolic composition and more particularly in the assessment of the total anthocyanin content. Chemometric analyses of the data through various machine-learning techniques are indispensable for the identification of diagnostic bands across the visible-near-to-short-wave and mid-infrared regions. Above all, a k-NN model of 1st derivative spectra in the region of 350-2500 nm was the most powerful for the prediction of the monomeric anthocyanin content in dry red OS samples (R2 = 0.82, RMSE = 0.52, and RPIQ = 3.56). The performance of the predictive model remained satisfactory when it was assessed in a shorter, more selective spectral range. This result supports the perspective for the potential uses of low-cost spectroscopic sensors in this field. 
The FT-IR spectral fingerprint was more informative about the inherent quality characteristics of OSW, as it enables structural assignments. Overall, we suggest that non-destructive spectroscopic tools operating in the visible-near-short-wave and mid-infrared regions can be employed in real-time quality control of OSW, provided that the spectral data are of high quality and of well-demonstrated diagnostic value or predictive accuracy with regard to the audit target, e.g., anthocyanin content. Updating the reference onion skin spectral libraries and evaluating the model performance are, thus, in progress.
2021-07-26T00:06:01.530Z
2021-06-09T00:00:00.000
{ "year": 2021, "sha1": "b1b268544195addedaebc89c82773ae9a7d0a7c5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/su13126588", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0f5deee0b43155dbe7ec7d91f5d719897d4b9084", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Materials Science" ] }
118452828
pes2o/s2orc
v3-fos-license
Line graphs and the transplantation method We study isospectrality for mixed Dirichlet-Neumann boundary conditions, and extend the previously derived graph-theoretic formulation of the transplantation method. Led by the theory of Brownian motion, we introduce vertex-colored and edge-colored line graphs that give rise to block diagonal transplantation matrices. In particular, we rephrase the transplantation method in terms of representations of free semigroups, and provide a method for generating adjacency cospectral weighted directed graphs. Introduction Inverse spectral geometry studies the extent to which a geometric object, e.g., a Euclidean domain, is determined by the spectral data of an associated operator, e.g., the eigenvalues of the Laplace operator with suitable boundary conditions. This objective is beautifully summarized by Kac's influential question "Can one hear the shape of a drum?" [Kac66]. Recently, the author [Her14] Provided that ∂M is sufficiently smooth, this operator has discrete spectrum given by an unbounded non-decreasing sequence of non-negative eigenvalues. Using number-theoretic ideas, Sunada [Sun85] developed a celebrated method involving group actions to construct isospectral manifolds, i.e., manifolds whose spectra coincide. It ultimately allowed Gordon et al. [GWW92] to answer Kac's question in the negative. Buser [Bus86] distilled the combinatorial core of Sunada's method into the transplantation method, which involves tiled manifolds that are composed of identical building blocks, e.g., M and M ′ in Figure 1(A). If ϕ is an eigenfunction on M, then its restrictions (ϕ i ) 4 i=1 to the blocks of M are superposed linearly as (Σ 4 i=1 T ij ϕ i ) 4 j=1 on the blocks of M ′ such that the result is an eigenfunction on M ′ , and vice versa. All known pairs of isospectral planar domains with Dirichlet boundary conditions arise in this way [BCDS94], i.e., they are transplantable. Following [Her14], we encode each tiled manifold with mixed Dirichlet-Neumann boundary conditions by an edge-colored graph with signed loops that encode boundary conditions, e.g., G and G ′ in Figure 1(C). By definition, every vertex of an edge-colored loop-signed graph G has one incident edge of each color, either as a link to another vertex or as a signed loop. If G has k edge colors, then it is determined by its k-tuple of adjacency matrices (A c G ) k c=1 , which are diagonally-signed permutation matrices whose (v, w) entry equals 1 if v ≠ w and there is a c-colored link between the vertices v and w, ±1 if v = w and there is a c-colored N- or D-loop at vertex v, respectively, and 0 otherwise. Definition 1. Let G and G ′ be edge-colored or vertex-colored graphs given by k-tuples of n × n adjacency matrices (A c G ) k c=1 and (A c G ′ ) k c=1 , respectively. Then G and G ′ are said to be (1) transplantable if there exists an invertible transplantation matrix T ∈ R n×n such that A c G T = T A c G ′ for every c ∈ {1, 2, . . . , k}, and (2) cycle equivalent if tr(A c 1 G A c 2 G · · · A c l G ) = tr(A c 1 G ′ A c 2 G ′ · · · A c l G ′ ) for every finite sequence of colors c 1 , c 2 , . . . , c l ∈ {1, 2, . . . , k}. Note that transplantable graphs are cycle equivalent. The following characterization of transplantable tiled manifolds says that the converse holds for edge-colored loop-signed graphs. Theorem 2. [Her14] Let G and G ′ be edge-colored loop-signed graphs with the same numbers of vertices and colors. Let M and M ′ be tiled manifolds with mixed Dirichlet-Neumann boundary conditions obtained by choosing a building block. 
Then the following are equivalent: (1) M and M ′ are transplantable (and therefore isospectral), (2) G and G ′ are transplantable, (3) G and G ′ are cycle equivalent. The equivalence of (1) and (2) is shown using regularity and continuation theorems for elliptic operators, and the equivalence of (2) and (3) is shown using representation theory. In the following, we derive further characterizations of transplantability. As is well-known, Brownian motion on a manifold M has 1 2 ∆ M as its infinitesimal generator, rendering it a natural object of study for spectral questions. Consider a particle moving on the tiled manifold M in Figure 1(A). Each time the particle hits ∂ N M, it is reflected back, whereas contact with ∂ D M destroys the particle. Since the 4 triangular building blocks of M are isometric, we only keep track of the triangle sides visited, which corresponds to a walk on the colored vertices of the associated directed line graph L vc (G) shown in Figure 1(E). Each N-loop of G in Figure 1(C) contributes 3 directed edges of equal weight to L vc (G) in Figure 1(E), whereas D-loops of G do not contribute at all. The edge-colored directed line graph L ec (G) in Figure 1(G) is obtained by coloring edges instead of vertices. In Section 2, we define L vc (G) and L ec (G) rigorously, and prove the following main theorem. Theorem 3. Let G and G ′ be edge-colored loop-signed graphs with n vertices and k colors. If tr(A c G ) = tr(A c G ′ ) for every c ∈ {1, 2, . . . , k}, then the following are equivalent: (1) G and G ′ are transplantable, (2) G and G ′ are cycle equivalent, (3) L vc (G) and L vc (G ′ ) are transplantable, (4) L vc (G) and L vc (G ′ ) are cycle equivalent, (5) L ec (G) and L ec (G ′ ) are transplantable, (6) L ec (G) and L ec (G ′ ) are cycle equivalent. If any of the above conditions holds, then there exists an invertible transplantation matrix for both (3) and (5) that is the direct sum of square matrices of sizes (tr(I n + A c G )/2) k c=1 . Theorem 3 has the following representation-theoretic interpretation. For K = {1, 2, . . . , k}, we denote the free group on K by F (K), and the free semigroup on K by K + . The graphs G and G ′ give rise to a pair of representations of F (K) by virtue of c ±1 → A c G and c ±1 → A c G ′ , respectively. Similarly, L vc (G) and L vc (G ′ ), as well as L ec (G) and L ec (G ′ ), give rise to pairs of representations of K + . If the assumptions of Theorem 3 are satisfied, and the representations of one pair are equivalent or have equal characters, then both are true for all pairs. It is worth mentioning that [Her14] gives examples of non-isomorphic graphs G and G ′ as in Theorem 3 that have 4 vertices, no N-loops, and isomorphic line graphs L vc (G) and L vc (G ′ ). These pairs closely resemble the single exception of the classical Whitney graph isomorphism theorem [Whi32] which states that two uncolored connected graphs without loops or parallel edges are isomorphic if and only if their line graphs are isomorphic, with the exception of the triangle graph K 3 and the star graph S 3 = K 1,3 , which both have K 3 as their line graph. We want to point out the results in [MM03,OB12], which initiated our investigations. In [MM03], McDonald and Meyers consider the finitely many known pairs of transplantable planar domains with pure Dirichlet boundary conditions [BCDS94], and introduce their line graph construction, which, in our notation, corresponds to the assignment M → L ec (G). 
For each of the pairs in [BCDS94], they verify that the associated edge-colored line graphs are cospectral with respect to a certain discrete Laplace operator. In [OB12], Oren and Band note that these graphs are also cospectral with respect to their weighted adjacency matrices. However, the line graph construction was neither known to always produce cospectral graphs, nor could it deal with Neumann boundary conditions, and it had not been noticed that there exist canonical transplantations as in the second part of Theorem 3. Colored directed line graphs Let G be an edge-colored loop-signed graph with n vertices and adjacency matrices (A c G ) k c=1 . In particular, tr(I n + A c G )/2 equals the number of c-colored links and N-loops of G. We note that if G has no N-loops or parallel links, then L ec (G) is a simple edge-colored undirected graph, meaning it has symmetric {0, 1}-adjacency matrices with zero diagonal. Definition 5. Let B w ∈ {0, 1, w} n×n L be the weighted incidence matrix of G * given by For each c ∈ {1, 2, . . . , k}, let C c ∈ {0, 1} n L ×n L be the diagonal matrix given by Proof. We start by showing that (2) and (4) are equivalent, which amounts to showing that the traces of products of adjacency matrices of G determine those of L vc (G), and vice versa. Since tr(I n ) = n and (tr(A c G )) k c=1 are given by assumption, tr(I n L ) = k c=1 (tr(I n + A c G ))/2 can be assumed as given as well. For c ∈ {1, 2, . . . , k}, we have tr(A c L vc (G) ) = 0, (A c G ) 2 = I n , and (A c L vc (G) ) 2 = 0. It therefore suffices to consider products of adjacency matrices with cyclically square-free color sequences, i.e., sequences of the form c 1 , c 2 . . . , c l ∈ {1, 2, . . . , k} with c 1 = c l and c i = c i+1 for i ∈ {1, 2, . . . , l − 1}. As C c 1 C c 2 = C c 2 C c 1 = 0, Lemma 6 yields , which gives the desired statement by induction on l. In the following, let G and G ′ be transplantable edge-colored loop-signed graphs with n × n adjacency matrices (A c G ) k c=1 and (A c G ′ ) k c=1 , respectively. Let T ∈ R n×n be an invertible transplantation matrix satisfying A c G T = T A c G ′ for every c ∈ {1, 2, . . . , k}. In particular, which equals the number of c-colored links and N-loops of G or G ′ , respectively. Each of the graphs L vc (G) and L vc (G ′ ) has n L = n 1 L +n 2 L +. . .+n k L vertices, which we number accordingly, i.e., the respective first n 1 L vertices have color 1, followed by n 2 L vertices of color 2, and so on. Let e, e ′ ∈ {1, 2, . . . , n L }. We denote the color of edge e of G * by c, and its incident vertices by v and v, where v = v if it is an N-loop. Analogously, we let edge e ′ of (G ′ ) * have color c ′ and possibly identical incident vertices v ′ and v ′ . Then, are the only non-vanishing entries in their respective row and column. In particular, which allows to define a transplantation matrix for the line graphs associated with G and G ′ . Definition 8. The line graph transplantation matrix T L ∈ R n L ×n L coming from T is given by For later reference, we note that if e was a c-colored D- Proof. For each c ∈ {1, 2, . . . , k}, let E c = {n 1 L +n 2 L +. . .+n c−1 L +e c | e c = 1, 2, . . . , n c L }, which corresponds to the c-colored edges of G * as well as to the c-colored edges of (G ′ ) * , which in turn correspond to the c-colored vertices of L vc (G) and L vc (G ′ ), respectively. In order to show that the block diagonal matrix T L is invertible, it suffices to show that each of its k diagonal blocks has linearly independent rows. 
Let c ∈ {1, 2, . . . , k}, and assume that (a e ) e∈E c ∈ R n c L satisfies 0 = e∈E c a e [T L ] ee ′ for every e ′ ∈ E c . As before, we let edge e ′ ∈ E c of (G ′ ) * have possibly identical incident vertices v ′ and v ′ . We first consider the case v ′ = v ′ . Recall that every vertex v of G has exactly one incident c-colored edge, which we denote by e(v, c). Let whereas if e = e(v, c) = e( v, c) is a link between v and v, then Hence, where in the last equality we used that each vertex v is the unique c-neighbor of some The same arguments apply if v ′ = v ′ , except for the summands involving v ′ which disappear. Since the rows of T are linearly independent, we deduce that a e(v,c) = 0 for every v ∈ {1, 2, . . . , n}. In other words, (a e ) e∈E c = 0, which proves that T L is invertible. Next, we show that for every c ∈ {1, 2, . . . , k} and e, e ′ ∈ {1, 2, . . . , n L } If c = c or c = c = c ′ , then [A c L vc (G) ] e e = 0 for all e ∈ E c ′ , and [A c L vc (G ′ ) ] e ′ e ′ = 0 for all e ′ ∈ E c . It therefore suffices to consider the case c = c = c ′ . As before, we let edge e of G * and edge e ′ of (G ′ ) * have incident vertices {v, v} and {v ′ , v ′ }, respectively. Note that each of the sums above has at most 2 non-vanishing summands, which correspond to the c ′ -colored edges at v and v, as well as the c-colored edges at v ′ and v ′ , respectively. Regardless of whether these edges are links, N-loops, or D-loops, if v ′ = v ′ , then
2015-04-13T12:04:35.000Z
2015-04-09T00:00:00.000
{ "year": 2015, "sha1": "075e13978d809002317c8de51e6153d68fec7d14", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.laa.2016.05.021", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "075e13978d809002317c8de51e6153d68fec7d14", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
218581166
pes2o/s2orc
v3-fos-license
Towards Efficient Normalizers of Primitive Groups We present the ideas behind an algorithm to compute normalizers of primitive groups with non-regular socle in polynomial time. We highlight a concept we developed called permutation morphisms and present timings for a partial implementation of our algorithm. This article is a collection of results from the author's PhD thesis. Introduction One of the tools to study the internal structure of groups is the normalizer. For two groups G and H, which are contained in a common overgroup K, we call the normalizer of G in H, denoted N H (G), the subgroup of H consisting of those elements that leave G invariant under conjugation. We only consider finite sets, finite groups, and permutation groups acting on finite sets. We assume permutation groups to always be given by generating sets and say that a problem for permutation groups can be solved in polynomial time if there exists an algorithm which, given permutation groups of degree n, solves it in time bounded polynomially in n and in the sizes of the given generating sets. While many problems for permutation groups can be solved efficiently both in theory and in practice, no polynomial time algorithm to compute normalizers of permutation groups is known. A transitive permutation group G acting on a set Ω is called primitive if there exists no non-trivial G-invariant partition of Ω. Primitive groups have a rich and well-understood structure. Hence many algorithms use the natural recursion from general permutation groups to transitive and in turn to primitive ones. For two permutation groups G, H ≤ Sym Ω, computing the normalizer of G in H in general is done by searching for the normalizer of G in the symmetric group Sym Ω and simultaneously computing the intersection with H. We focus on computing the normalizer of a primitive group G ≤ Sym Ω in Sym Ω. Being able to compute normalizers for primitive groups efficiently may lead to improved algorithms for more general situations. Our results build substantially on the O'Nan-Scott classification of primitive groups, see [17], and on the classification of finite simple groups (CFSG). Recall that the socle of a group G, denoted Soc G, is the subgroup generated by all minimal normal subgroups of G. Our theoretical main result is the following theorem. Theorem 1. Normalizers of primitive groups with non-regular socle can be computed in polynomial time. As is often the case in computational group theory, ideas from theoretical algorithms can be employed in practical algorithms and vice versa. While the algorithms in [23] are primarily theoretical ones, we also provide probabilistic nearly-linear time versions where possible. The author is developing the GAP package NormalizersOfPrimitiveGroups, hosted at https://github.com/ssiccha/NormalizersOfPrimitiveGroups, with the aim to implement practical versions of the algorithms developed in [23]. Until now, algorithms concerning permutation morphisms and primitive groups of type PA are implemented. First experiments indicate that already for moderate degrees these outperform the GAP built-in algorithm Normalizer by several orders of magnitude, see Table 1. Since no polynomial time solutions are known for the normalizer problem, the generic practical algorithms resort to backtracking over the involved groups in one way or another. The fundamental framework of modern backtrack algorithms for permutation groups is Leon's partition backtrack algorithm [16], which generalizes previous backtrack approaches [5,6,12,24] and generalizes ideas of nauty [19] to the permutation group setting. 
Partition backtrack is implemented in GAP [9] and Magma [4]. Recently, the partition backtrack approach was generalized to a "graph backtrack" framework [14]. Theißen developed a normalizer algorithm which uses orbital graphs to prune the backtrack search [25]. Chang is currently developing specialized algorithms for highly intransitive permutation groups; her PhD thesis should appear shortly. It is to be expected that the work in [14] can also be extended to normalizer problems. Hulpke also implemented normalizer algorithms in [13] using group automorphisms and the GAP function NormalizerViaRadical based on [10]. In Sect. 2 we outline the strategy behind our algorithms. In Sect. 3 we recall the O'Nan-Scott Theorem. We present our new concept of permutation morphisms in Sect. 4. In Sect. 5 we sketch how we use our results to obtain Theorem 1. In Sect. 6 we discuss our implementation. Strategy We describe the strategy of the theoretical algorithm behind Theorem 1. Comments regarding the implementation of its building blocks are given at the end of each following section. In this section let G ≤ Sym Ω be a primitive group with non-regular socle H. The normalizer of H in Sym Ω plays a central role in our algorithm; in this section we denote it by M. Observe that to compute N Sym Ω (G) it suffices to compute N M (G) since the former is contained in M. The socle H is isomorphic to T ℓ for some finite non-abelian simple group T and some positive integer ℓ. The group G is isomorphic to a subgroup of the wreath product Aut(T) ≀ S ℓ , see Sect. 3 for a definition of wreath products. By the O'Nan-Scott Theorem the respective isomorphism extends to an embedding of the normalizer M into Aut(T) ≀ S ℓ . Furthermore ℓ is of the order O(log|Ω|). Hence the index of G in M, and thus also the search-space of the normalizer computation N M (G), is tiny in comparison to the index of G in Sym Ω. Our approach can be divided into two phases. First we compute M; this is by far the most labor-intensive part. To this end we compute a sufficiently well-behaved conjugate of G, such that we can exhibit the wreath structure mentioned above. In [23] we make this more precise and define a weak canonical form for primitive groups. Using that conjugate and the O'Nan-Scott Theorem we can write down generators for M. In the second phase, we compute a reduction homomorphism ρ : M → S k with k ≤ 6 log|Ω|. After this logarithmic reduction, we use Daniel Wiebking's simply exponential time algorithm [26,27], which is based on the canonization framework [22], to compute N S k (ρ(G)). Note that the running time of a simply exponential time algorithm called on a problem of size log n is 2 O(log n) and thus is bounded by 2 c log n = n c for some constant c > 0. Then we use Babai's famous quasipolynomial time algorithm for graph isomorphism [1,2]. Notice that since we perform these algorithms on at most 6 log n points they run in time polynomial in n. The homomorphism ρ is constructed in such a way, that computing the preimage of the above normalizer In our implementation we do not use the algorithms by Wiebking and Babai since these are purely theoretical. Instead we use the partition backtrack implemented in GAP. The O'Nan-Scott Theorem The goal of this and the next section is to illustrate how we use the O'Nan-Scott Theorem to prove the following theorem. In this article we limit ourselves to groups of type PA, which we define shortly. Proof. For groups of type PA this will follow from Corollary 5 and Lemma 6. 
The O'Nan-Scott Theorem classifies how the socles of primitive groups can act, classifies the normalizers of the socles, and determines criteria to decide which subgroups of these normalizers act primitively. We follow the division of primitive groups into eight O'Nan-Scott types as it was suggested by László G. Kovács and first defined by Cheryl Praeger in [21]. In this section we define the types AS and PA and recall some of their basic properties. In particular we describe the normalizer of the socle for groups of type PA and how to construct the normalizer of the socle, if the group is given in a sufficiently well-behaved form. The version of the O'Nan-Scott Theorem we use, for a proof see [17], is: Let G be a primitive group on a set Ω. Then G is a group of type HA, AS, PA, HS, HC, SD, CD, or TW. The abbreviation AS stands for Almost Simple. A group is called almost simple if it contains a non-abelian simple group and can be embedded into the automorphism group of said simple group. A primitive group G is of AS type if its socle is a non-regular non-abelian simple group. The abbreviation PA stands for Product Action. The groups of AS type form the building blocks for the groups of PA type. To define this type, we briefly recall the notion of wreath products and their product action. The wreath product of two permutation groups H ≤ Sym Δ and K ≤ S_d is denoted by H ≀ K and defined as the semidirect product H^d ⋊ K, where K acts by conjugation on H^d by permuting its components. We identify H^d and K with the corresponding subgroups of H ≀ K and call them the base group and the top group, respectively. For two permutation groups H ≤ Sym Δ and K ≤ S_d the product action of the wreath product H ≀ K on the set of tuples Δ^d is given by letting the base group act component-wise on Δ^d and letting the top group act by permuting the components of Δ^d. In the practical implementation we use the GAP built-in algorithm to compute the normalizer of T in Sym Δ. Our long-term goal is to use the constructive recognition provided by the recog package [20]. Computing the normalizer of T in Sym Δ is then only a matter of iterating through representatives for the outer automorphisms of T. Permutation Morphisms In general a group of PA type might be given on an arbitrary set and need only be permutation isomorphic to a group in product action. In this section we discuss how to construct such a permutation isomorphism: Lemma 6. Let G ≤ Sym Ω be a primitive group of type PA. Then we can compute a non-abelian simple group T ≤ Sym Δ, a positive integer ℓ, and a permutation isomorphism from G to a permutation group G′ ≤ Sym(Δ^ℓ) such that the socle of G′ is T^ℓ in component-wise action on Δ^ℓ. To this end we present the notion of permutation morphisms developed in [23]. They arise from permutation isomorphisms by simply dropping the condition that the domain map and the group homomorphism be bijections. We illustrate how to use them to prove Lemma 6. Basic Definitions For two maps f : A → B and g : C → D we denote by f × g the product map A × C → B × D, (a, c) ↦ (f(a), g(c)). For a right-action ρ : Ω × G → Ω of a group G and g ∈ G, ω ∈ Ω we also denote ρ(ω, g) by ω^g. Definition 7. Let G and H be permutation groups on sets Ω and Δ, respectively, let f : Ω → Δ be a map, and let ϕ : G → H be a group homomorphism. Furthermore let ρ and τ be the natural actions of G and H on Ω and Δ, respectively.
We call the pair (f, ϕ) a permutation morphism from G to H if the corresponding diagram commutes, that is, if f(ω^g) = f(ω)^{ϕ(g)} for all ω ∈ Ω and g ∈ G. We call ϕ the group homomorphism of (f, ϕ) and f the domain map of (f, ϕ). It is immediate from the definition that the component-wise composition of two permutation morphisms again yields a permutation morphism. In particular we define the category of permutation groups as the category with all permutation groups as objects, all permutation morphisms as morphisms, and the component-wise composition as the composition of permutation morphisms. We rely on this categorical perspective in many of our proofs. We denote a permutation morphism F from a permutation group G to a permutation group H by F : G → H. When encountering this notation keep in mind that F itself is not a map but a pair of a domain map and a group homomorphism. We use capital letters for permutation morphisms. It turns out that a permutation morphism F is a mono-, epi-, or isomorphism in the categorical sense if and only if both its domain map and group homomorphism are injective, surjective, or bijective, respectively. For a permutation group G ≤ Sym Ω we call a map f : Ω → Δ compatible with G if there exists a group homomorphism ϕ such that F = (f, ϕ) is a permutation morphism. We say that a partition Σ of Ω is G-invariant if for all A ∈ Σ and g ∈ G we have A^g ∈ Σ. Lemma 8 ([23, Lemma 4.2.10]). Let G ≤ Sym Ω be a permutation group and f : Ω → Δ a map. Then f is compatible with G if and only if the partition of Ω into the non-empty fibers of f is G-invariant. If G is transitive, then the G-invariant partitions of Ω are precisely the block systems of G. Hence for a given block system we can define a compatible map f by sending each point to the block it is contained in. Let G ≤ Sym Ω be a permutation group and f : Ω → Δ a surjective map compatible with G. Then there exist a unique group H ≤ Sym Δ and a unique group homomorphism ϕ : G → H such that F := (f, ϕ) is a permutation epimorphism, see [23, Corollary 4.2.7]. We call F the permutation epimorphism and ϕ the group epimorphism of G induced by f. In the following example, given in the original via a figure, a group V generated by two permutations a and b acts on a rectangular grid of points Ω, and p_1 maps Ω onto a set Ω_1. Observe that a acts on Ω by permuting the points horizontally, while b acts on Ω by permuting the points vertically. The map p_1 projects Ω vertically, or "to the top". Notice how the fibers of p_1 correspond to a block system of V. We determine the group epimorphism π_1 of V induced by p_1. By definition π_1(a) is the permutation which makes the corresponding square commute; this yields π_1(a) = (1, 2). Correspondingly we get π_1(b) = id_{Ω_1}. Products of Permutation Morphisms For two permutation groups H ≤ Sym Δ and K ≤ Sym Γ we define the product of the permutation groups H and K as the permutation group given by H × K in component-wise action on Δ × Γ. Correspondingly, for an additional permutation group G ≤ Sym Ω and two permutation morphisms (f, ϕ) and (g, ψ) from G to H and K, respectively, we define the product permutation morphism G → H × K as (f × g, ϕ × ψ). Iteratively, we define the product of several permutation groups or permutation morphisms. To prove Lemma 6 it suffices to be able to compute the following: given the socle H ≤ Sym Ω of a PA type group, compute a non-abelian simple group T ≤ Sym Δ and permutation epimorphisms (think projections) P_1, . . . , P_ℓ : H → T such that the product morphism P : H → T^ℓ is an isomorphism. Since every surjective map compatible with H induces a unique permutation epimorphism, it in turn suffices to compute suitable maps p_i : Ω → Δ. We illustrate how to construct one of the needed projections for PA type groups.
Note how mapping each x ∈ Δ^2 to the block of Σ it is contained in is equivalent to mapping each x to x_2. Thus we have essentially constructed the map p_2 : Δ^2 → Δ, x ↦ x_2. Observe that we only used the group-theoretic property that H_1 is a maximal normal subgroup of H and thus in particular did not use the actual product structure of Δ^2. Analogously we can construct the map p_1 : Δ^2 → Δ, x ↦ x_1. For i = 1, 2 let P_i : H → A_5 be the permutation epimorphisms of H induced by p_1 and p_2, respectively. Since p_1 × p_2 is an isomorphism, P_1 × P_2 must be a monomorphism. By order arguments P_1 × P_2 is thus an isomorphism. In general the above construction does not yield permutation epimorphisms with identical images. We can alleviate this by computing elements of the given group which conjugate the minimal normal subgroups of its socle to each other. Reduction Homomorphism Recall from Sect. 2 that a key ingredient of our second phase is a group homomorphism which reduces the original problem on n points to a problem on at most 6 log n points. We briefly illustrate how to construct this homomorphism; for the details refer to [23, Theorem 9.1.6]. Let G ≤ Sym Ω be a primitive group with non-regular socle and T ≤ Sym Δ be a socle-component of G, confer [23, Chapters 5 and 7] for a definition. Then T is a non-abelian simple group, there exists a positive integer ℓ such that Soc G is isomorphic to T^ℓ, and by [23, Lemma 2.6.1] we have |Ω| = |Δ|^s for some s ∈ {ℓ/2, . . . , ℓ}. Denote by R the permutation group induced by the right-regular action of Out T on itself. We show that we can evaluate the following two group homomorphisms: first an embedding N_{Sym Ω}(Soc G) → Aut T ≀ S_ℓ and second an epimorphism Aut T ≀ S_ℓ → R ≀ S_ℓ, where R ≀ S_ℓ is the imprimitive wreath product and thus acts on |R| · ℓ points. We sketch the proof that |R| · ℓ ≤ 6 log n. Let m := |Δ| and r := |R|. Note that for ℓ we have ℓ ≤ 2s = 2 log_m n. Since R is regular, we have r = |Out T|. Since T is a socle-component of G, we have |Out T| ≤ 3 log m by [11, Lemma 7.7]. In total we have r · ℓ ≤ 3 log m · 2 log_m n = 6 log n. In our implementation we use a modified version of this reduction. For groups of type PA we can directly compute an isomorphism from the product-action wreath product into the corresponding imprimitive wreath product. Implementation A version of our normalizer algorithm for groups of type PA is implemented in the GAP package NormalizersOfPrimitiveGroups. Table 1 shows a comparison of runtimes of our algorithm and the GAP function Normalizer. At the time of writing, there are two big bottlenecks in the implementation. First, the GAP built-in algorithm to compute the socle of a group is unnecessarily slow. State-of-the-art algorithms as in [7] are not yet implemented. Secondly, computing a permutation which transforms a given product decomposition into a so-called natural product decomposition is currently also slow. The latter may be alleviated by implementing the corresponding routines in, for example, C [15] or Julia [3]. Note that the actual normalizer computation inside the normalizer of the socle appears to be no bottleneck: in the example with socle type (A_5)^7 it took only 40 ms!
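The notion of a map compatible with a group, and the group epimorphism it induces, can be prototyped in a few lines. The toy below mimics the projection example of Sect. 4 on a 2×2 grid (the article's actual example is given via a figure, so the labelling here is a guess): a swaps the two columns, b swaps the two rows, and p1 sends each point to its column. Well-definedness of the induced image is exactly the fiber condition of Lemma 8.

```python
def induced_image(g, f, domain):
    """Image of g under the group epimorphism induced by f, as a dict on f(domain).
    Returns None if f is not compatible with g, i.e. if two points in the same
    fiber of f are sent to different fibers (the condition of Lemma 8 fails)."""
    image = {}
    for x in domain:
        src, dst = f[x], f[g[x]]
        if image.setdefault(src, dst) != dst:
            return None
    return image

# 2x2 grid, points numbered   0 1
#                             2 3
a = {0: 1, 1: 0, 2: 3, 3: 2}           # swap the two columns ("horizontal" generator)
b = {0: 2, 1: 3, 2: 0, 3: 1}           # swap the two rows    ("vertical" generator)
p1 = {0: 0, 1: 1, 2: 0, 3: 1}          # project each point to its column; fibers {0,2}, {1,3}

print(induced_image(a, p1, range(4)))  # {0: 1, 1: 0} -- a transposition, like pi_1(a)
print(induced_image(b, p1, range(4)))  # {0: 0, 1: 1} -- the identity, like pi_1(b)
```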
The relationship between parenting engagement and academic performance Gender differences in research productivity have been well documented. One frequent explanation of these differences is disproportionate child-related responsibilities for women. However, changing social dynamics around parenting has led to fathers taking an increasingly active role in parenting. This demands a more nuanced approach to understanding the relationship between parenting and productivity for both men and women. To gain insight into associations between parent roles, partner type, research productivity, and research impact, we conducted a global survey that targeted 1.5 million active scientists; we received viable responses from 10,445 parents (< 1% response rate), thus providing a basis for exploratory analyses that shed light on associations between parenting models and research outcomes, across men and women. Results suggest that the gendered effect observed in production may be related by differential engagement in parenting: men who serve in lead roles suffer similar penalties for parenting engagement, but women are more likely to serve in lead roles and to be more engaged across time and tasks, therefore suffering a higher penalty. Taking a period of parental leave is associated with higher levels of productivity; however, the productivity advantage dissipates after six months for the US-sample, and at 12-months for the non-US sample. These results suggest that parental engagement is a more powerful variable to explain gender differences in academic productivity than the mere existence of children, and that policies should factor these labor differentials into account. The COVID-19 pandemic placed the productivity cost of parenting into sharp relief. Studies provide evidence of a decrease in women first-authorship on preprints 1,2 , papers 1 , and funding applications 3 , and lower participation in academic citizenship activities 4 than men. Consistent with previous research on gender disparities in science 1,2 , reasons for this inequity reference women's engagement in household responsibilities 4,5 and an increase of caregiving activities during COVID-19 related lockdowns, both for academics 6 and non-academics 7,8 . Similar effects have been noted within journals of many disciplines including science studies [9][10][11] ; gender studies 12 ; economics 13,14 ; sociology 15 and higher education studies 16,17 . Regardless of the field, the prevailing argument is simple: childcare and homeschooling for extended periods of time were overwhelmingly fulfilled by women 18,19 , thereby decreasing their engagement in professional responsibilities. Scientific inequities, however, predate the pandemic. Decades of work on gender in science have shown that, on average, women publish fewer articles [20][21][22] and receive fewer citations for their work 23 , even when publishing in journals with higher Impact Factors 3,24 . Such disparities in scientific productivity and performance are considered an artifact of wider structural inequalities within the science system. Domestic responsibilities associated with parenting have been heralded as an explanation for these differences 25 ; yet, research on the relationship between parenting and scientific productivity have produced mixed results. Studies have found that women with children face a productivity penalty compared to men with children and women without children [26][27][28] . 
Other studies have suggested no association between production and family obligations 29 or an increase in productivity immediately after birth 5,26 . The latter may be an artifact of publication lags and the pressures of an academic environment: demonstrating that women accelerate their productivity directly before birth, which manifests after. Scientific Reports | (2022) 12:22300 | https://doi.org/10.1038/s41598-022-26258-z www.nature.com/scientificreports/ Many of the explanations belie the intricacies of modern parenting 5 and previous studies of gender and scientific productivity fail to fully capture this complexity. Women have entered the workforce at higher rates, changing the norm to a dual-labor household 30,31 . In addition, the concept of fatherhood has also changed with fathers taking an increasingly central role in child-rearing 32 , particularly during the pandemic 33,34 . Studies also suggest attrition from STEM after the birth of a child for fathers 35 . These new household dynamics have increased the use of several adaptation behaviors, including labor shifts between parents 31 and the utilization of extended family members for primary childcare 30,31,36 . In addition, to characterize the balance of labor, it is essential to examine not only the workweek, but also the weekends 37 , particularly for researchers, who tend to have nonstandard work schedule. Furthermore, studies of gender disparities in parenting often focus solely on mothers 17 without comparative data on fathers. More nuanced research designs are necessary to understand how parental engagement influences research productivity and visibility beyond the current binary categorization of parenthood (that is, one that consider the presence of children rather than a spectrum of engagement with children). To understand the complex relationship between parenting and academic work, we employ a mixed methods approach utilizing: (1) statistical modelling of researcher productivity and visibility; and (2) an inductive qualitative coding technique to analyze free-text comments from survey respondents. Specifically, we consider how different models of parenting engagement and household arrangements are related to academic productivity and impact for men and women. We address the following research questions: 1. Is there a gendered difference in parenting engagement for researchers? 2. Is there a relationship between parental engagement and research productivity? If so, does this differ by gender and household composition? 3. Is there a relationship between degrees of parental leave and productivity by gender? If so, is this mediated by household composition? 4. Is there a gender difference between parenting engagement and scientific impact? If so, is this mediate by household composition? A global database does not exist from which one could sample all active scientists with children. Therefore, we fielded a survey of 1.5 million publishing scientists in order to generate a sample and match that sample to bibliometric data. The final set of respondents (n = 11,226) represents one of the largest samples of publishing parents; however, the results-particularly the inferential statistical tests-should be interpreted in light of any potential response bias (given that the sample represents < 1% of the initial sampling frame). Results Parenting labor by gender. Among our respondents, women are more likely than men to serve as the primary caregiver for their children (30.6 vs. 3.9%) (Fig. 1A). 
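Comparisons of this kind in the Results below are two-sample tests of proportions, typically reported as chi-square statistics. A minimal version of such a test is sketched here; the group sizes are invented for the illustration (they are not reported at this point in the paper), so the statistic will not reproduce the published values.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: 79.3% of 300 lead fathers vs. 66.4% of 2,000 lead mothers
# reporting a given caregiving task as "mostly me" / "almost always me".
fathers_yes, fathers_n = 238, 300
mothers_yes, mothers_n = 1328, 2000

table = [[fathers_yes, fathers_n - fathers_yes],
         [mothers_yes, mothers_n - mothers_yes]]
chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```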
Inversely, more than a third of men (38.9%) indicated that they served in a secondary role in parenting (i.e., satellite), compared to only 17.4% of women. This establishes an important baseline for the study of scientific parenting: that is, women scholars are disproportionately likely to be taking a lead role in caregiving. The most common model, however, is one of shared parenting: the majority of men (57.1%) and women (52.0%) indicated that parenting roles were shared equally with their partner (i.e., dual). These self-reported roles were investigated to understand how they manifest themselves in terms of both time and task engagement-that is, the times during which parents were engaging in parenting and the types of tasks with which they were disproportionately associated. Lead parents of both genders indicated high percentages of engagement across time compared to other roles, suggesting that the 3.9% of men serving in this role are strongly engaged in parenting. However, men in dual and satellite parenting roles were much less likely than women in the same roles to report primary caregiving across times. This demonstrates that there is a higher burden of labor for women to classify themselves as dual and satellite parents than men (Fig. 1B). Men reinforced these disproportionate labor expectations, even in shared parenting relationships: "Although I try to be active in child care and share responsibilities equally, my wife still takes care of more child care tasks than I do". (M, Dual, US). The time results were confirmed by an analysis of tasks. In nearly every category-particularly for dual and satellite parents-women were more likely to be the parent engaged in the caregiving tasks (Fig. 1C, Table S1). There were fewer differences in how men and women operationalized lead parenting, with lead parents of both genders significantly engaged in childcare tasks. The only task where lead fathers demonstrated significant differences were in dropping off children at school/nursery (79.3 vs. 66.4%, χ 2 =12.12, p = 0.001) and coaching sports (40.8 vs. 17.7%, χ 2 =53.66, p < 0.001). Much stronger gender differences were observed in dual parenting. Men in dual roles were more likely to drop off children at school/nursery (40.0 vs. 29.3%, χ 2 =70.98, p < 0.001), coach sports (30.5 vs. 5.6%, χ 2 =625.04, p < 0.001), and do school/nursey pick-up (28.7 vs. 27.0%, χ 2 =2, p = 0.168), though the latter was not significant. Women in dual roles were significantly more likely to be primarily responsible for all other caregiving duties. The same was true for satellite roles. The only task with which men were significantly more likely to be associated was coaching sports (25.8 vs. 5.1%, χ 2 =188.44, p < 0.001). The time and task analyses reinforce each other: when they self-identified as dual or satellite parents, women are disproportionately engaged in parenting activities. Furthermore, there is little difference between women's labor performance in dual and satellite roles ( Figure S1). These asymmetries between labor and credit show that, even in the perception of equality between parents, women carry a higher burden of labor. Qualitative responses were illuminating in this regard. A woman from Tunisia noted that the survey made her aware that she was the main caregiver. Other respondents supported this, but questioned the exhaustivity of the task list: Respondents reporting being a primary caregiver for the parenting-related activities, by gender and parenting type. 
Respondents are considered "primary caregiver" if they reported "Mostly me" or "Almost always me" in taking care of these activities. The asterisks denote the FDR-adjusted p-values from the two-sample tests of proportion between men and women: + p < 0.1, * p < 0.05, ** p < 0.01, *** p < 0.001. The cognitive and emotional burdens of domestic labor disproportionately born by women have been wellrecognized in previous studies 38 and were manifest in the exogenous shock of the pandemic 2,4,6 . Therefore, the inequities observed in the itemization of task and time may only represent a conservative estimate of the actual difference in parenting engagement. However, our work demonstrates that self-report data of shared parenting discounts women's engagement in parenting and overestimates men's. Work arrangements and adaptation in parenting. One limitation of the survey is that it captured individual nodes in dyadic relationships, rather than paired couples. One might expect, for example, different labor roles based on the occupation of the partner. To control for this, we identified the sector of employment of the partner, with a particular focus on situations where both the respondent and their partner were employed in academia. Academic couples arguably experience the same productivity pressures and job responsibilities as each other, creating a natural control for labor expectations. Overall, academic women as dual parents are strongly affected by having an academic partner: in terms of the task analysis, women with non-academic partners are primarily responsible for a larger number of tasks than their counterparts with academic partnersespecially regarding transportation and evening care (Fig. 2, Table S2). In contrast, academic men are much less affected by their academic partnership status except for dual parents--those with an academic partner are more Respondents with an academic employment (n = 8,046) reporting being a primary caregiver for the parenting-related activities by parenting type, gender, and partner employment status (Academic vs. Nonacademic). Respondents are considered "primary caregiver" if they reported "Mostly me" or "Almost always me" in taking care of these activities. The asterisks denote the FDR-adjusted p-values from the two-sample tests of proportion between those having an academic partner and their counterparts having a non-academic partner: + p < 0.1, * p < 0.05, ** p < 0.01, *** p < 0.001. For academic men as either lead or satellite parents, we found no statistically significant differences between those with and without an academic partner in parenting engagement (Fig. 2, Table S3). The benefits of a partner who understands the labor burdens of academe was evident in the qualitative responses. For example, parents commented on the perceived flexibility of research careers and these advantages were enhanced when both parents were academics:"…My wife is also an academic which actually helped in sharing duties in a much more understanding way". (M, Dual, Singapore). Scientific Reports However, whereas an academic career was seen as more flexible and therefore amenable to parenting, there was an assumption that flexibility also implied availability. 
This was most evident in respondents who were the sole academic in their household and the delicate balance between flexibility and availability was experienced by both women ("Inevitably, we both feel that if a sacrifice must be made, it is my schedule" (W, Dual, US)) and men; "It is hard to balance academic work and home life -as in many cases your partner does not understand that reading and working on your computer is your job. Thus, you find that you have various tasks (family, children, house, errands) thrown to you by your spouse who works a "regular" job because you are "not busy". Women, however, are particularly affected by partner occupation, with significant differences in equity between those with and without academic partners. This suggests that spousal hiring programs may have stronger implications for women faculty in caregiving roles as the equitability of parenting tasks is higher with academic partners. Effect of parental engagement on research productivity. Although parenting engagement and partner types account for only a small fraction of variations in productivity (after controlling for academic age, number of children, and discipline), certain patterns are revealing. As illustrated in Fig. 3, both men and women suffer a productivity loss when they are single or lead parents. Using dual mothers with a non-academic partner as our reference group, dual fathers (β = 0.05, p = 0.029) and satellite mothers (β = 0.09, p = 0.004) are 5.6% and 8.9% more productive, respectively; single mothers are 15.3% less productive (β = − 0.17, p = 0.048) ( Table S4). The differential effects of parenting engagement involving partnership status for men and women is confirmed in the two-academics subsample, where lead parents are, on average, 11.1% less productive than dual parents (β = − 0.12, p = 0.012), and the magnitude is roughly the same as the additional effects of being a lead father and being a lead mother with an academic partner in the full sample. This suggests that parenting penalties are felt by both men and women. As one respondent noted: Despite the positive but not significant effect on productivity of having an academic partner for dual mothers (β = 0.05, p = 0.082), we saw an additional 10% decrease in productivity (β = − 0.11, p = 0.038) for lead mothers with an academic partner. This is tied with the reference point of dual mothers, where we see that dual mothers operate at similar levels of engagement as lead mothers. Perhaps counterintuitively, lead-mothers are as productive as dual mothers when they both have a non-academic partner (β = 0.00, p = 0.862). This is in sharp contrast to lead fathers who suffer an additional 11.6% decrease in productivity (β = − 0.12, p = 0.11; not significant). This is likely a result of the unequal engagement women demonstrate in these parenting roles. Men seem to be most productive when they are in satellite roles with academic partners. Women, on the other hand, are most productive when they are in a satellite role with a non-academic partner. This may be a result of the perceived flexibility of academic roles intersecting with cultural expectations in parenting. As one woman observed: "You have to be prepared to work twice as hard and accept that. "(W, Dual, UK). Effects of parental leaves on production. Parental leaves are associated with higher production, but have a point of diminishing return that varies by country (Fig. 4, Table S5). 
The production advantage is highest at less than one month of leave for the US sample (estimated increase of 26.9%; β = 0.24, p = 0.002) and decreases for every three months of leave thereafter (26.7% increase for longer than one but less than three months [β = 0.24, p < 0.001], and 17.8% for three to six months [β = 0.16, p < 0.001] among US women). The advantage disappears after six months within the US sample and after twelve months in the non-US sample. These cultural differences may be explained by the normative leave lengths and productivity expectations by country. One woman from the US explained how the casual terminology reinforced expectations of production during the limited leave given to US mothers: "The six weeks after giving birth should be termed medical leave for the person who delivered the baby (regardless of if they are parenting the child). That should be treated as such. There are still people who will call it a 'sabbatical'." (W, Dual, US) While the effect is relatively modest in the non-US sample and non-significant for leaves of less than one month (β = − 0.07, p = 0.609), the corresponding increase in number of papers is estimated to be 17.1%, 10.5%, and 10.6% for leaves longer than one but less than three months (β = 0.18, p < 0.003), three to six months (β = 0.10, p = 0.009), and longer than six but less than twelve months (β = 0.10, p = 0.008). None of the interaction terms between gender and parental leave length are statistically significant, suggesting that the effect of parental leave length on production does not differ by gender. However, women are more likely to take leave and to take longer leave, leading to a disproportionate production disadvantage. Respondents noted that family-friendly policies did little to mediate the effects of labor demands on scientific production, including the necessity of managing ongoing growth of measures of academic impact over the lifetime of the career: "Family friendly policies are all very well but basically just allow you to take time off work; they don't reduce the amount of work that there is to do or remove deadlines". (W, Lead, UK). Effect of parenting engagement on scientific impact. Two indicators of scientific impact are used to examine the relationship between parenting and academic capital: number of total citations (TCS) and mean normalized citation scores (MNCS). While the first is an absolute indicator largely dependent on researchers' number of papers, the second indicates the average impact of their papers compared to other papers published in the same specialty within the same year. Results from the TCS and MNCS models are similar in magnitude and significance, although some notable differences exist (Fig. 5, Table S4). The same differential effects of having an academic partner and being a lead parent for men and women on productivity are also present for impact. Having an academic partner for a dual mother increases impact by 11% (MNCS) (β = 0.10, p = 0.001) and 15.3% (TCS) (β = 0.14, p = 0.004); whereas having an academic partner for a lead mother brings an additional decrease to impact of roughly 17%. The effect of being a lead parent for women is moderated by partner's employment type in both production and impact. The positive effect of being a satellite mother is only significant in the MNCS model (β = 0.08, p = 0.019).
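Because the outcome in these models is log-transformed, a coefficient β corresponds to a multiplicative effect of exp(β), i.e. a percentage change of exp(β) − 1. The snippet below recovers several of the percentages quoted above from their (rounded) coefficients; small discrepancies are due to the two-decimal rounding of β in the text.

```python
import math

reported = [
    ("US, < 1 month of leave",        0.24, 26.9),
    ("US, 3-6 months of leave",       0.16, 17.8),
    ("dual father, full sample",      0.05,  5.6),
    ("satellite mother, full sample", 0.09,  8.9),
]
for label, beta, quoted_pct in reported:
    pct = (math.exp(beta) - 1) * 100      # percentage change implied by a log-linear model
    print(f"{label:32s} beta = {beta:+.2f} -> {pct:5.1f}%  (reported: {quoted_pct}%)")
```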
The notable discrepancies between the two models relate to the moderating effect of gender on different aspects of the relationship between partnership status, parenting type, and impact. A dual father with an academic partner decreases MNCS by 9.6% (β = − 0.10, p = 0.031), whereas a lead father faces an additional 29.3% drop in TCS (β = − 0.35, p = 0.025). The former can be translated into the non-effect of academic partners for men and the latter the differential effects of being a lead parent for men and women. More specifically, being a lead parent has a negative effect on impact (TCS) for men regardless of their partner's occupation, but for women only if their partner is an academic. This again is confirmed in both MNCS and TCS models fitted with the two-academics sample, where the lead parent effect for women is close to the interaction effect between lead parent and academic partner in the models with the full sample. Scientific impact is a function of visibility: work is more likely to be cited when authors are visible in the scientific community through collaboration, travel, and other forms of engagement. Therefore, it stands to reason that parenting demands that reduce visibility will translate into lower citation rates. Respondents often discussed how institutional policies were inadequate in compensating for demands of research careers (e.g., the necessity to travel, overnight stay and long, after-hours work). However, partners who are flexible and supportive were essential for engagement in the scientific community: "Flexible work hours are a blessing, but the travelling required for a successful career (conferences, networking, field work) is a nightmare. I am lucky to have a supportive partner, without whom I would not have been able to pull it off ". (W, Dual, Sweden) Discussion and conclusion Our analysis offers a novel lens by examining the cost of parenting engagement, as opposed to previous research that focuses on the binary existence of children as a reason for productivity disparities 26,27,39,40 . This work, therefore, provide insights on some of the unexplained productivity differences observed in earlier research which focused merely on the existence of but not engagement with children 35 . Parenting engagement is related with decreased research productivity and impact; however, the composition and management of the household plays an important role in mediating this effect. Results from our respondents show that the parenting penalty for men and women is amplified by their level of engagement in parenting activities. Differential participation in parenting may largely explain observed gender effects. Men who serve in lead roles suffer similar penalties, but women are more likely to serve in lead parenting roles and to be more engaged across time and tasks. In addition, despite respondents indicating that they engage in a dual parenting style, women still engaged in a significantly higher level of daily parenting tasks than men, which may explain the divergence in penalties between men and women in these roles. Simply put, women bear a higher burden of "reproductive labor" 32 . Fathers suffer productivity penalties as their engagement increases; however, these penalties are felt more by women given that they are more likely to serve in lead roles and are more engaged in parenting, even when they report dual or satellite parenting styles. 
Results suggest a zero-sum game between research productivity and parenting: the more engagement with parenting, the lower the productivity. Our work, therefore, provides evidence of one side of the bi-directional work-family tension 33 . Work and family have been described as "greedy institutions", which make loyalty demands on individuals 33 . The characterization of science as a vocation further amplifies this tension, as individuals are "called" and develop their identity or "personality" through their craft. Furthermore, the incentive structure in academe prioritizes those who produce at extremely high levels, with seemingly infinite capacity for increase. As Frank Fox and colleagues observed: "When these standards for striving and excelling operate, or are idealized, work claims precedence among scientists, setting the stage for conflict with family." Their analysis found that women demonstrate higher levels of conflict in both directions, family on work and work on family. Our results confirm the former. Policies should account for this by creating greater permeability between work and family life to allow parents to move more seamlessly between these professional and parenting responsibilities. It is the responsibility of the workplace, academic or otherwise, to make reasonable adjustments to the demands made by the work environment in order to adapt to the changing needs of its employees. For example, academic institutions should provide lactation rooms and on-campus childcare to minimize the burden of transportation and shifting workspaces to meet parenting demands. Funding agencies should also consider how to support scholar parents. For example, the National Institutes of Health in the United States provides funding to offset childcare for funded doctoral students and postdoctoral fellows 41 . The Christiane Nüsslein-Volhard Foundation in Germany provides funding for household chores and childcare 42 , stating that the "time thus freed allows [women scholars] to continue working at a high standard, despite the double burden." The National Institute of Allergy and Infectious Diseases provides funding for primary caregivers to hire technicians, to support their work when childcare demands are high. Professional conferences should also consider parenting needs by selecting safe locations, constructing manageable meeting hours, and providing resources for childcare. This disproportionate engagement in parenting is exacerbated by the sector of employment of co-parents. There is greater equity in the distribution of parenting tasks when academic women are in dual partnerships with another academic, arguably because the labor expectations for both are shared. As Frank Fox and colleagues noted: "a match in spousal occupation creates potential synchrony, or shared understanding, of both work and family demands 33 ". Women in dual partnerships with non-academics do far more labor than their counterparts with academic partners. These inequities manifest in differences in productivity. Men and women scholars have lower productivity when they are single parents or in lead parenting roles. Sharing parenting roles increases productivity for both, but women's productivity is mediated by the occupation of their spouse: those with academic partners see a greater productivity gain than those without.
Of course, non-academic roles are much more heterogeneous than academic roles, limiting our ability to interpret the cause of these differences (which operate in opposite ways for lead and satellite roles). However, the qualitative analysis of open-ended responses shed light on these tensions: academic work is seen as more flexible, therefore causing increased parenting burdens when academics are in partnerships with non-academics. These results are important for understanding dual-career academics, who represent a growing percentage of the academic workforce, and have higher rates among women than men 34 . One common consequence of dual-careers is the displacement of one career-in rank or geography-which largely disfavor women 43 . The phenomenon of the "two-body problem" was visualized by NASA astronomer Margaret Thaller thus: "As one body orbits the other, it tugs gravitationally on its partner, altering the original orbit. Then the second body does the same. In the end, there's this give-and-take of a dance, as each body influences the other, constantly changing its path. The bigger, more massive body moves the least… the smaller body has to careen all over the place, trying to find the right place to fit into the co-orbit (Cf 36). " Institutions should create mechanisms to avoid such "careening" and ensure stable positions for partnered scientists. Our findings suggest that women perform better when their partner is also an academic; therefore, spousal hiring tools may be important tools in achieving equity and minimizing the parenting penalty for women. For organizations willing to implement more gender-or parent-sensitive hiring policies, there is a potential to move beyond assessment of 'individuals' to one of 'couples' recognizing the benefit brought to an organization by dual-academic couples. We do not overlook the complexity of this change in policy and practice, including the added difficulty of dual-assessment for couples from separate disciplines and therefore faculties, departments, or functional units of the organization (e.g., management). A further option would be to offer one-off research grants to academic partners of new hires to compensate for any perceived loss of research time due to the move. The analysis of parental leaves for US respondents suggests that the longer the leave, the lower the productivity. These differences, however, were not observed for non-US respondents. This could be due to the heterogeneity of parental leave policies across nations or the lack of parental leave within the US 44 . While the leave penalty is gender neutral (in that it applies to both mothers and fathers) our study follows earlier analysis in showing that mothers are disproportionately more likely to take leave 28,44,45 and to take a longer leave than fathers. Fathers that do take parental leave are also more likely to have full-time working partners 45 , complying with the conceptualization of shared parenting used in this paper to illustrate the family/academic work dynamic of dual-academic households. This is conceptualization is likely to be even more acute in the immediate aftermath of the birth of a child, or when children are young. This creates an obvious tension: women place a high value on adequate leave policies when selecting an institution 28 ; however, their use of this leave still places them at a disadvantage in terms of productivity. 
Scientific labor and success for an academic career largely reflects the pipeline model, with adherence to "ideal worker norms 39 ". A recent study even referred to 'single ladies' as the ideal-type of academic during COVID-19, due to the absence of responsibilities that would otherwise impede progress 46 . Given that pipeline model presents barriers to re-entry for those who deviate from this norm or suspend their progression 9 , family friendly policies-such as stopping the clock for tenure and longer maternity leaves-that reinforce time away from research can lead to negative effects on careers. Leave policies, therefore, should not only include time away, but acknowledge the consequences of that leave for subsequent evaluations. As Morgan et al. observed, the productivity difference observed after childbirth would take mothers roughly five years of work to close 28 . It is no surprise, therefore, that taking a leave has a strong negative effect on the likelihood of promotion to full professor 44 . It does not benefit mothers to be allowed time away when they will be expected to compensate for that leave when they return. Research impact levels follow similar curves. Impact declines by engagement and is moderated by partner employment, with women in dual roles with an academic partner having higher levels of impact. This is likely a result of the ability to travel and be more visible in service and other professional engagement. Differences in impact, as measured by MNCS, are found to be non-significant for men and women and were in alignment with the time commitment expected from lead-, dual-and satellite parents. They were also not significant for academic couples, suggesting that, at least in part, couples adopt a strategy of cost-minimization to manage potential penalties. In particular, from the qualitative analysis, travel as a way of maintaining visibility in the discipline and maintain collaborations 10 , was found to be one of the most difficult academic responsibilities for parents to conduct, where a combination of structured-and unstructured-parenting strategies were necessary to minimize parenting penalty on scientific impact. Scientific organizations should take care to create equity in mobility programs and networking opportunities, to ensure that certain populations are not disadvantaged. Personal adaptation, however, places the burden upon the scholars and forces adaptations to meet the structural expectations of the "ideal worker". A stronger and more sustainable policy approach would be for institutions to reimagine scientific work to embrace a more diverse workforce. Working demands that assume disengaged parenting disadvantages women. Equitable evaluation requires that institutions consider how their criteria can be applied fairly across populations. These policies, however, should not anchor on the absence of women in science-that is, through longer leave programs, clock extensions, or virtual programming that allows distant participation. Rather, scientific organizations must imagine more creative ways in which women can be full participants in science. Diversity in science is not only a matter of justice, but is critical for a robust scientific ecosystem 11 . Overall caution should be applied to the findings, due to the possibility of non-response bias. 
With that caveat in mind, however, the results support intuition: they reveal an association, across men and women, between the Scientific Reports | (2022) 12:22300 | https://doi.org/10.1038/s41598-022-26258-z www.nature.com/scientificreports/ level of engagement in parenting activities and the academic outcomes under study. Men who serve in lead roles report similar penalties as women in those roles, but our sample indicates that women more frequently serve in lead parenting roles and are more engaged across time and tasks. Taking a period of parental leave is associated with higher levels of productivity; however, this association dissipates after 6 months for the US-sample, and at 12-months for the non-US sample. The findings provide reason for future research to overcome the sampling challenges faced in this study and develop research designs that allow for the measurement of the causal effect of parenting roles, not solely correlational analyses. The study also highlights the challenge of collecting data on global academics and offers exploratory, correlational evidence suggesting that parental engagement should be considered more extensively in explanations of gender differences in academic productivity (vis-à-vis the mere existence of children), and policies should factor these labor differentials into account. Survey questions asked about the number of children, the years of birth, if and how much maternity/paternity leave was taken following the birth of each child, as well as their engagement in activities that were related to providing care/decision-making for children. In addition, demographic information on children and partners of respondents were included, as well as contribution to childcare and academic careers. Further data processing was performed to exclude incomplete, incomprehensible and anonymous responses, reducing the sample size to 11,226, or 75.3% of eligible responses. It should also be noted that n = 16 responses identified as other gender, and because of the small sample size, were subsequently excluded from the statistical analysis. Further research is encouraged to further this paper's findings with a larger, and more representative sample size this non-binary gender group. This subset of 11,226 respondents was then matched with their corresponding publication records in the WOS over the 1980-2017 period. Such publication records were automatically disambiguated using the algorithm developed by Caron and van Eck 12 , which uses heuristics such as researchers' field of study, institution of affiliation, collaborators and cited references to automatically reconstruct researchers' publication records and distinguish papers written by two or more authors that share the same name. This algorithm has been shown to have high precision and recall 13 , and to produce the best results among the existing disambiguation algorithms 14 . There were, however, respondents to which we were unable to match a publication record, and researchers for which no citation information was found were also excluded for the analysis (N = 781 researchers). On the whole, the final sample analyzed contained 10,445 researchers, which represents 70.1% of all researchers who have finished the survey and met the criterion of being parents. This accounts for 0.70% of the set of sampled researchers, and for 0.40% of the entire population. Materials and methods The coverage of the sample varied by country (Table S6). 
English-speaking countries, such as the United States, United Kingdom, Canada, Australia, and New Zealand, accounted for more than 65% of all responses and are overrepresented. On the other hand, the majority of European countries exhibit a lower-than-average response rate, which is likely due to GDPR regulations. Asian countries also exhibit lower than average response date, which is-at least in the case of China-due to the access limitations of Qualtrics. Therefore, our results have to be interpreted in the light of this uneven coverage of countries, with particular concern regarding the heterogeneity of leave policies within the underrepresented countries. Quantitative analysis. The analytic set for multiple linear regression models includes 10,013 respondents after excluding 432 with zero MNCS. The outcome variables are the number of publications indexed in WOS between 1980 and 2017 in the productivity model. For the impact model, both the number of total citations (TCS) and mean normalized citation scores (MNCS) are used as proxies for research impact. MNCS was used to allow comparisons over time and disciplines. Natural log transformation was applied to normalize the indicators. We identified three parenting types based on respondents' caregiving situation (either current or when their children were dependents). Lead parents are the primary caregivers to their children, dual parents share equal parenting roles with either their partner or non-parental caregivers (e.g., grandparents, nannies), and satellite parents whose partner or non-parental others are primary caregivers for children. The lengths of parental leaves are divided into six categories: no leave, less than one month, at least one but less than three months, three to six months, more than six but less than 12 months, and 12 months or more. To test whether the relationship between parental engagement and research performance differs by gender and partnership status, we include a three-way interaction term between parenting type, partnership status, and gender. Partnership status is determined by the current occupation of respondents' partner to be in an academic or non-academic sector. Academic partner is strictly referred to respondents whose partner is employed in academic sectors for research and/or teaching. Partners employed in government, private, or other sectors, as well as retired and not employed, are all designated as non-academic partner. To make our analysis more inclusive, single parents are assumed to be lead parents and are retained in the sample. Considering the strong presence of the US respondents in our sample and the fact that US is the only advanced country without a nation-wide parental leave policy, we split the sample into the US-only and non-US subsets when modeling the effect of parental leave lengths on productivity. In addition to the above-mentioned three-way interaction between parenting type, partnership status, and gender, we also include an interaction term between Scientific Reports | (2022) 12:22300 | https://doi.org/10.1038/s41598-022-26258-z www.nature.com/scientificreports/ gender and parental leave lengths to examine whether the relationship between parental leave lengths and productivity differs by gender. We control for the number of children as well as respondents' academic age, highest degree earned, employment sector, and primary discipline. Academic age is calculated based on the first and last years of publication plus one. 
Given that the effect of age is often multiplicative, we include academic age in its polynomial form and center the variable at the mean to render the intercept more meaningful and interpretable. Both the highest degree and employment sector are binary variables, indicating whether respondents hold a doctoral degree and work in an academic sector for research and/or teaching. Number of children is a nominal variable with four possible values: 1, 2, 3, as well as 4 and more; so is the main discipline, which is regrouped from the original 14 disciplines into four: Arts and Humanities, Health Sciences, Natural Sciences, and Social Sciences. Qualitative analysis. A free text section was included at the end of the survey that encouraged participants to "Please feel free to add any additional comments you have regarding childcare and scientific labor, drawing upon your own experiences". In total, 5976 participants completed this section. To analyze this large number of responses, analysis was separated into a 2-stage coding approach. Stage 1 coding involved a random sample of 1000 comments inductively coded using NVivo ( Figure S5). Comments were coded into themes using a grounded theory-informed approach using a line-by-line analysis approach to identify main themes [15][16][17] . All inductive codes were then collapsed into 59 overarching axial codes, and then collapsed further to 8 codes (Table S18) which allowed the team to manually code the large number of remaining responses (n = 4976 comments) during Stage 2. Stage 2 coding was conducted in Excel to allow for swift and robust coding of the remaining responses. An additional 'Other' category was permitted to retain data variability and richness as well as to elicit further codes as they gained prominence. In addition, extensive memo-making and reflexive note-taking during Stage 2 ( Figure S6) was maintained to maximise the benefits of manual coding as well as to apply an appropriate level of sensitivity and nuance in its interpretation. This also allowed the quantitative data to speak and give voice to members of the research community. We drew exemplar quotes from the dominant themes to complement quantitative analyses at the end of Stage 2. Selection of these quotes ensured: fair representation of men and women's parental voices; global coverage; as well as representation of the theme reflected in the Stage 2 code definition. The quotes here should be read as illustrative, but not a comprehensive reporting of the qualitative component of the survey. Informed consent. Informed consent was obtained from all participants prior to data being collected. Data availability All data needed to evaluate the conclusions in the paper are present in paper and/or the Supplementary Materials.
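For readers who want to reproduce the general shape of the productivity model, the sketch below fits a log-linear regression with the three-way interaction and controls described above on randomly generated data. The variable names, codings, and the synthetic data are placeholders; this is not the authors' exact specification or dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "parenting":    rng.choice(["lead", "dual", "satellite"], n),
    "partner":      rng.choice(["academic", "non_academic"], n),
    "gender":       rng.choice(["man", "woman"], n),
    "n_children":   rng.choice(["1", "2", "3", "4plus"], n),
    "discipline":   rng.choice(["AH", "HS", "NS", "SS"], n),
    "academic_age": rng.integers(1, 40, n),
    "pubs":         rng.poisson(20, n) + 1,
})
df["log_pubs"] = np.log(df["pubs"])                            # log-transformed outcome
df["age_c"] = df["academic_age"] - df["academic_age"].mean()   # centred, as described above

model = smf.ols(
    "log_pubs ~ C(parenting) * C(partner) * C(gender)"
    " + age_c + I(age_c ** 2) + C(n_children) + C(discipline)",
    data=df,
).fit()
print(model.params.round(3).head(10))
```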
Zygosity Determination in Hairless Mice by PCR Based on Hr^hr Gene Analysis We analyzed the Hr gene of a hairless mouse strain of unknown origin (HR strain, http://animal.nibio.go.jp/e_hr.html) to determine whether the strain shares a mutation with other hairless strains, such as HRS/J and Skh:HR-1, both of which have an Hr^hr allele. Using PCR with multiple pairs of primers designed to amplify multiple overlapping regions covering the entire Hr gene, we found an insertion mutation in intron 6 of mutant Hr genes in HR mice. The DNA sequence flanking the mutation indicated that the mutation in HR mice was the same as that of Hr^hr in the HRS/J strain. Based on the sequence, we developed a genotyping method using PCR to determine zygosities. Three primers were designed: S776 (GGTCTCGCTGGTCCTTGA), S607 (TCTGGAACCAGAGTGACAGACAGCTA), and R850 (TGGGCCACCATGGCCAGATTTAACACA). The S776 and R850 primers detected the Hr^hr allele (275-bp amplicon), and S607 and R850 identified the wild-type Hr allele (244-bp amplicon). Applying PCR using these three primers, we confirmed that it is possible to differentiate among homozygous Hr^hr (longer amplicons only), homozygous wild-type Hr (shorter amplicons only), and heterozygous (both amplicons) in HR and Hos:HR-1 mice. Our genomic analysis indicated that the HR, HRS/J, and Hos:HR-1 strains, and possibly the Skh:HR-1 (an ancestor of Hos:HR-1) strain, share the same Hr^hr gene mutation. Our genotyping method will facilitate further research using hairless mice, and especially immature mice, because pups can be genotyped before their phenotype (hair coat loss) appears at about 2 weeks of age. Introduction Many hairless mouse strains, such as HRS/J and Skh:HR-1, are often used in studies of skin, cancer, and immunology by Benavides et al. [3] and Sundberg [12]. At our institute, we have been maintaining a hairless mouse strain of unknown origin called HR (http://animal.nibio.go.jp/e_hr.html). It was introduced from a university in California (there is no precise university name in our records) to Yokohama City University in 1964. The strain was then introduced in 1965 to the Institute of Medical Science of the University of Tokyo, where a mutated Hairless gene from this strain was transferred into a BALB/c background. The strain was introduced to our institute (National Institutes of Health, at the time of introduction) in 1981. The HRS/J strain was established in 1964 by inbreeding mice obtained by crossing offspring of the hairless mice first found in London [4] with BALB/c mice at the Jackson Laboratory (http://jaxmice.jax.org/strain/000673.html). In addition, the Hos:HR-1 strain was established in 1987 at Hoshino Laboratory Animals Inc. by inbreeding the Skh:HR-1 outbred strain, which had been established at Temple University by crossing the CBA strain (http://www.hoshino-lab-animals.co.jp/english/products/hr1-en.html) with hairless mice of unknown origin from Sandra Biological Supply. It remains unknown whether HR mice carry the same mutation as other hairless strains, such as HRS/J and Skh:HR-1 (Hos:HR-1), even though the three strains show the same phenotype. The hairless mutation was first found in a mouse in 1924 [4]. This mutation is an autosomal recessive mutation (Hr^hr) in the Hr gene [11]. Murine Hr localizes to the 70-Mb position of mouse chromosome 14, and contains 19 exons [5]. The hr mutation is caused by an insertion of the murine leukemia virus into intron 6 [11]. Both HRS/J and Skh:HR-1 (Hos:HR-1) carry this mutation [10].
Homozygous mutants (Hrhr/Hrhr) show normal development of the first hair coat (first hair cycle). Starting at 2 weeks of age, they lose their hair coat rapidly and completely due to an abnormal second hair cycle [4,13]. At weaning (~3 weeks of age), they are completely hairless. In general, females homozygous for Hrhr often fail to nurse their litters due to abnormal lactation (except Hos:HR-1 homozygous females, which show normal lactation; thus, this low nursing activity is thought to depend on the genetic background, not the Hr mutation itself). Therefore, most hairless strains have been maintained by mating heterozygous females (normal hair coat) and homozygous males (no hair coat). In this case, pups are a mixture of heterozygous and homozygous mutants. Homozygous mutants cannot be distinguished from heterozygous ones based on appearance alone because both have coats before 2 weeks of age. Hence, a genotyping method is needed if younger mice are to be used. We analyzed the Hr gene of the HR strain maintained at our institute to determine if its Hr mutation (tentatively called "Hrx") is the same as that (Hrhr) of other hairless strains (such as HRS/J). In addition, we developed a PCR method, based on the sequence information of HR mice, to determine the genotypes of pups before the phenotype (hair coat loss) appears (~2 weeks of age). Hairless mice At our institute, HR mice (nbio#: nbio003) have been maintained by crossing heterozygous females (Hr/Hrx) and homozygous males (Hrx/Hrx). Wild-type HR mice (Hr/Hr), which had no mutated Hr (Hrx) alleles, were produced by crossing heterozygous females and males. Hos:HR-1 mice homozygous for hairless genes were purchased from Hoshino Laboratory Animals, Inc. (Bando, Japan) through Japan SLC, Inc. (Hamamatsu, Japan) and used for genotyping tests. All mice were housed under specific pathogen-free conditions with food (CMF, Oriental Yeast Co., Ltd., Tokyo, Japan) and water provided ad libitum. All animal experiments were conducted in accordance with the guidelines for animal experiments of the National Institute of Infectious Diseases, Tokyo, Japan, and the National Institute of Biomedical Innovation, Osaka, Japan. Determination of DNA sequences flanking the insertion site The genomic region containing the insertion mutation site was amplified by PCR from genomic DNA of homozygous HR mice, using a set of two primers, mhr-int6-F514 and mhr-int6-R806 (see Table 1), and KOD-FX Neo DNA polymerase under the following thermal conditions: 95°C for 2 min; 40 cycles of 98.5°C for 10 s and 68°C for 3 min; and then 68°C for 5 min. PCR products, approximately 13 kbp in length, were gel-purified on a 1% agarose gel, and both the 5' and 3' ends were sequenced using an Applied Biosystems 3730xl DNA Analyzer (Life Technologies). The obtained sequence was compared to genome databases at the NCBI using a BLAST search. Genotyping PCR Primers for genotyping Hr alleles were designed according to the sequence information of the alleles (Table 1 for primer sequences; Fig. 3a for primer positions). All three primers were used simultaneously to determine the genotypes of HR and Hos:HR-1 mice. PCRs were conducted using a Hybaid Sprint thermal cycler and HotStarTaq DNA polymerase under the following thermal cycling conditions: 94°C for 15 min; 40 cycles of 94°C for 10 s, 60°C for 10 s, and 72°C for 30 s; and then 72°C for 5 min. PCR products were separated in 2% agarose gels (E-Gel EX, G4020-02) and photographed with a laser scanner.
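A minimal sketch of the band-interpretation logic behind this three-primer assay follows, assuming amplicon sizes have already been read from the gel. The 275-bp and 244-bp product sizes come from the text; the size tolerance and all function names are illustrative assumptions, not part of the published protocol.

```python
# Minimal sketch of band calling for the three-primer genotyping PCR.
# Expected sizes come from the text; the tolerance is an assumed gel-sizing margin.

MUTANT_BP = 275      # S776 + R850 amplicon (Hrhr allele)
WILD_TYPE_BP = 244   # S607 + R850 amplicon (wild-type Hr allele)
TOLERANCE_BP = 10    # assumed sizing tolerance on an agarose gel

def _matches(size: int, expected: int) -> bool:
    return abs(size - expected) <= TOLERANCE_BP

def call_zygosity(band_sizes: list[int]) -> str:
    """Map the amplicon sizes observed for one mouse to a genotype call."""
    has_mutant = any(_matches(s, MUTANT_BP) for s in band_sizes)
    has_wild = any(_matches(s, WILD_TYPE_BP) for s in band_sizes)
    if has_mutant and has_wild:
        return "heterozygous (Hr/Hrhr)"
    if has_mutant:
        return "homozygous mutant (Hrhr/Hrhr)"
    if has_wild:
        return "homozygous wild type (Hr/Hr)"
    return "no call (possible PCR failure)"

print(call_zygosity([275]))        # hairless pup
print(call_zygosity([244, 275]))   # carrier
```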
Sequencing the mutated region in HR mice Long PCR for amplifying genomic regions containing the insertional mutation (Fig. 1 for the primer positions) produced an ~13-kb-long amplicon (Fig. 2). Both the 5' and 3' ends of the amplicon were sequenced. Analysis of both sequences using a BLAST search revealed that HR mice carried the same insertional mutation as HRS/J mice; i.e., Hrx turned out to be Hrhr. (Fig. 2 legend: Long PCR with primers mhr-int6-F514 and mhr-int6-R806 (positions and sequences are shown in Fig. 1 and Table 1, respectively) produced an ~13-kb-long amplicon containing the insertion mutation in homozygous HR mice. Sequencing and BLAST searches indicated that the HR mice share the same insertion mutation as HRS/J mice. The sequences of the 5' and 3' regions flanking the insertion mutation of HRS/J mice were retrieved from GenBank (accession numbers M20235 and M20236, respectively).) Genotyping of Hr alleles in HR and Hos:HR-1 mice Primers for genotyping Hr alleles were designed according to their sequence information (see Fig. 3a for primer positions). All three primers were used simultaneously for genotyping PCR. The zygosities of HR and Hos:HR-1 mice were determined using amplicons from both mutant and wild-type alleles with the following primer sets: S776 and R850 (275 bp, longer bands), and S607 and R850 (244 bp, shorter bands), respectively (Fig. 3B). Discussion Our genomic analysis revealed that the HR mice at our institute share the same hairless mutation (Hrhr) as HRS/J and Skh:HR-1 (an ancestor of Hos:HR-1) mice. This indicates that the HR strain is a descendant of the original hairless mice found in London in 1924 [4]. This possibility was also suggested by the fact that the phenotype of HR mice is identical to that of other hairless mice carrying Hrhr alleles; our genomic analyses confirmed it. Although other mutations of the Hr gene, such as rhino (Hrrh) [8] and bald (Hrba) [6], lead to hairlessness, their phenotypes differ from that of HR mice. Rhino mice become completely hairless by 35 days of age, like HRS/J mice, but older rhino mice have a different phenotype: their skin becomes progressively looser and redundant, forming folds, flaps, and ridges [12]. Rhino alleles contain various types of mutations, different from Hrhr alleles [1,2]; these mutations result in a truncation of hairless proteins. On the other hand, bald mice are phenotypically intermediate between the hairless and rhino strains [6]. The similarities between the Hrhr and Hrba alleles are unclear because the bald gene has not yet been sequenced. Thus, HR mice are genetically and phenotypically hairless mice that carry Hrhr. Based on the genomic sequence around the insertional mutation, we developed a PCR genotyping method. The method was confirmed to be useful for zygosity checks of both the HR and Hos:HR-1 strains, and possibly of more strains carrying Hrhr alleles. Our three primers flanking the insertional mutation in the Hrhr gene distinguished the zygosities of hairless strains in a single PCR assay. Flanking primer methods [7], often used for the genotyping of transgenes [9], are simple and precise for zygosity determination. Other methods, such as Southern blots and quantitative real-time PCR, are also used for zygosity checks, but are challenging in practice. Both methods use quantitative tests, the results of which are often difficult to compare precisely.
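The comparison of the amplicon ends against public databases could, for example, be scripted with Biopython's interface to the NCBI BLAST web service, as sketched below. The placeholder sequence and the choice of the nt database are assumptions for illustration; the text only states that a BLAST search at NCBI was used, without specifying tooling.

```python
# Sketch of a scripted BLAST comparison of a flanking sequence, assuming
# Biopython is installed. The sequence below is a placeholder, not real data.

from Bio.Blast import NCBIWWW, NCBIXML

flanking_seq = "NNNN..."  # 5'- or 3'-end read of the ~13-kb amplicon (placeholder)

# Submit the query to the NCBI web BLAST service (blastn against the nt database).
result_handle = NCBIWWW.qblast("blastn", "nt", flanking_seq)
record = NCBIXML.read(result_handle)

# Report the best-scoring database hits, e.g. the HRS/J flanking sequences
# deposited under accession numbers M20235 and M20236.
for alignment in record.alignments[:5]:
    best_hsp = alignment.hsps[0]
    print(alignment.title, best_hsp.expect)
```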
In contrast, the flanking primer method is based on qualitative tests (presence or absence of target amplicons) and is easy to perform, with no need for complicated procedures such as hybridization of radioactive probes, precise adjustment of template concentration, and so forth. Our primers are different from those of Schaffer et al. [10]. Their primers targeted similar positions but had a lower Tm than our primers. We believe that our primers have an advantage because a higher Tm often leads to better results. In addition, PCR results are highly dependent on thermal control, such as block-temperature control (i.e., based on the temperature of the blocks, not that within the PCR tubes) and active-tube control (i.e., based on the temperature within the PCR tubes). PCR using a thermal cycler with block-temperature control needs a longer reaction time than PCR with active-tube control. In the present study, we used 10 s for the denaturation and annealing times in a thermal cycler with active-tube control. If a thermal cycler using block-temperature control is used, a longer period, e.g., 30 s, should be used. Immature hairless mice can be precisely genotyped using our PCR method before their phenotype (hair coat loss) appears. This would enable the use of such mice for research that requires knowledge of precise zygosities. In summary, the HR strain at our institute carries the same Hrhr alleles as HRS/J and Skh:HR-1 (Hos:HR-1). Our genotyping method could be used for zygosity checks of various hairless mouse strains that carry Hrhr alleles. This method will facilitate the study of hairless mice, and especially immature mice, the zygosities of which cannot be determined based on appearance alone.
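The cycling conditions and the block-control caveat discussed above can be captured in a small configuration helper, sketched below. The data layout is purely illustrative; only the temperatures and times stated in the text are used, and only the denaturation and annealing steps are lengthened for block-controlled cyclers.

```python
# Illustrative encoding of the genotyping PCR program, with the longer
# denaturation/annealing steps suggested for block-temperature-controlled cyclers.

def genotyping_program(active_tube_control: bool = True) -> dict:
    step_s = 10 if active_tube_control else 30  # 10 s (active-tube) vs e.g. 30 s (block)
    return {
        "initial_denaturation": ("94C", "15 min"),
        "cycles": 40,
        "per_cycle": [("94C", f"{step_s} s"), ("60C", f"{step_s} s"), ("72C", "30 s")],
        "final_extension": ("72C", "5 min"),
    }

print(genotyping_program(active_tube_control=False))
```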
2016-05-04T20:20:58.661Z
2013-08-01T00:00:00.000
{ "year": 2013, "sha1": "b08d341a03430bb91a941b57731678cc29dd2a5e", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/expanim/62/3/62_13-0014/_pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "b08d341a03430bb91a941b57731678cc29dd2a5e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
253211593
pes2o/s2orc
v3-fos-license
Effective, Long-Term, Neutrophil Depletion Using a Murinized Anti-Ly-6G 1A8 Antibody Neutrophils are crucial innate immune cells but also play key roles in various diseases, such as cancer, where they can perform both pro- and anti-tumorigenic functions. To study the function of neutrophils in vivo, these cells are often depleted using Ly-6G or Gr-1 depleting antibodies or genetic “knockout” models. However, these methods have several limitations, being only partially effective, effective for a short term, and lacking specificity or the ability to conditionally deplete neutrophils. Here, we describe the use of a novel murinized Ly-6G (1A8) antibody. The murinized Ly-6G antibody is of the mouse IgG2a isotype, which is the only isotype that can bind all murine Fcγ receptors and C1q and is, therefore, able to activate antibody-dependent cellular cytotoxicity (ADCC), antibody-dependent phagocytosis (ADCP) and complement-dependent cytotoxicity (CDC) pathways. We show that this mouse-Ly-6G antibody shows efficient, long-term, and near-complete (>90%) neutrophil depletion in the peripheral blood of C57Bl6/J, Balb/c, NXG and SCID mice for up to at least four weeks, using a standardized neutrophil depletion strategy. In addition, we show that neutrophils are efficiently depleted in the blood and tumor tissue of IMR32 tumor-bearing SCID mice, analyzed six weeks after the start of the treatment. Introduction Neutrophils are the most abundant immune cells in the human body, comprising 50-70% of the white blood cell population in the circulation. They are essential for innate immune reactions but also play key roles in tumorigenesis, where they can perform both pro-and anti-tumorigenic functions [1][2][3][4]. To study the in vivo function of neutrophils, Ly-6G (clone 1A8) or Gr-1 (clone RB6-8C5) depleting antibodies are frequently used. Over the years, it has become clear that both these antibodies show efficacy problems, with depletion being only partially effective, transient, or lacking specificity [5][6][7]. The misinterpretation of results is observed in many studies due to the usage of the same antibody clone for both depletion and detection, thereby blocking the epitope for staining, resulting in false-negative results. In addition, whereas Ly-6G is only present on neutrophils, Ly-6C, the other receptor recognized by the Gr-1 antibody, can also be found on subsets of monocytes, macrophages, dendritic cells, and lymphocytes [8][9][10]. Besides the differences in target specificity, Ly-6G and Gr-1 antibodies differ in isotype and efficacy. While the Gr-1 antibody is a rat IgG2b, described to work via the fast-acting complement-dependent cytotoxicity (CDC) pathway [11,12], Ly-6G is a rat IgG2a antibody, shown to mediate neutrophil killing through "slow" Fc-dependent antibody-dependent cellular cytotoxicity (ADCC) and antibody-dependent phagocytosis (ADCP) by monocytes and macrophages, making the neutrophil depletion less efficient [13]. Also, since both depletion antibodies are of rat origin, mouse anti-rat antibodies produced by the mice play a role in reduced efficacy, where increased clearance of the injected depletion antibodies is observed after one week of treatment [14]. Mice Mice were maintained in the animal facility of the University of Utrecht. 
Experiments were conducted using C57Bl/6J (C57Bl/6JRj), Balb/c (Balb/cByJRj), NXG (NOD.Prkdc scid Il2rg tm1 /Rj), SCID (NOD.CB17-Prkdc scid/scid /Rj), and human FcαRI (CD89) transgenic SCID mice (all housed and bred at Janvier Labs, Paris, France) [21], or C57Bl/6J FcRγ −/− (C57Bl/6JRj FcRγ-chain knockout) mice (housed and bred at the University of Utrecht) [22]. Mice were housed in groups under a 12:12 light-dark cycle, with food and water available ad libitum. Mice were randomized based on age, and both treatment and analysis were performed blind. Upon transfer to Utrecht, mice were acclimatized for at least 1 week prior to the start of the experiment. All experiments were performed in accordance with international guidelines and approved by the National Central Authority for Scientific Procedures on Animals (CCD) and the local experimental animal welfare body (AVD115002016410). Neutrophil Depletion Mice were injected intra-peritoneally (i.p.) with 25, 50, or 100 µg mouse-Ly-6G, 50 or 100 µg rat-Ly-6G, or solvent control (PBS) 3 times a week. The rat-Ly-6G (1A8) hybridoma has been sequenced by Absolute Antibody and transformed into a commercially available recombinant mouse antibody (see Table 1 for more information on the depletion antibodies used). To evaluate neutrophil depletion, blood was drawn via cheek puncture before antibody injection and collected in Lithium-Heparin tubes (Sarstedt, #20.1345, Etten-Leur, the Netherlands). Blood was stored on ice to prevent internalization of CD115, a marker used for flow cytometry detection of monocytes. Flow Cytometry Myeloid cell composition and neutrophil depletion were assessed using the antibodies used in Table 2. To detect other leukocytes present in the blood, we used the antibodies shown in Table 3. For blood and tumor samples from IMR32 tumor-bearing mice, the antibodies depicted in Table 4 were used. Additionally, TO-PRO-3 (Thermo Fisher, #10710194, Bleiswijk, the Netherlands) was added to tumor samples in a 1:50000 dilution to exclude dead cells. Antibodies were diluted in FACS buffer (PBS + 0.1%BSA + 0.1% sodium azide) and added to mouse blood in a 1:1 ratio (15 µL each). After 30-45-min incubation at 4 • C, 1 mL 1× FACS lysis solution (BD, #349202, 10× diluted in mQ) was added to wash away unbound antibodies and lyse the erythrocytes. Cells were lysed for 5-8 min at room temperature, followed by a centrifugation step (1800 rpm, 5 min, 4 • C). Cells were washed in 1 mL FACS buffer, centrifuged (1800 rpm, 5 min, 4 • C), and resuspended in 150 µL FACS buffer containing microsphere latex beads (Thermo Fisher, #11550696, Bleiswijk, the Netherlands, 3000×). For tumor samples, 2.5 × 10 6 tumor cells were stained in a 50 µL antibody mix for 30-45-min at 4 • C. After washing, cells were fixed in 4% paraformaldehyde. Flow cytometry on blood samples was performed using a BD Canto II flow cytometer, while a BD LSR Fortessa was used to measure tumor samples. Data analysis was done using FlowJo (TreeStar, Ashland, OR, USA). Neutrophil Attraction Model in IMR32 Tumor Bearing Mice SCID mice were injected subcutaneously (s.c.) with 2.5 × 10 6 IMR32 cells in a 1:2 mix of IMR32 cells in PBS and Vitrogel Hydrogel Matrix (Tebu Bio, #306VHM01, Heerhugowaard, the Netherlands). Tumor outgrowth was measured using a caliper (length × width × height). Twenty-eight days after the start of the experiment, mice were randomized into treatment groups based on tumor size and age, after which mice were treated i.p. 
with solvent control (PBS) or IgA ch14.18 antibodies in combination with an IgG1 PGLALA SIRPα-D1 fusion protein (produced as described before) to attract intratumoral neutrophils [23][24][25]. PBS and 25 mg/kg IgA ch14.18 antibodies were administered 3 times a week, while the SIRPα-D1 fusion protein was given every 9 days at a dose of 30 mg/kg. To induce neutrophil depletion, mice were treated with 100 µg mouse-Ly-6G 3 times a week. Blood was sampled at days 35, 42, 49, 56, 63, and 70 after tumor cell injection (days 7, 14, 21, 28, 35, and 42 after the start of the treatments) as described above and analyzed by flow cytometry. Neutrophil infiltration in the tumor microenvironment was evaluated on day 70. Mouse tumors were carefully excised and collected in ice-cold PBS. Tumors were cut into smaller pieces and digested using the mouse tumor dissociation kit from Miltenyi (#130-096-730, Leiden, the Netherlands). Up to 1 g of tumor tissue was transferred to C tubes (Miltenyi, #130-096-334, Leiden, the Netherlands) containing enzyme mix (RPMI culture medium, 100 µL Enzyme D, 50 µL Enzyme R, and 10 µL Enzyme A), and the 37C_m_TDK_1 program was run on a gentleMACS Octo Dissociator. After dissociation, cells were put through a 70 µm cell strainer in medium containing 10% FCS (Sigma), washed with FACS buffer, and stained for flow cytometry. Quantification and Statistical Analysis Data are presented as mean ± SEM. Comparison of multiple groups was performed using one-way ANOVA with Bonferroni correction, repeated measures one-way ANOVA with Bonferroni correction, or two-way ANOVA with Bonferroni correction. Statistical analyses were performed using GraphPad Prism 9.3.0 (GraphPad Software Inc., San Diego, CA, USA). A p-value < 0.05 was considered significant. Mouse-Ly-6G Depletes Neutrophils More Efficiently Than Rat-Ly-6G in C57Bl6/J Mice To investigate whether the mouse-Ly-6G antibody was able to efficiently deplete neutrophils, we decided to use old (>20 weeks) C57Bl6/J mice, since these mice show high neutrophil turnover and are the most refractory to rat-Ly-6G-mediated neutrophil depletion [14]. The 22-25-week-old mice were injected three times a week with two different concentrations (25 or 100 µg) of the mouse-Ly-6G antibody, 50 µg rat-Ly-6G 1A8 antibody, or solvent control (PBS) (Figure 1a). The eight mice per group were randomly divided into two subgroups to be able to draw blood three times a week (four mice per time point) and subsequently perform flow cytometric analysis of the leukocyte composition. First, hematopoietic cells were selected using CD45, followed by the exclusion of Siglec-F-positive eosinophils and CD115-positive monocytes. To detect the remaining neutrophils, we used the Gr-1 RB6-8C5 antibody clone (to prevent possible epitope masking of Ly-6G) in combination with CD11b staining and the specific forward and sideward scatter properties of neutrophils (Figure 1b). Analysis of the number of neutrophils per 5000 microsphere latex beads indicated that the rat-Ly-6G antibody did not result in efficient neutrophil depletion at most of the time points analyzed (Figures 1c and S1b), as was described previously [5,7,14]. In contrast, the lowest concentration of mouse-Ly-6G (25 µg) significantly depleted neutrophils at days 2, 4, 7, 11, and 22 after the start of the experiment, indicating that 25 µg mouse-Ly-6G can efficiently deplete neutrophils for up to one week, after which it is insufficient to compete with neutrophil production (Figures 1d and S1c).
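The per-5000-beads normalization used throughout these results can be illustrated with a short sketch of the gating arithmetic, assuming the flow cytometry events have been exported as a table with pre-computed boolean gate columns. All column names and the helper function are assumptions added for illustration; the published analysis was performed by manual gating in FlowJo.

```python
# Sketch of the gating and bead-normalisation logic described above.
# 'events' holds one row per recorded event, with boolean columns per gate.

import pandas as pd

def neutrophils_per_5000_beads(events: pd.DataFrame) -> float:
    """Count CD45+ Siglec-F- CD115- SSC-high Gr-1+ CD11b+ events per 5000 beads."""
    beads = events["is_bead"].sum()
    neutrophils = events[
        ~events["is_bead"]
        & events["CD45_pos"]
        & ~events["SiglecF_pos"]   # exclude eosinophils
        & ~events["CD115_pos"]     # exclude monocytes
        & events["SSC_high"]
        & events["Gr1_pos"]
        & events["CD11b_pos"]
    ]
    return len(neutrophils) / beads * 5000 if beads else float("nan")
```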
Increasing the concentration of the mouse-Ly-6G antibody to 100 µg per mouse resulted in an almost complete absence of neutrophils in the peripheral blood at all time points tested, indicating efficient long-term depletion for up to four weeks (Figures 1e and S1d,e). (Figure 1 legend, fragment: blood was drawn via cheek puncture before each antibody injection; (b) FACS dot plots of a representative PBS-treated mouse showing the gating strategy: latex beads and CD45+ cells were selected, Siglec-F- and CD115-positive cells were excluded, and neutrophils were identified by their SSC/FSC profile and confirmed by Gr-1 and CD11b positivity; (c-e) longitudinal counts of CD45+ Siglec-F- CD115- SSC-high Gr-1+ CD11b+ neutrophils per 5000 beads (n = 4 mice per subgroup), showing (c) inefficient depletion with 50 µg rat-Ly-6G, (d) efficient short-term depletion with 25 µg mouse-Ly-6G during the first week, and (e) efficient long-term depletion with 100 µg mouse-Ly-6G; data are mean with SEM; one-way ANOVA with Bonferroni correction, * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001.) 100 µg Mouse-Ly-6G Efficiently Depletes Neutrophils in C57Bl6/J, Balb/c, NXG, and SCID Mice The promising results in C57Bl6/J mice tempted us to investigate whether neutrophils could also be depleted in other mouse strains, e.g., Balb/c, NXG, and SCID. Where C57Bl6/J and Balb/c mice are immunocompetent, SCID and NXG mice are immunodeficient.
SCID mice have a B- and T-cell deficiency, and NXG mice lack mature B-, T-, and NK-cells, show defective dendritic cells and macrophages due to impaired IL2R signaling, and lack hemolytic complement because of a 2-base-pair deletion in the C5 structural gene [26,27]. Mice (n = 5 per strain) were injected three times a week with 100 µg mouse-Ly-6G antibody, and blood was analyzed once per week (Figure 2a). Flow cytometry analysis showed significant and almost complete depletion of neutrophils in all mouse strains, even the immunodeficient NXG mice, which cannot deplete antibody-coated cells via CDC (Figures 2b and S2a-e). (Figure 2 legend, fragment: (b,c) longitudinal counts of CD45+ Siglec-F- CD115- SSC-high Gr-1+ CD11b+ neutrophils per 5000 latex beads in the peripheral blood (n = 5 mice per group), showing (b) significant, almost complete neutrophil depletion in all mouse strains tested (C57Bl6/J, Balb/c, NXG, and SCID) with 100 µg mouse-Ly-6G, and (c) significantly better depletion in C57Bl6/J mice treated with 100 µg mouse-Ly-6G than with 100 µg rat-Ly-6G; data are mean with SEM; one-way ANOVA with Bonferroni correction (repeated measures for panel c), * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001.) To investigate whether the mouse-Ly-6G antibody was indeed outperforming the rat-Ly-6G antibody, we injected C57Bl6/J mice with 50 µg mouse-Ly-6G (compare to Figure 1c), 100 µg mouse-Ly-6G, or 100 µg rat-Ly-6G. As shown in Figure 2c, mice injected with 100 µg rat-Ly-6G showed significantly reduced numbers of neutrophils in the peripheral blood. However, the reduction was only partial and significantly less (p = 0.04) than when the same concentration of mouse-Ly-6G antibody was used. Of note, these mice were 12-14 weeks old, thereby ten weeks younger than the mice used in Figure 1 and showing lower neutrophil turnover, which explains why these mice were not completely refractory to rat-Ly-6G treatment [14]. Injecting mice with 50 µg mouse-Ly-6G did reduce the number of neutrophils in the peripheral blood, but the reduction was incomplete after one week, indicating that 100 µg mouse-Ly-6G is needed to efficiently deplete the excess neutrophils produced upon neutropenia. Neutrophils Can Efficiently Be Depleted Intratumorally in IMR32 Tumor-Bearing Mice Multiple studies highlight the inefficiency of intratumoral neutrophil depletion using the rat-Ly-6G antibody [6,28]. To investigate whether the mouse-Ly-6G antibody is capable of efficient intratumoral neutrophil depletion, we injected SCID mice s.c. with 2.5 × 10 6 IMR32 cells in a 1:2 mix with Vitrogel Hydrogel Matrix. To attract neutrophils to the tumor site, mice were treated with a combination of IgA ch14.18 antibodies and an IgG1 PGLALA SIRPα-D1 fusion protein [29]. Other treatment groups consisted of vehicle control (PBS) and IgA/SIRPα-D1 in combination with 100 µg mouse-Ly-6G (Figure 3a). Flow cytometric analysis of the blood showed complete neutrophil depletion in the IgA/SIRPα-D1/mouse-Ly-6G treated mice until the end of the experiment, six weeks after the start of the treatment (Figure 3b). As expected, flow cytometric analysis of the tumor indicated a significant increase in the number of neutrophils when mice were treated with IgA/SIRPα-D1 (Figure 3c). Some neutrophils could be detected in the tumors of PBS-treated mice.
However, in the IgA/SIRPα-D1/mouse-Ly-6G treated mice, neutrophils were completely absent, indicating that both the neutrophils attracted by the IgA/SIRPα-D1 treatment and the neutrophils normally present in the tumors were ablated (Figure 3c). Neutrophil Depletion with Mouse-Ly-6G Is Not Solely Complement- or Fc Receptor-Mediated Neutrophil depletion with the rat-Ly-6G antibody has been shown to be dependent on mononuclear phagocytosis but is facilitated by complement [30]. Since neutrophils were also efficiently depleted in NXG mice (Figure 2b), which, among other defects, did not show complement activity, the mouse-Ly-6G antibody must have been able to exert its function independent of complement [27]. Studies using allogeneic, species-matched antibodies concluded that antibody-mediated cell depletion is complement-independent and mediated by monocytes and macrophages through the cooperation of multiple Fc receptors [31][32][33]. Hinting towards a role of these cells in neutrophil depletion was the observation that some of the 100 µg mouse-Ly-6G-treated animals showed increased numbers of CD115+ monocytes (both Ly-6C-positive and -negative subsets) in the bloodstream, while this was not observed in rat-Ly-6G-treated mice or with other leukocyte subsets (Figure S3a,b). To investigate whether mouse-Ly-6G antibody-mediated cell depletion was mediated through the cooperation of multiple Fc receptors, we used gamma chain "knockout" mice (C57Bl/6J FcRγ −/−; Figure 4a), which lack the Fc receptor gamma chain essential for, e.g., normal Fc receptor signaling, ADCC, and ADCP [22,34]. Treating FcRγ −/− mice with 100 µg mouse-Ly-6G resulted in a significant and almost complete depletion of neutrophils (Figure 4b), indicating that ADCC/ADCP was not the only mechanism of action. In line with this, the design of the mouse-Ly-6G, using an IgG2a isotype, suggests that this antibody can activate both the ADCC/ADCP and CDC pathways, thereby possibly explaining why both NXG mice (no CDC) and FcRγ −/− mice (no ADCC and ADCP) showed efficient neutrophil depletion upon mouse-Ly-6G treatment and emphasizing the broader applicability of this antibody [19,20]. Discussion In this study, we evaluated the efficacy and specificity of the murinized Ly-6G 1A8 antibody as a tool to efficiently deplete neutrophils. By injecting C57Bl6/J, Balb/c, NXG, and SCID mice three times a week i.p. with 100 µg of the mouse-Ly-6G antibody, we showed efficient (>90%) neutrophil depletion for up to at least four weeks, while IMR32 tumor-bearing SCID mice showed neutrophil depletion for up to six weeks.
The mouse-Ly-6G outperformed the originally described rat-Ly-6G antibody and showed almost complete neutrophil depletion even in old C57Bl6/J mice, which are resistant to rat-Ly-6G antibody treatment [14]. Problems with the efficacy of the rat-Ly-6G antibody have clouded the results of many studies. Epitope masking by the injected depletion antibody has resulted in numerous false-negative results due to the usage of the same antibody clone for staining. In addition, the neutropenia achieved by using Ly-6G-depletion antibodies resulted in increased differentiation of myeloid progenitors present in the bone marrow, thereby producing more mature neutrophils and releasing them into the bloodstream [35]. The majority of cells that were present after Ly-6G treatment were shown to be newly made neutrophils, indicating that rat-Ly-6G depletion was not able to compensate for the increased neutrophil production [14]. In addition, the newly released neutrophils express lower levels of Ly-6G, making it more difficult to deplete and detect them [14]. Here, we describe a method to efficiently detect neutrophils by flow cytometry, not solely relying on Ly-6G expression. By excluding Siglec-F-positive eosinophils and CD115-positive monocytes, neutrophils could be easily detected by their high sideward scatter (SSC) properties. Confirmation of the neutrophil phenotype with Gr-1 and CD11b expression shows >95% purity. The Gr-1 antibody RB6-8C5 has shown fewer efficacy problems because it recognizes more antigens on the surface of neutrophils (Ly-6G and Ly-6C), while also being a rat IgG2b antibody functioning via the fast-acting CDC pathway, achieving ~90% reduction of peripheral blood neutrophils [14]. However, Gr-1 also targets Ly-6C-positive subsets of monocytes, macrophages, dendritic cells, and lymphocytes, making it impossible, although often done, to draw conclusions about the role of neutrophils alone [8][9][10]. With the mouse-Ly-6G antibody, the best of both worlds is combined, since the depletion efficacy mimics that of the Gr-1 RB6-8C5 antibody, while only Ly-6G-positive neutrophils are depleted, keeping the other leukocyte populations intact.
In addition, by virtue of being a mouse IgG2a antibody, the mouse-Ly-6G can activate ADCC, ADCP, and CDC, therefore also being effective in models lacking one of these pathways. Furthermore, as a mouse antibody, the production of anti-rat antibodies was prevented, circumventing the problem of reduced efficacy due to increased clearance of the injected depletion antibodies. Due to this, it was possible to efficiently deplete neutrophils for at least four weeks using the same treatment regimen and emphasizing the importance of antibody species and isotypes. A reliable, transient, long-term neutrophil depletion model is essential to study various functions of neutrophils, e.g., their role in tumor development and progression. In addition, neutrophils have been shown to play major roles in autoimmune diseases and IgA-mediated immunotherapy [36][37][38][39]. To really grasp the role neutrophils play in these diseases and therapies, it is important to be able to efficiently and reproducibly deplete neutrophils for the complete duration of the experiment. Using the treatment regimen described above, it is possible to efficiently deplete neutrophils in the peripheral blood for at least four weeks in C57Bl6/J, Balb/c, NXG, and SCID mice. IMR32 tumor-bearing SCID mice showed efficient neutrophil depletion in blood and tumor tissue six weeks after the start of the treatment and ten weeks after the injection of the IMR32 tumor cells, making it possible to, e.g., study the role of neutrophils in tumor outgrowth, even after the tumor cells have engrafted. Therefore, the mouse-Ly-6G antibody is the perfect tool to conditionally, efficiently, and specifically deplete neutrophils in blood and tissue for a long period of time. Supplementary Materials: The following supporting information can be downloaded at: https:// www.mdpi.com/article/10.3390/cells11213406/s1, Figure S1: Analysis of the number of neutrophils upon antibody treatment (related to Figure 1
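For readers without access to GraphPad Prism, the group comparisons reported above (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) can be approximated with open-source tools as sketched below. The neutrophil counts shown are made-up placeholders, and this is an illustrative equivalent rather than the authors' exact analysis.

```python
# Illustrative one-way ANOVA plus Bonferroni-corrected pairwise t-tests.
# Counts are placeholders, not data from the study.

from itertools import combinations
from scipy import stats

groups = {
    "PBS": [812, 790, 845, 880],
    "rat-Ly-6G 100ug": [420, 510, 465, 498],
    "mouse-Ly-6G 100ug": [12, 8, 15, 5],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_anova:.3g}")

pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_bonf = min(p * len(pairs), 1.0)   # Bonferroni correction over all pairs
    print(f"{a} vs {b}: corrected p={p_bonf:.3g}")
```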
2022-10-30T15:18:41.063Z
2022-10-27T00:00:00.000
{ "year": 2022, "sha1": "19469103336ee82e69700ff2cc29de7b439fca7e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4409/11/21/3406/pdf?version=1666884890", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "175638896eece2722bd1aee59597e4b10974e174", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
234067146
pes2o/s2orc
v3-fos-license
Governmental responses to COVID-19 Pandemic In response to the challenges imposed by the COVID-19 pandemic, governments worldwide adopted a variety of strategies that include not just preventive or mitigation strategies adopted to "flatten the curve", but also interventions aiming to mitigate economic and social impacts of the pandemic. RAP's special issue gathered 17 reflexive, timely and relevant contributions on different governmental approaches to the COVID-19 pandemic. In this paper we highlight similarities and differences in governmental responses across countries and regions. We uncover and discuss broad themes covered in the symposium, focusing on: (a) impacts of social distancing strategies; (b) economic-relief responses; (c) the role of bargaining, collaboration and coordination across levels of governance; (d) key actors and their role in the pandemic response; (e) pandemic and socio-economic inequalities; and (f) context, policy responses and effectiveness. The symposium adds to an extensive body of knowledge that has been produced on the topic of policy responses to the COVID-19 pandemic, offering more diverse contextual and comparative analysis. INTRODUCTION This RAP symposium about Governmental Responses to the COVID-19 pandemic emerged out of the need to learn from our own and others' experiences. One year ago, not one country leader could have imagined the enormous fiscal, political, and administrative challenges this pandemic would bring. In response to these challenges, governments reacted with a variety of responses in terms of degree of innovativeness, flexibility, and top-down or bottom-up approaches, which include not just preventive or mitigation strategies adopted to "flatten the curve" (Baniamin, Rahman & Hasan, 2020), but also interventions aiming to mitigate economic and social impacts of the pandemic. This special issue aimed to gather reflexive, timely and relevant contributions on different governmental approaches to the COVID-19 pandemic by highlighting differences in responses across countries and regions. To gather comparative approaches and measures against COVID-19, the call for short papers suggested addressing the following issues. Why do some countries handle the virus outbreak more effectively? What are cross-national, cross-regional, and within-regional comparisons? How have specific policies (e.g. massive versus focused testing) failed or succeeded in different contexts? How well did political and administrative leaders handle the COVID-19 outbreak? What has been the role of administrative and crisis-management capacities in handling the pandemic? What have been the consequences of the pandemic for vulnerable communities? What have been the impacts of the pandemic on education, public security or economic institutions? What has been the role of specific public institutions in handling the emergency? What have been good local or community level practices to mitigate the spread of COVID-19? What coordination and cooperation mechanisms were used between international, national and subnational governments in responding to and handling the health crisis? Our call for proposals went out in March 2020 with a submission deadline of June 1, 2020. The timeliness of this special issue is crucial to highlight because the selected contributions present governmental responses in the first three months of the health crisis. Some contributors updated the responses during the reviewing process that ended in October 2020.
Therefore, we caution the readers that these responses might have been intensified, modified or discontinued during the second half of 2020. The editors of this special issue express appreciation to all the anonymous 95 reviewers, as well as to all authors who responded to our call. We received 140 submissions, covering 26 countries. Space limitations forced us to select only 17 short papers, which cover not only specific countries -Australia (Wallace & Dollery, 2021), Brazil (Barberia, Cantarelli, Oliveira, Moreira & Rosa, 2021) Brazil and USA (Casarões & Magalhães, 2021), Colombia (Bello-Gomez & Sanabria-Pulido, 2021), China (Santos, 2021) Estonia (Raudla, 2021), El Salvador (Durán, 2021), Ghana (Antwi-Boasiako, Abbey, Ogbey & Ofori, 2021), Italy and Switzerland (Cepiku, Giordano & Meneguzzo, 2021), Mexico (Sánchez-Cruz, Masinire & López, 2021and Renteria & Arellano-Gault, 2021), Netherlands (Sullivan & Wolf, 2021), UK (Resende, Paschoalotto, Peckham, C. Passador & J. Passador, 2021) -but also focus on regions such as Africa (Sotola, Pillay & Gebreselassie, 2021), Latin America (Passos & Acácio, 2021) and the BRICS countries (Puppim de Oliveira, Barabashev, Tapscott, Thompson & Qian, 2021) or larger comparative analysis (Cunha, Domingos, Rocha & Torres, 2021). The more diverse contextual approach of this special issue needs to be integrated with the in-depth approach of the first special issue about COVID-19, published by RAP, about "The response of the Brazilian public administration to the challenges of the pandemic" (Peci, 2020). This symposium contribution adds to an extensive body of knowledge that has been produced on the topic of policy responses to COVID-19 pandemic offering more diverse contextual and comparative analysis. Powell and King-Hill (2020) find more than 400 articles published in Web of Science on July 13, using the terms "COVID and lessons. " Journals, such as Public Administration Review, Policy and Society, Journal of Comparative Policy Analysis and International Review of Administrative Sciences, among others, already have published symposiums on the topic. As we gather real-time knowledge about governmental and policy responses to the pandemic, the comprehension of such an unprecedented crisis becomes challenging. Sanitary, economic and social governmental responses interact with contextual socio-demographic factors in shaping the effectiveness of policies for controlling the pandemic (Banjamin, Rahman & Hasan, 2020). In addition, apparent initial success stories in beating the pandemic are not always sustainable in long term. Complex interactions, whose precise causes and effects are difficult to identify and continuously evolving, make unanticipated consequences of policy actions very likely (Agranoff, 2003) and demand in-depth research. The nature of short papers being published in symposiums limits the reach of the analysis and, specifically, policy drawing lessons (Powell & King-Hill, 2020), yet they still contribute to a piecemeal approach to our systemic comprehension of the pandemic policy responses and their consequences. Most of the papers explicitly or implicitly define the pandemic as a wicked problem that crosses multiple policy domains, levels of government and jurisdictions and demands action from different stakeholders (see Cepiku et al., 2021). 
Although the 17 pieces are quite diverse in content, some broad themes were identified: (a) impacts of social distancing strategies; (b) economic-relief responses; c) the role of bargaining, collaboration and coordination across levels of governance; (d) key actors and their role in the pandemic response; (e) pandemic and socio-economic inequalities; and (f) context, policy responses and effectiveness. The articles included fall short of addressing all possible governmental policies or contextual responses. For instance, they address neither technological aspects, psychological effects, scientific innovation, nor global trade, among other factors. Future symposiums should address these issues. The advent of the badly needed vaccine also will trigger further research about countries' access to the vaccine and administrative logistics to carry out campaign vaccinations. Future research also will tell us whether the World Health Organization (WHO), along with the People's Vaccine Alliance, will accomplish their goal of distributing COVID-19 shots to poorer countries (Kelland & Nebehay, 2020), or whether rich countries will hoard COVID-19 vaccines (BBC, 2020). However, many of the papers published here will contribute to a better understanding of these vaccine-related dynamics since they anticipate a role for political leadership (see, for example, Casarões & Magalhães, 2021) or for the way capacities, competencies or responsibilities are distributed among different levels of government and jurisdictions (see, for example, Santos, 2021 or Bello-Gomez & Sanabria-Pulido, 2021). SOCIAL DISTANCING POLICIES Social distancing strategies were the primary non-pharmaceutical sanitary strategies adopted by several government worldwide to control the spread of COVID-19. The effects of such strategies are discussed in two papers from this symposium, contributing to important findings considering the second wave of the pandemic and new curfews adopted in several contexts (e.g UK). Cunha et al. (2021) focus on the effect of social distancing policies on the new coronavirus dissemination, specifically on the number of confirmed cases of COVID-19 and on contagion velocity. Based on the analysis of dataset with daily information on 78 affected countries, they demonstrate that social distancing policies reduced the aggregated number of contaminated people by 4,832 on average (or 17.5/100,000), but only when strict measures are adopted. The role of a more complete and coherent set of social distancing policies to tackle the pandemic is also one of the main findings of Barberia et al. (2021). Their study builds on the methodology proposed by the Oxford COVID-19 Government Response Tracker to evaluate the effect of social distancing policy on the Brazilian population`s mobility. They find that anti-contagion policies had a significant effect on producing higher adherence to remain at home, specifically when a more complete and coherent set of policies was introduced and sustained by state government. ECONOMIC-RELIEF RESPONSES TO COVID CRISIS Despite early adherence to social distance policies worldwide, the multiple impacts of the pandemic have triggered other policy responses, particularly to face the economic hardship imposed by the crisis. 
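As a purely illustrative aside on the bookkeeping behind such estimates, the sketch below converts daily case counts to rates per 100,000 inhabitants and contrasts periods with and without strict measures. The column names and grouping are assumptions, and this is not the estimator used by Cunha et al. (2021) or Barberia et al. (2021).

```python
# Illustrative-only panel bookkeeping: rates per 100,000 and a crude
# before/after comparison around strict distancing measures.

import pandas as pd

def cases_per_100k(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["cases_per_100k"] = df["new_cases"] / df["population"] * 100_000
    return df

def before_after_strict(df: pd.DataFrame) -> pd.Series:
    """Mean daily cases per 100k by country, split by whether strict measures were in force."""
    df = cases_per_100k(df)
    return df.groupby(["country", "strict_measures"])["cases_per_100k"].mean()
```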
Our cases illustrate how existing state capacities, the role of political leadership, the difficulties of including informal sectors that characterize a good part of the developing countries, the pressure of organized political groups among others influenced poor policy designs or challenged the implementation of such policies. Durán (2021) introduces El Salvador's economic measure to face the COVID-19 pandemic. In addition to other initiatives, Salvadorian President Nayib Bukele offered a one-time $300 monetary transfer to needy families that is expected to reach 75% of households and cost about $450 million. However, this economic measure lacked clear selection criteria, control and accountability mechanisms. To assess the program's effectiveness in targeting the needy, Durán (2021) conducted a survey of 1,222 recipients and non-recipients and ran a quantitative analysis to identify the demographic and socio-economic drivers of program recipients. His results show family income and education are negatively correlated with the probability of getting the monetary transfer. Antwi-Boasiako et al. (2021) explain Ghana's government responses to COVID-19 by focusing on three areas: health, the economy and social impact. Ghana's economic measures embrace two programs: the Coronavirus Alleviation Programme (CAP) and the Alleviation Program Business Support Scheme (CAPBuSS). They focused on supporting small businesses, protecting livelihoods and reducing job losses, and limiting the impact on economic life. However, Antwi-Boasiako et al. (2021) note that these two programs left out informal sectors of the economy whose micro-businesses count for about 85% of Ghana's economy. Whether the $1 billion Ghana Cedis (about USD $174 million) invested in these programs will benefit the intended beneficiaries and micro-businesses is yet to be seen, although Antwi-Boasiako et al. (2021) highlight some progress despite challenges, such as delayed delivery of monetary benefits. The Dutch case (Sullivan & Wolf, 2021) provides a political economy perspective to explain the distribution of state aid across sectors in the Netherlands, a traditionally corporatist country. While KLM, the biggest player in the Dutch aviation sector, was granted massive loans (about €3.4 billion), the hospitality sector received much less. Sullivan and Wolf (2021) explain this unequal aid distribution in terms of politicians' interests and interest groups' power, rather than purely economic reasons. Their findings highlight the need to consider political interests even during crises. Renteria and Arellano-Gault (2021) present the Mexican case by underlining the economic, political and administrative effects of President Lopez Obrador's downsizing populism, which undermines and negatively portrays public servants and public organizations. This negative image serves to justify the fiscal austerity adopted by Obrador, which has reduced the size and salaries of bureaucrats, as well as reduced the infrastructure and number of public agencies. As a consequence, these reforms weakened public institutions and their capacity to handle the crisis. Finally, Wallace and Dollery's (2021) Australian case describes the fiscal impact of New South Wales local governments' closure of 372 public libraries as a measure to stop the virus spread. 
Although the temporary closing of public libraries has reduced local expenditures, Wallace and Dollery (2021) note the savings are insignificant compared to the likely losses in tax collection due to unemployed homeowners and struggling businesses. MULTI-LEVEL GOVERNANCE UNDER COVID CRISIS Other studies of this special issue focus on the role of bargaining, coordination and cooperation among different levels of government, jurisdictions or stakeholders (public, nonprofit or private) to tackle the pandemic. Santos (2021) rely on a multi-level governance theory as an analytical framework to analyze the Chinese government's actions to tackle the pandemic. The multi-level regime involved actors with asymmetric power and from different levels of government (e.g., city of Wuhan, provinces, and central government entities), private sector, civil society organizations and international organizations. Through content analysis reports by the Chinese government, the World Health Organization and media information, Santos (2021) highlight the leadership role of President Xi Jinping and the premier in coordinating the diverse stakeholders, governmental levels and sectors to contain the COVID-19. The Colombian case highlighted the need for bargaining, collaboration and coordination across levels of government. At the beginning of the health crisis, hurdles to collaboration and coordination were presented in Colombia due to its unitary but decentralized system that granted greater competencies to municipalities. Tensions between national and subnational responses were evident. However, as time passed, the strong chief executive and presidential system uphold power to make decisions in handling the public health emergency. Further intergovernmental clashes of power ceased, as subnational government started complying with the national guidelines. Puppim de Oliveira et al. (2021) also stress the role of intergovernmental relations in responding to the pandemic, defined as a wicked problem. To understand BRICS countries' variation in capacity to respond and manage a crisis, Puppim de Oliveira et al. (2021) focus on the nature of intergovernmental relations. To explain intergovernmental relations, three dimensions are examined: (a) the political (liberal democracy vs. authoritarian and state system (federal vs. unitary and levels of government) system, (b) reliance on formal or/and informal institutions, and (c) political alignment between levels of government. Their qualitative analysis concludes that state and political systems are influential in timing response to the crisis. Reliance on formal vs. informal institutions matters for implementation. In settings where coordination among layers of government is left to informal agreements (e.g., Brazil), the entities are more likely to show ineffective and inefficient results when confronted with wicked problems. Resende et al. (2021) focus on the coordination and cooperation necessary for Primary Health Care measures adopted by the British government in fighting COVID-19 and Cepiku et al. (2021) also attribute the differences between Italian and Swiss cases to a sound model of governance and interinstitutional cooperation, as well as the development of public-private partnerships of sophisticated health care systems. The authors add to these factors the important role played by citizens' and patients' levels of trust toward the hospitals and health system. 
The effective implementation of a collaborative approach among hospital, territorial medicine and communities in preventing and fighting the pandemic requires that both sides, territorial medicine and communities need to be well developed and equipped with necessary resources. Many of the papers of this symposium focus on such a theme. The Colombian case also illustrated the key role of Mayor of Bogotá, Claudia López, in questioning President Duque's decisions and the scope of adopted actions in handling the health crisis, as well as the scope of these actions. Casarões and Magalhães (2021) underscored the roles of Presidents Trump and Bolsonaro, far-right leaders, in mobilizing "medical populism and alt-science" in their attempts to promote hydroxychloroquine as a COVID-19 treatment, despite controversial results. We believe their contribution is key to our understanding of current development related to COVID-19, such leadership`s narratives and their role in vaccination campaigns. Renteria and Arellano-Gault (2021) also add to our understanding of populist leadership in time of COVID-19, in comparing the Mexican populist federal government and the non-populist Jalisco state and highlighting how populist beliefs drive the bureaucratic actions taken by a populist government to handle the pandemic. Besides identifying the key actors involved in the Chinese response to COVID-19, Santos (2021) also explore the roles played by those actors. They highlight the mediating role of President Xi Jinping and coordinating role of the premier, while the other actors work on 'brokering' and 'levering. " The strategy was to involve private actors horizontally, while the different levels of government were involved vertically. Sullivan and Wolff 's (2021) Dutch case perfectly illustrates the role of political actors in assigning governmental aid during COVID-19. Through process-tracing and content of hundreds of national media articles, the case seeks to understand the influence of elected leaders, interest groups and experts. Sullivan and Wolff (2021) report that the political ambition, vote-seeking behavior of the elected leaders of the current coalition, as well as the strength of powerful interest groups, (KLM) influenced distribution of financial aid across the aviation and hospitality sectors. Beyond the experiences of Colombia, Brazil, U.S., the Netherlands and China, it is also important to recognize the key roles that "crisis teams" have played in each of the covered countries. Their names vary from Presidential Task Force (Nigeria), National Command Council (South Africa), and National Public Health Emergency Operation Center (Ethiopia), just to mention a few (Sotola et al., 2021). Last, but not least, considering the military role in the Latin American region, Passos and Acácio (2021) brought our attention to the role that armed forces has played in enforcing government measures, such as social distancing and lockdowns across 14 Latin American countries. Based on armed forces' tasks and involvement, Passos and Acácio (2021) attributed a score to each country, with Uruguay exhibiting the lowest score (8) and Honduras the highest (13). PANDEMIC AND SOCIO-ECONOMIC INEQUALITIES A key lesson that emerged from the pandemic has been the disproportionate effects across countries, regions, cities, races, socio-economic groups and levels of government. 
To illustrate, Bello-Gomez and Sanabria-Pulido (2021) highlight Colombia's challenges of uneven healthcare capacity across the national and subnational governments. This unequal capacity is mirrored through the lack of or deficiency of intensive care unit capacity in regions like the Pacific coast and the Amazon and Orinoco basin (states). Moreover, at that time, Amazonas registered the higher number of COVID-19 deaths per capita compounded by the influx of cases from Brazil. Sotola et al. (2021) also underscore the inequality in testing capacity, number of hospital beds, and the number and quality of health system institutions across African countries. Moreover, when compared to other continents, the African region scores low in its diagnostic tools (Nkengasong, 2020). Passos and Acácio's (2021) analysis of 14 countries in the Latin American regions also illustrates how the involvement of armed forces in securing borders, policing stay-at-home orders, managing crisis, logistics, and medical care can lead to human rights violations toward the most vulnerable groups. The Mexican case also illustrates how the existing structural disadvantages that have limited indigenous communities' access to academic training have been exacerbated by the pandemic. Due to the shutdown of in-person education, Sánchez-Cruz et al. (2021) examine measures taken by the Mexican government to provide online education, in general, and to indigenous groups, in particular. Sánchez-Cruz et al. (2021) reviewed official websites from UNESCO, the Mexican Ministry of Education, and three states -Oaxaca, Yucatán and Chiapas -with the major number of indigenous people, 32%, 28.9% and 27.9%, respectively. Their findings suggest that TV programs and school booklets in indigenous languages mainly have focused on kindergarten and elementary education with very little material for secondary schools. Therefore, despite the measures taken by the national and state governments, these efforts are still limited and biased in favor of monolingual students. Despite revealing the exacerbation of racial, geographic, and socio-economic inequalities, Durán's (2021) quantitative study sheds some optimism when investigating the characteristics of those benefited by a cash-transfer program in the Salvadorian context. Although the program lacked control and accountability mechanisms, family income and education seemed to have negatively driven identification of program recipients. His results show correlation with the probability of getting monetary transfers, indicating the important role for the emergency cash-transfer program for vulnerable communities. CONTEXT, POLICY RESPONSES AND EFFECTIVENESS Recently, public administration scholars have paid more careful attention to the contextual factors and institutional differences which may influence government performance and choice of appropriate policy tools and public management practices (Bertelli, Hassan, Honig, Rogger & Williams, 2020;Meier, Rutherford & Avellaneda, 2017;Milward et al., 2016;Peci & Fornazin, 2017;Suzuki & Hur, 2020). Rather than promoting one universally applicable public management approach or assuming that "all states are alike" (Milward et al., 2016, p. 312), scholars now have more carefully examined what contextual factors lead to specific public sector performance and outcomes. As we have seen so far, governments' measures and responses to COVID-19 are significantly different across countries. 
Some of the contributions in this special issue highlight unique contextual factors which help to explain variations in government responses and in the effectiveness of COVID-19 measures. Raudla (2021) contributes the Estonian case, praising Estonia's effectiveness in curbing the spread of the coronavirus and attributing the success of its crisis management to contextual conditions such as political factors, quick policy learning, cooperation with the scientific community, and the existing ICT and e-government infrastructure. The country's size favors all these mechanisms. Sotola et al.'s (2021) analysis of African countries also outlined some of the contextual factors benefiting governmental responses to the COVID-19 crisis. Community resilience and support, timely adoption of measures, and experience in facing other health crises (e.g., HIV, tuberculosis, cholera, polio and Ebola) are among the factors favoring management of the pandemic. For instance, in Ghana, Rwanda and South Africa, citizens' engagement and support for governmental measures are evident through social media. Nigeria, Uganda, and Lesotho imposed strong measures despite having no or few cases (Sotola et al., 2021). The existence of Centres for Disease Control in African countries helped governments respond promptly to the pandemic, owing to the robustness of this existing health infrastructure.

CONCLUSION

The COVID-19 pandemic has brought unprecedented challenges to elected officials, public servants, healthcare workers and citizens. The problems that governments have been facing cover a wide range of areas. In this special issue, we present 17 contributions which focus on the impacts of social distancing strategies, economic-relief responses, the role of bargaining, collaboration and coordination across levels of governance, key actors and their roles in the pandemic response, pandemic-related socio-economic inequalities, and context, policy responses and effectiveness. Our sincere hope is that the contributions presented in this special issue will be used as evidence to further stimulate academic and practical discussion, in order to build a more comprehensive understanding of government responses to the pandemic and of the key factors behind the successes and failures of government approaches and strategies. We also hope that the contributions in this special issue shed light on some of the neglected regions and topics in pandemic management and contribute to future government strategies and academic discussions.
Proteome Serological Determination of Tumor-Associated Antigens in Melanoma Proteome serology may complement expression library-based approaches as strategy utilizing the patients' immune responses for the identification pathogenesis factors and potential targets for therapy and markers for diagnosis. Melanoma is a relatively immunogenic tumor and antigens recognized by melanoma-specific T cells have been extensively studied. The specificities of antibody responses to this malignancy have been analyzed to some extent by molecular genetic but not proteomics approaches. We screened sera of 94 melanoma patients for anti-melanoma reactivity and detected seropositivity in two-thirds of the patients with 2–6 antigens per case detected by 1D and an average of 2.3 per case by 2D Western blot analysis. For identification, antigen spots in Western blots were aligned with proteins in 2-DE and analyzed by mass spectrometry. 18 antigens were identified, 17 of which for the first time for melanoma. One of these antigens, galectin-3, has been related to various oncogenic processes including metastasis formation and invasiveness. Similarly, enolase has been found deregulated in different cancers. With at least 2 of 18 identified proteins implicated in oncogenic processes, the work confirms the potential of proteome-based antigen discovery to identify pathologically relevant proteins. Introduction The antigenicity of tumor cells as defined by antibodies in the sera of cancer patients may provide new insights into the interrelationship of tumor and immune system, and into the molecular pathology of the tumor cells. It, thus, may guide the development of new immune intervention strategies for therapy and lead to new markers for diagnosis and disease monitoring [1][2][3][4][5][6][7][8]. The most often employed approach to the identification of serologically defined antigens so far has been the serological identification of recombinant expression cloning (SEREX) approach [9][10][11][12]. Hereby, cDNA libraries of tumor tissue, tumor cell lines or human testis are cloned into l phage expression systems and screened with sera of cancer patients. SEREX has been employed to identify serological antigens for melanoma and the SEREX database (www2.licr.org/CancerImmunomeDB) lists 102 melanoma-associated antigens [9,12,13]. These antigens can be categorized as differentiation antigens such as tyrosinase, cancertestis antigens such as MAGE-1 or NY-ESO, overexpressed gene products, mutated gene products such as cdk4 or p53 mutants, and cancer-related autoantigens such as CEBP-c. In addition to mutation, over-or ectopic expression, antigenicity of a protein expressed by tumor cells may be determined by posttranslational modifications or altered accessibility. Proteome-based approaches may complement SEREX approaches in these aspects and, by displaying and screening with patient sera the proteomes of tumors in single 2-dimensional electrophoresis gels, provide information on the scope of tumor cell antigenicity. This technology has been applied to renal cell carcinoma, ovarian cancer, pancreatic adenocarcinoma, prostate cancer, gastric cancer, hepatocellular carcinoma, colon cancer and lung squamous carcinoma [4,[14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29]. 
Using technologies developed for proteomics research, the antigens are identified by matching antigen spots in Western blots with protein spots in replica silverstained gels, excising these protein spots and identifying the antigens by peptide mass fingerprint (PMF), peptide fragment fingerprint (PFF) or de novo peptide sequencing of tryptic fragments of the proteins [30]. This approach has variously been named proteome serology [31], immunoproteomics [16], serological proteome analysis (SERPA) and other [21]. We employed this combination of Western blot analysis of serological specificities and mass spectrometry-based protein identification to determine the range of the immunogenicity of melanoma and the specificities of the corresponding antibody responses in the patients. Seroreactivities for melanoma-associated antigens To determine the frequencies and specificity patterns of antibody responses to melanoma-associated antigens, we tested by Western blot analyses the reactivities of sera of 94 patients with melanoma (see Table S1 for clinical details of the cases) against protein extracts from the melanoma cell line M-NRT separated by 1-dimensional SDS-PAGE ( Figure 1A-C). As controls, the sera of 9 healthy individuals ( Figure 1D), and of patients with cutaneous lymphoma, pancreas carcinoma or visceral leishmaniasis, an infectious disease that is known to induce responses to autoanti- The numbers atop of each lane represent the number of the sera and are used throughout this report. For comparison, melanoma serum 7 was included in all blots. The Western blot analyses were done with the sera at a dilution of 1/6 in a multiple channel blotting/Western blot developing chamber for panels A-E. The letters underneath the lanes indicated the combinations of sera for the multiple probing of the 2D Western blots shown in Figure 2 and summarized in Table 1. Panel F: High-gens of the infected host, were tested ( Figure 1E). Overall, the signals were relatively weak given the high serum concentrations of 1:6 used for these Western blots. A large number of faint background bands were detected with the sera of the healthy controls and patients alike as illustrated with Figure 1F which shows the Western blots with serum of a healthy donor (Serum 103) and the sera of two melanoma patients (Serum numbers 7 and 18). The Western blots shown in Figure 1F were developed as individual strips in separate plastic bags at a dilution of 1 in 200 which results in a better definition of the bands but is not suited for comparative screening of large numbers of sera. The arrows to the left of the Western blot lanes indicate shared signals found with patient and control sera alike. The arrows to the right of the lanes are stronger and indicate unique bands seen only with sera of melanoma patients. Serum 18 was from the same patient from whose tumor the melanoma cell line M-NRT had been established. This Western blot, thus, documents the reactivity in the autologous combination of tumor cells and serum. All other sera were from different patients or healthy donors, thus displaying the reactivity in heterogeneous combination of tumor cells and serum. The antigens thus detected are, therefore, expected to be antigens shared between the tumors of the serum donors and the test tumor cells M-NRT. With nearly two-thirds of the sera of the melanoma patients prominent bands are detectable that are not found with the healthy control sera or that are much stronger than those in the controls ( Figure 1). 
The numbers of such prominent antigens detected with the reactive sera ranged between 2 and 6 per patient serum. Their masses were between 21 and 90 kDa with the bulk of the stronger bands between 40 and 80 kDa. The patients whose sera were tested represent, with the exception of ocular melanoma, all forms of melanoma. The majority of the patients were at stage 3 or 4 of disease but some were at earlier stages (Table S1). The course of disease ranged from relatively slow progression to aggressive disease. Within this range of patients there is no correlation of the numbers of antigens targeted, and frequencies or pattern of seropositivity with the clinical conditions. A few stronger reactivities were seen with the sera of healthy controls and patients with lymphoma, pancreas carcinoma or Leishmaniasis. These antigens were different than those detected with the sera of melanoma patients. In summary, about two-thirds of the patients had developed prominently detectable antibody responses against melanoma-associated antigens. The serospecificity patterns, however, were heterogeneous with no antigen that induced responses in a majority of the patients. Identification of melanoma-associated antigens that raise antibody responses Direct identification of antigens from 1-dimensional SDS-PAGE gels is not possible because of the complexity of the protein mixtures at every position in the lanes and difficulties in exactly aligning Western blots with silver-stained gels. We, therefore, separated the proteins of the melanoma cell line M-NRT by 2-dimensional electrophoresis with a pH gradient of 3 to 10 in the isoelectric focusing gels. Nine replica gels were prepared, eight used for Western blot analyses (Figure 2, panels A-H) and one for silver staining (Figure 2). Each of the Western blot filter A through G were probed successively with the sera of 8 melanoma patients, 56 sera in all. These are all available sera with which prominent bands were detected by 1DE. Blot H was tested with the sera of 9 healthy controls. The use of multiple sera per Western blot is required to expose several antigen spots per blot which is essential for spot pattern recognition and alignment of silverstained proteins with Western blot spots. The serum combinations for multiple probing were selected to allow good pattern definition for assigning Western blot spots with protein spots in the silverstained gel, and are listed in Table 1 where the numbering of the sera corresponds to the numbering in Figure 1. The letters underneath the blot lanes in Figure 1 refer to the 2D Western blots in Figure 2. The antigen signals were weak but clearly detectable in the 2dimensional Western blots ( Figure 2). The blots display 26, 19, 21, 20, 24, 3 and 15 antigen spots in panels A through G, respectively, altogether 128 antigen spots. With the sera of the healthy controls, 4 antigen spots were detected ( Figure 2H). The antigens concur in numbers and mass distribution with the antigenicity patterns shown in Figure 1. Fifty-eight of the antigens could be assigned to protein spots in the silver-stained gels, 17,9,4,12,9,1,4 and 2 for the blots in Figure 2A through H, respectively. The antigenprotein assignments were done by first aligning gel and Western blots by their geometry as defined with artificial marker proteins spotted on cardinal points of the gels and marker protein spots detected in the blots after Ponceau S staining. 
A number of prominent marker spots were mapped for confirmation by partial blotting and matching the corresponding Western blots and silverstained gels. Then, the spot patterns in the local environments of the antigens were compared taking spot sizes and shapes into consideration. The assigned and identified spots are indicated with arrows and numbered in the Western blots and the corresponding silver-stained gel shown in Figures 2 and Figure 3. The protein spots corresponding to the assigned antigens were excised from the gel, destained and subjected to trypsin digest. The resulting tryptic fragments were analyzed by peptide mass fingerprint by MALDI-TOF-MS with MASCOT and PROWL analyses of the peak lists. Forty-six of the 58 assigned antigens could, thereby, be identified. Spots number 10, 12, 13 and 24 were found in 2 Western blots, spots 1 and 5 in 3, spot 2 in 4, spot 3 in 5 and spots 4 and 9 in 6. All other assigned antigens were found only once. The 46 antigen spots, thus, represent 18 different antigens. As examples for the mass-spectrometric identification of the antigens, the fingerprint mass spectra for spot number 9 identified as galectin-3, spot number 1 identified as the gelsolin-like actin filament-capping protein MCP, spot number 4 identified as heat shock protein 60 (HSP 60) and spot number 28 identified as the elongation factor EF-Tu are shown in Figure 4A, B, C and D, respectively. The spectra for the other antigens are provided as supplementary materials (Figures S1, S2, S3, S4, S5, S6, S7, S8, S9, S10, S11, S12, S13, S14, S15, S16, S17) as indicated in Table 2 together with the gene identifier numbers, protein-chemical parameter and the results of the mass-spectrometric identification for all antigens. In the cases where masses remained that could not be assigned to the identified proteins, a secondary peptide mass resolution 1D Western blot for comparison of the seroreactivities of melanoma patients and a healthy donor against the melanoma cell line M-NRT. In contrast to the blots shown in panels A-E, the sera were applied 1/200 to isolated lanes from SDS-PAGE in sealed plastic bags which results in a better definition of the bands when compared to blots from multiple blotting devices. Serum 18 is autologous to tumor cell line M-NRT, serum 7 is from a different patient and serum 103 from a healthy donor. The arrows to the left of the lanes indicated antigen bands that are shared between the patients and the healthy controls and occur in every blot. The arrows to the right of the lanes indicated prominent antigen that are detected only by the sera of melanoma patients. doi:10.1371/journal.pone.0005199.g001 fingerprint analysis was done with these remaining masses. In no case could a second protein be identified indicating that the analyzed spots did not contain major contaminating proteins. The identified antigens can be grouped according to their cell biological functions into different categories. 
Three of the antigens are heat shock proteins (HSP60, HSP70 and HSP70 protein 9B), 7 are enzymes of the cellular metabolism (enolase I, dienoyl-CoA reductase, aldolase A, fumarate hydratase, aldose reductase, aconitase and lactate dehydrogenase), hnRNP-1 is a nuclear protein involved in RNA processing, EF-Tu an elongation factor in protein biosynthesis, MCP affiliated with the cytoskeleton and involved in cell migration, calumenin a calcium-binding protein involved in regulation of metabolic processes, VCP participates in the regulation of protein export and the organization of the Golgi apparatus, LAP3 and PSME1 are involved in protein metabolism, and galectin-3 is a lectin with specificity for galactose. The sequence coverage of the mass-spectrometric fingerprint analysis of HnRNP is somewhat low. However, HnRNP had been repeatedly identified from different gels with better sequence coverage so that we are sure of its correct identification. For all Table 1. Serum dilutions were 1/200. The blots shown in Panels A through G were probed successively with 8 different patient sera each, blot H was probed with the sera of 9 healthy controls. The arrows indicated the antigens that could be assigned to protein spots in the silver-stained gel show with Figure 3. The numbering for the antigens is used throughout this report. doi:10.1371/journal.pone.0005199.g002 other antigens the sequence coverage is sufficiently high for unequivocal identification of the corresponding proteins. Galectin is extracellularly expressed and involved in various interactions with serum proteins, other cells and extracellular matrix, and variously implicated in cancer-related processes such as metastasis formation and invasiveness. Enolase I, although a cytosolic protein, has been reported to be exported and is found in the extracellular environment of some tumors. Of the identified antigens, galectin-3 and HSP60 most often induce antibody responses in melanoma patients followed by calumenin and The numbering refers to the Western blots shown in Figure 2. enolase I. For HSP60 and calumenin seroreactivity was also detected in healthy donors tested. Discussion About two thirds of the 94 melanoma patients tested in this study had mounted specific antibody responses against melanomaassociated antigens with an average of about 3 antigens detectable per seropositive patient. Only a few of the specific antigen bands were shared between different patients and no two patients displayed the same pattern of antigenicity. In highly resolving 2dimensional Western blots 128 antigens were detected with sera of 56 patients, i.e. an average of 2 to 3 antigens per patient. Four antigens were also found with the sera of the healthy controls which corresponds to an average of less than 0.5 antigens per serum and is comparable to other studies done for cancer and infectious diseases. Fifty-six of these antigens could be assigned to protein spots in silver-stained gels and 44 of them identified by mass spectrometry. They were found to represent 18 different antigens, two, galectin-3 and HSP60, were found in 6 of the 7 Western blots, several more in between 2 and 5 blots but the majority was detected only once. The serological immunoreactivities against the tumor cells, thus, are mostly heterogeneous. Nonetheless, the majority of the patients had mounted antibody responses against several antigens of the melanoma cell line tested and some antigens raised responses in a relatively high proportion of cases. 
For all antigens, the signals in the Western blots were weak, despite high serum concentrations, indicating that the B cell responses were weak. However, all these responses were secondary, IgG responses, thus, depending on repeated stimulation by the antigens and induction of MHC class II-restricted CD4 + helper T cells with specificity for the same antigens in order to induce immunoglobulin class switch. These antigens, therefore, are expected to harbor CD4 T cell epitopes as well as B cell epitopes. The identified antigens represent different cell-biological categories of proteins including structural proteins (MCP), metabolic enzymes (enolase 1, dienoyl-CoA reductase, aldolase A, fumarate hydratase, aldose reductase, aconitase, LDH-H), heat shock proteins (HSP60, HSP70, HSP70 protein 9B), proteins involved in protein biosynthesis and protein metabolism (HnRNP1, EF-Tu, PSMEI1, LAP3), a calcium-binding regulator protein (calumenin) and a lectin of the outer membrane (galectin-3). The prominence of metabolic enzymes among antigens targeted by serological immune responses has also been reported for patients with pancreatic, lung, renal, colon and hepatic cancers [15,20,[23][24][25]29]. As for the majority of other serologically defined tumor-associated antigens identified so far, there is no recogniz- Table 2 together with the statistics of the mass-spectrometric identification of the proteins. Asterisks indicate the tryptic fragment masses matched to the database sequences of the proteins. doi:10.1371/journal.pone.0005199.g004 able structural cause for the immunogenicity of the antigens (www2.licr.org/CancerImmunomeDB). As judged from the protein gels, they do not grossly deviate in molecular masses and isoelectric points from what is known for these proteins. Comparative proteomics studies have shown that some of the antigens may be overexpressed in tumor cells compared to their normal counterparts [32,33]. Such overexpression may promote immunogenicity of these proteins in cancer patients. With two exceptions, the antigens are intracellular proteins and not directly accessible to the immune system. The one exception is galectin-3. It is very interesting that seroreactivities for this protein were found in 6 of the 7 Western blots suggesting that it is a relatively dominant antigen and, maybe suited for targeted therapy. The second, enolase, has been reported to be, under not yet understood circumstances, exported from tumor cells. For the other antigens it is likely that their immunogenicity is related to destruction of tumor cells and their aberrant exposure to the immune system. Only one of the antigens reported herein, galectin-3, had already been reported for melanoma [12]. Other antigens such as EF-Tu, HSP70, HSP60, aldolase, fumarate hydratase, aldose reductase, aconitase, HnRNP1, EF-TU, VCP and enolase have been reported for other cancers including cutaneous T cell lymphoma, and renal, hepatic, lung and pancreatic cancers but not for melanoma [15,20,[23][24][25]29,34,35]. The remaining antigens appear to be new tumor-associated antigens. So far all serologically defined antigens known for melanoma have been identified by SEREX [9,12], and the SEREX database lists 102 melanoma-associated antigens (www2.licr.org/CancerImmuno-meDB). 
Besides a number of unknown function and superfamily affiliations, and some cancer/testis antigen [1,3,5,10], the vast majority of the SEREX-defined melanoma-associated antigens are of the same functional categories as those reported herein: chaperones, metabolic enzymes, proteins involved in protein biosynthesis and catabolism, structural proteins and regulators of the cellular metabolism. Galectin-3, found by SEREX and in the present study by proteome serology, is of great interest as it is one of the very few serologically defined tumor-associated antigens that are present at the outer cell surface. It has been described as deregulated in different cancers and as involved in cancer-related processes such as cell plasticity and vasculogenesis [36][37][38][39][40][41][42][43]. Also enolase is exported and has been implicated in cancer [44]. It had been identified as tumor-associated antigen in renal cell carcinoma, lung squamous carcinoma, leukemia and pancreatic ductal adenocarcinoma [23,24,29,45] (Novelli et al. WO/2008/037792). These two proteins may be targets for therapeutic intervention. Proteome serology has been employed for antigen discovery for renal cell carcinoma [24], breast cancer [21], colon carcinoma [15,21], prostate cancer [26], pancreatic adenocarcinoma [20], ovarian cancer [19], hepatocellular carcinoma [25], leukemia [45] and lung squamous carcinoma [23,29]. While many of the specific antigens reported for these cancers differ from those found for melanoma, they do represent the same classes of proteins as discussed above. On the other hand, there are antigens such as galectin-3 that were found by proteome serology so far only for melanoma, and others only for other cancers. Some of these restricted antigens are ubiquitously expressed so that the basis for their restricted antigenicity remains unclear. Very interesting is the limited overlap of antigens discovered by SEREX and proteome serology. This can not be explained by differences in the sensitivity of these technologies as the more sensitive technology should include also the antigens discovered by the less sensitive approach. A possible explanation might be that these two technologies focus on different sets of antigens: SEREX readily identifies mutated antigens and antigens arising from splice variations but not posttranslational modifications. Proteome serology, on the other hand, can detect proteins whose antigenicity is related to posttranslational modifications but mutations only when de novo sequencing approaches are used instead of conventional protein identification by peptide mass fingerprint [46,47]. The two technologies thus are complementary. Tumor cells, cell lines and sera The tumor cells were isolated from melanoma metastases by treating minced tissue with collagenase free of contaminating protease activities and DNAse, passaging the resulting suspension through a cell sieve to remove connective tissue, allowing the tumor cells to adhere to tissue culture plastic plates and wash off non-adherent cells. The tumor cells were harvested by scraping them off the surface for direct use. Melanoma cell lines were established from such primary cells by culturing in DMEM (Gibco, Heidelberg) with 10% FCS (Biochrome, Berlin) and pencillin/streptomycine (Gibco, Heidelberg) at 37uC with 8% CO 2 . The cells were processed for electrophoretic analysis immediately after harvest from the tumor nodules or from cell cultures. 
The sera were collected from 94 patients with melanoma at different stages of disease and 9 healthy donors of the same average age. The study with human subjects had been reviewed and approved by the institutional ethics committee of the Charité -Universitä tsmedizin Berlin (Si. 277, September 11, 2003). The materials were obtained and used with written informed consent of the donors. Protein sample preparation The cells were collected by centrifugation at 1,6006g for 10 minutes at RT, washed three-times with PBS and solubilized by a protocol adapted from Görg et al. [48] and Chan et al. [49] with lysis buffer (7 M urea, 2 M thiourea, 2.5% Triton X-100, 2% b-mercaptoethanol, 0.8% Pharmalyte pH 3.5-10 (LKB, Freiburg, Germany), 200 mM PefablockH (Merck, Darmstadt), 1 mM pepstatin (SIGMA, Munich, Germany) and 10 mM leupeptin (SIGMA, Munich, Germany) by vortexing and sonication for 10 minutes in an ice cooled water bath. The cell extracts were incubated for one hour at room temperature (RT) with 4,000 U/ ml benzonase (Merck, Darmstadt, Germany) to degrade nucleic acids and then centrifuged at 350,0006g at 15uC for 15 minutes. The supernatants were collected, incubated once again with Benzonase for 10 minutes at RT and cleared by ultracentrifugation as before. Two-dimensional gel electrophoresis (2DE) Isoelectric focusing (IEF) was run in immobilized pH-gradient gel strips (IPG strips 180 mm63 mm; Pharmacia, Freiburg) with a pH range of 3 to 10 [48,49]. About 100 mg protein, extract of 1610 6 cells in 350 ml solubilization buffer were loaded into an IPG strip by in-gel re-swelling overnight at RT under silicon oil in a nitrogen-and water-saturated atmosphere to prevent oxidation of the protein and drying of the gel strips. The loaded strips were rinsed, mounted on a cooled ceramic plate and connected to the electrodes via water-wetted paper bridges. The IEF was run at 20uC under silicon oil in a nitrogen-and water-saturated atmosphere with 0.15 mA per IPG strip and 50 V for 18 h, 150 V for 1 h, 300 V for 2 h, 600 V for 1 h, 3,500 V for 6.5 h and 5,000 V for 3 h, a total of 40,000 Vh for pH 3-10 IPG strips. After the run, the IPG-strips were stored at 220uC. For the second dimension, IPG strips were thawed, rinsed with de-ionized water and equilibrated to SDS-PAGE conditions for 15 minutes in 6 M urea, 30% glycerol, 2% sodium dodecylsulfate (SDS), Tris pH 6.8 and bromophenol blue, 1% dithiothreitol (DTT) followed by 15 minutes in the same buffer but with 4% iodoacetamide instead of DTT. The equilibrated IPG-strips were rinsed with de-ionized water and placed gel-side to gel-side onto the 4.8% acrylamide, 0.13% bisacrylamide stacking gel of a horizontal SDS polyacrylamide gel with a 12.3% acrylamide, 0.34% bisacrylamide separation gel. The settings for the runs were 1,000 V, 40 W and 20 mA for 2-3 h for the pre-run to transfer the protein from the IPG strip into the SDS polyacrylamid gel, followed by 1,000 V, 40 W and 40 mA for the separation until the running front reached the anodic end of the gel. Protein staining The gels were stained with the high-sensitivity silver staining approach by Blum and colleagues [50]. Briefly, SDS-PAGE gels were fixed in 40% Methanol, 10% acetic acid for one hour or overnight. Then, the gels were washed three times for 20 minutes in de-ionized water, sensitized for one minute in 0.02% sodium thiosulfate, washed three times for 20 seconds in water and incubated for 20 minutes in silver staining solution (0.2% silver nitrate, 0.0074% formaldehyde). 
After washing three times for 20 seconds in water, the gels were incubated in developing solution (6% sodium carbonate, 0.00015% formaldehyde, 0.0004% sodium thiosulfate) until protein spots became visible. The reactions were stopped with 0.025% EDTA in water. Western blots The proteins from unstained SDS-PAGE were transferred onto nitrocellulose membranes (Schleicher & Schü ll, Dassel, Germany) by semi-dry blotting for 2 hours at 400 mA. Free binding sites on the membranes were blocked with 5% low fat milk powder in Tris-buffered saline (TBS) for one hour at room temperature or at 4uC overnight. After blocking, the membranes were incubated with patient sera at a 200-fold dilution for one hour at room temperature, washed thrice for 10 minutes with TBS and incubated for 30 min with alkaline phosphatase-labeled anti human IgG (Anti-Human Ig-AP, Fab fragment; Boehringer Mannheim) at a 5,000-fold dilution. After washing three times 10 minutes with TBS, the membranes were equilibrated to developing buffer (100 mM NaCl, 100 mM Tris-HCl, pH 9.5) and developed in the dark with 100 ml BCIP and 100 ml NBT in 100 ml developing buffer until antigen spots became visible. The reactions were stopped by replacing the developing solution with water. For 1-dimensional Western blots, the proteins of the melanoma cells were separated by SDS-PAGE, 12% acrylamide, 0.8% bisacrylamide, blotted 1 h with 60 mA onto nitrocellulose and processed as above. For detection of antigens, the sera were applied in a multiple channel device ( Figure 1A-E) or, after cutting the blots into strips, in individual sealed plastic bags ( Figure 1F). Since efficient agitation of the serum solutions is not possible in the multiple channel device and antigen binding is solely dependent on diffusion, the sera had to be employed at a high 6-fold dilution. For the blots developed as strips in individual plastic bags, 200-fold dilutions of the sera were used as for the 2D Western blots. Matching antigens and protein spots To match antigen spots in Western blots with the corresponding protein spot in the silver-stained gel, the coordinates of the blots and the gel were defined, first, with artificial spots at the corner points, Ponceau S staining of the blot filter and aligning the spot pattern with the spot pattern of the silver-stained gel and partial blotting of a master gel and definition of marker spot sets, second, definition of spot pattern in the local environments of the antigen spots to match blots and gels in these local regions accurately, third, comparing the sizes and shapes of the antigen and protein spots and considering only those that are alike in these two parameter. In-gel digestion of protein The protein spots were excised manually with a self-made spot picker and de-stained as described by Gharadaghi and colleagues [51] with 50 ml of Farmer's reducing solution (15 mM potassium ferricyanide and 50 mM sodium thiosulfate, both dissolved in water), then washed three times for 5-10 minutes with 150 ml water. Then the gel spots were soaked in acetonitrile and dried under vacuum. The gel pieces were re-swollen in 7.5 ml of 5 mM ammonium bicarbonate with 75 ng of modified porcine trypsin (sequencing grade, modified; Promega; Madison, USA) to fragment the protein. After 10 minutes, 7.5 ml of 5 mM ammonium bicarbonate were added and the solution with the gel pieces incubated for at least 4 hours in a 37uC water bath. 
For MS analysis, 1.5 ml of the aqueous supernatants were mixed with 1 ml of 2,5-dihydroxybenzoic acid (DHB) (SIGMA, Munich, Germany) (5 mg/ml water) directly on MALDI targets (MTP AnchorChip 600/384, Bruker Daltronik, Bremen) and air-dried. 10 Da trypsin fragments as internal standards. Monoisotopic peptide masses were recorded. The spectra were processed by the ''Xmass'' software (Bruker Daltonik, Bremen) and the peaks annotated automatically and checked manually. Post-source decay (PSD) analyses were done in 10 sections for the entire mass range and data accumulated from up to 300 shots per section. The peak lists of the mass spectra were the basis for peptide mass fingerprint analyses with the Mascot software (Matrix Science; http:www. matrixscience.com/search_form_select.html) and profound (prowl; http://prowl.rockefeller.edu/profound_bin/WebProFound. exe) using the NCBI sequence database. Supporting Information Table S1 Patient data for the sera used in the study. Melanoma patients and sera used for proteome-serological analysis of the tumor-associated antigenicity in melanoma.
Choosing Weights in Optimal Solutions for DEA-BCC Models by Means of an N-dimensional Smooth Frontier

The DEA (Data Envelopment Analysis) smoothed frontier was introduced to solve the problem of multiple optimal solutions at the extreme efficient DMUs (Decision Making Units), which hinders the knowledge of the substitution rates (tradeoffs). It consists of replacing the original (piecewise linear) frontier with a smoothed one that is as close as possible to the original and has continuous partial derivatives at every point. First, a solution was developed only for the BCC (Banker, Charnes and Cooper) model with either a single input or a single output. Then, it was generalized for the N-dimensional BCC model with simultaneous multiplicity of inputs and outputs, but limited by the fact that the polynomial of the outputs had to be linear. The present article presents a general model, which not only removes the limitations of the previous models but also includes them.

Introduction

Data Envelopment Analysis (DEA) was developed by Charnes, Cooper & Rhodes (1978) to compute the efficiency of productive units (Decision Making Units, DMUs) whenever the financial viewpoint is not the dominant consideration. DEA methodology evaluates the efficiency of each DMU taking into account its resources (inputs) and the results it obtains (outputs). Classic DEA models always present dual formulations, as does any mathematical programming model. There are, thus, two equivalent DEA formulations (Cooper et al., 2000). To put it simply, one of the formulations (the Envelope model) defines a feasible production area and works with the distance of each DMU to the frontier of that area. The other formulation (the Multipliers model) works with the ratio of weighted sums of products and resources. The weighing factors are the most favourable that can be chosen for each DMU under certain circumstances.
Being dual problems, the two formulations will obviously provide the same efficiency for each DMU.Furthermore, other information beyond efficiency can be obtained from the two models.The envelope model provides weight coefficients for each efficient DMU (designated as λ i ) to create a virtual DMU that will be used as benchmark for each inefficient DMU.For any given orientation, these coefficients are computed in single given way for each DMU.It is of particular interest to see what happens in the efficient DMUs: they are their own benchmark and, so, the envelope LPP (Linear Programming Problem) returns 1 for λ referring to that DMU and nought for all others.A highly degenerated LPP obtains thus. The multipliers model provides the weighing coefficients that each DMU will allocate to each input or output.The fact that each DMU provides different values for these multipliers is the very essence of DEA.Each DMU is free to value better whatever it is best at, and to ignore the variables in which it does not perform well.All and every DEA model should preserve this freedom in greater or smaller measure. Beside this freedom for each DMU to choose its own weights, certain DMUs may choose more than a unique set of weights.This can cause problems and the aim of this paper is to propose a way of choosing a unique set of multipliers for all DMUs.This paper is organised thus: Section 2 presents a justification for this study.A bibliographic review is presented in Section 3. The general theory of smoothing the DEA frontier is presented in Section 4. The generalisation of the BCC smoothing model and the original contribution of this study are presented in section 5.A numerical example is given in section 6 and, finally, section 7 presents the conclusions of this paper. Reasons for this study The need to know the value of the multipliers (also known as weights) stems from the economical interpretations associated to them.The commonest of these interpretations is that each weight is an indicator of the importance each DMU gives to its value to determine its efficiency.The validity of this interpretation, without further calculations, requires the variables to be previously standardised. From an economic viewpoint, multipliers may have two interpretations.The first is as components of the weight ratio between variables (tradeoffs), meaning how much should a DMU increase a given input when another decreases by one unit or how much does the increase in a given input increase an output.A second and more important interpretation is that weights are standardised shadow prices (Coelli et al., 1998).This is useful to determine the prices of quantities that have no market value; one only needs to know the effective cost of some of the variables.Knowing this, shadow prices cease to be standardised.Reinhard et al. (2000), for instance, have used this approach to calculate the price of pollution in a study on environmental efficiency.One ought to emphasise once more that there is for each DMU a different set of prices, which means that prices calculated for each DMU are valid only for that DMU. 
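For reference, the standard input-oriented BCC multipliers model that this discussion presupposes (Banker et al., 1984) can be stated as follows; the symbols used here (v for input weights, u for output weights, u0 for the variable-returns-to-scale term) are our own shorthand rather than the paper's notation:

\begin{align*}
\max_{u,\,v,\,u_0}\quad & \sum_{j=1}^{s} u_j z_{jo} + u_0 \\
\text{subject to}\quad & \sum_{i=1}^{m} v_i x_{io} = 1, \\
& \sum_{j=1}^{s} u_j z_{jk} - \sum_{i=1}^{m} v_i x_{ik} + u_0 \le 0, \qquad k = 1,\dots,n, \\
& u_j \ge 0, \quad v_i \ge 0, \quad u_0 \ \text{free}.
\end{align*}

The optimal v_i and u_j are the weights (standardised shadow prices) discussed above, and the optimal objective value is the efficiency of DMU o.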
The practical use of the multipliers interpretation hits a serious snag that is inherent to the multipliers model LPP.The complementary slack theorem may also mean that multipliers are the coefficients of an equation that defines a hyperplane tangent to the frontier on the DMU projection point (Cooper et al., 2000).As efficient DMUs (more exactly, the extreme efficient DMUs) are vertices, where there are neither derivatives nor tangent hyperplanes but an infinity of support hyperplanes, there is then an infinity of multiplier sets for each extreme efficient hyperplane all of them leading to efficiency 1 for those DMUs.Therefore, besides each DMU having the freedom to choose its own weights, which is desirable, it is impossible for the benchmark DMUs, performers of good management practices, to know the weights they have really allocated to each variable.To determine the importance of each input or output or even to calculate the shadow weights becomes compromised when dealing with extreme efficient DMUs. The lack of unique values for extreme efficient DMUs' weights has further different consequences.From a theoretical point of view, it prevents the calculation of directional derivatives along the whole frontier.From the practical point of view, it is an obstacle against the use of DEA as an auxiliary tool in multicriteria problems.Some circumstances in multicriteria problems may render it desirable that weights be allocated independently of the decision maker's judgement as, for instance, when several decision makers do not agree on them.DEA would be an excellent tool for that if it were not that the weights allocated by some DMUs are not known. Bibliographic review With greater or lesser sophistication, available literature includes several approaches to deal with the problem of more than one set of multipliers.If the numbers of extreme efficient DMUs is small when compared with its total number, it is obvious the weights allocated by the extreme efficient DMUs can be ignored and work can proceed with the weights allocated by the remaining DMUs alone (as done by Lins et al., 2003;Soares de Mello et al., 2008).The problem of more than one set of multipliers for the extreme efficient DMUs has been approached on several occasions, but their solutions have been unsatisfactory.Charnes et al. (1985) had already accepted the problem existed when they proposed the arbitrary use of a single value for derivatives obtained from the computation of an average weight based on the barycentres of the concurring hyper-surfaces.This method has several disadvantages.One is the need to know the equations for all the faces, which requires an intense load of computer work (Dulá, 2002).Another is the sudden variation owing to derivative discontinuities.Finally, it is not applicable to either the DMUs at the start of a Pareto inefficient region or those that are adjacent to an incomplete dimension face (Olesen & Petersen, 1996).Cooper et al. (2007) have proposed to refine the method put forward by Charnes et al. (1985).Some of these drawbacks may have caused the method not to be commonly used.Several authors simply ignore the problem.Occasionally, they mention it and then proceed with the solutions found for the first optimal solution by the algorithm they have used (Thanassoulis, 1993;Chilingerian, 1995). 
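To make the difficulty concrete, the sketch below (our illustration, not the authors' code) solves the multipliers model stated above with SciPy's linear-programming routine for a small hypothetical data set. For an extreme efficient DMU the solver returns efficiency 1 together with just one of the infinitely many optimal weight vectors, typically whichever vertex the algorithm happens to reach first, which is precisely the arbitrariness criticised in the literature reviewed here.

import numpy as np
from scipy.optimize import linprog

# Hypothetical data: one row per DMU (columns are variables).
X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])   # inputs
Z = np.array([[1.0], [1.0], [1.5]])                   # outputs

def bcc_multipliers(o, X, Z):
    """Input-oriented BCC multipliers model for DMU index o.

    Decision vector: [v_1..v_m, u_1..u_s, u0], with u0 free (BCC term).
    Returns (efficiency, v, u, u0); for extreme efficient DMUs the
    returned weights are only one of many optimal choices.
    """
    n, m = X.shape
    s = Z.shape[1]
    c = np.concatenate([np.zeros(m), -Z[o], [-1.0]])            # maximise u.z_o + u0
    A_eq = np.concatenate([X[o], np.zeros(s), [0.0]]).reshape(1, -1)
    b_eq = [1.0]                                                # v.x_o = 1
    A_ub = np.hstack([-X, Z, np.ones((n, 1))])                  # u.z_k - v.x_k + u0 <= 0
    b_ub = np.zeros(n)
    bounds = [(0, None)] * (m + s) + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    v, u, u0 = res.x[:m], res.x[m:m + s], res.x[-1]
    return -res.fun, v, u, u0

for o in range(len(X)):
    eff, v, u, u0 = bcc_multipliers(o, X, Z)
    print(f"DMU {o}: efficiency = {eff:.3f}, v = {v.round(3)}, u = {u.round(3)}, u0 = {u0:.3f}")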
In some specific circumstances, partial solutions are possible for the problem of more than one set of multipliers.Doyle & Green (1994), for instance, needed to calculate a unique value for each DMU's multipliers vector in their crossed evaluation model.They did that by solving lexicographically the traditional DEA model and one of two other models, which the authors named "aggressive" or "benevolent".These auxiliary models determine the multipliers to be used in the cross evaluation model.There are other ways to calculate these multipliers for the cross evaluation model as proposed by Liang et al. (2008).An alternative way to calculate multipliers can be found in a practical application by Soares de Mello et al. (2009). On the other hand, the super efficiency model (Andersen & Petersen, 1993) does not suffer from having more than one set of multipliers because of its formulation.However, it has other disadvantages.To start with, it does not limit efficiencies to the interval [0,1].Furthermore, it eliminates a different constraint for each DMU LPP, which means that the efficiency frontier has a different shape depending on which DMU is being studied. According to Rosen et al. (1998), the multipliers values can vary between the value based on the derivative at left and that on the right.These authors state that it is impossible to get round this multiplicity of values and propose a modified SIMPLEX table to calculate the multipliers variation limits. The impossibility referred to by Rosen et al. (1998) arises from the linear nature of some segments of the DEA frontier.Soares de Mello et al. (2002,2004) have shown that it is possible to get around this impossibility by substituting a new frontier with similar properties, but with derivatives at all points, for the original DEA frontier.Among the identical properties, the allocation of unitary efficiencies to the original DEA model extreme efficient DMUs is included.This technique is described here in general terms and exemplified for two-dimensional cases.It consists of smoothing the original DEA frontier while respecting the DEA Basic properties: convexity, throughput monotonicity (outputs growing together with inputs), the same efficient DMUs and allocation of different weights by each DMU.An application of this technique can be found in Gomes et al. (2004). Another type of smoothed frontier appears in the Hyperbolic and Spherical DEA models (Kozyreff Filho & Millioni, 2004;Avellar et al., 2004;Gomes & Avellar, 2005;Avellar et al., 2005;Avellar et al., 2007).These models have a different objective to that studied herein.In those articles, the authors deal simultaneously with uniformity and smoothing of the frontier.The results obtained by them can be classed as a smoothed variant of the Zero Sum Gains DEA model (Lins et al., 2003;Gomes & Lins, 2008). Smoothing the DEA frontier A pseudo-metric topology was used in Soares de Mello et al. (2002,2004) to smooth the two and three-dimensional DEA frontiers.The pseudo-metric used here measures both how close the smoothened function is to the original frontier and that of their derivatives where they exist. 
In the two-dimensional case, it was proposed that the difference in the arc length of the function's curve between two points was taken as the measure of proximity of the two functions as well as of their derivatives.The frontier between two consecutive efficient DMUs is a straight line.This is the shortest length between two points and, therefore, any other curve that connects those two points will have a greater arc length.The arc length will be an increasing function of the divergences from that curve to the straight line.Therefore, this arc length is able to measure the proximity of the smoothened and the original frontiers.It may also measure the proximity between the frontier derivatives as shown in Soares de Mello et al. (2002).It is well known that for any given curve y = f(x), the length (lL) of an arc is given by (1). where x 1 and x 2 are the minimum and maximal input values. The same arguments can be generalised for higher dimension problems if a hyper-plane is substituted for the straight line and a multiple integral for the simple one. General Formulation of the Smoothing Model For a single input DEA models, smoothing is but looking for a function that minimises the curve's arc length (or its n-dimension generalization), that contains the Pareto efficient DMUs and that have partial derivatives in every point.For computational ease, the square of the curve's arc length can be minimized with no change of the result.Then, after finding the extreme efficient DMUs in the DEA classic model, smoothing can be obtained through the variational problem (2). ( ) ( ) { } , : , In this model, F is a function of the inputs in the output and x i is the input vector.This is a BCC model (Banker et al., 1984) and, so, it requires the frontier to be convex, i. e., . With this additional constraint, the No Optimal Solution Theorem is obtained.Its proof is in Soares de Mello et al. (2002).It guarantees there is no "closest solution" although there are close enough solutions that can be used.This calculation needs a similar approach to that of the Finite Elements method (Reddy, 1993). At the beginning, when the two-dimensional case was first dealt with, Soares de Mello et al. (2002) did use an approximation for each frontier area.Owing to the two-dimension geometry, it was possible to calculate a single optimal approximate function for each side.However, for a larger number of dimensions, this becomes a computationally intensive problem (Dulá, 2002). Later, Soares de Mello et al. (2004) dealt with the frontier-smoothing problem for a larger number of dimensions and one single input or output.As opposed to the two dimensions model, a single approximate polynomial was used for the whole frontier.Higher degree polynomial functions were used for that purpose. The authors have shown that in the particular case of two inputs and one output the lowest polynomial degree can be obtained as a function of the number of extreme efficient DMUs.This function is described in table 1.This table was arrived at so the number of decision variables is larger than the number of equality constraints.As these variables are the coefficients of the approximate polynomial, its degree will depend on the number of decision variables.(3) In this model, Z stands for the output and x and y for the inputs. The DEA BCC smoothing model with two inputs and one output is shown in (4). ( ) In this model, the variables are the same as before. 
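Because the displayed formulas were lost in the source, the following is a plausible reconstruction, from the surrounding prose, of the arc-length expression referred to as (1) and of the kind of variational problem described for the two-input, one-output smoothing model; the precise symbols, and the use of the integrand without the square root as the computationally convenient surrogate for the arc length, are assumptions rather than quotations. For a frontier y = f(x) between the smallest and largest observed inputs,

\[
\ell = \int_{x_1}^{x_2} \sqrt{1 + \big(f'(x)\big)^2}\, dx ,
\]

and the smoothing problem for inputs x, y and output Z reads, schematically,

\begin{align*}
\min_{Z}\quad & \iint \Big[\, 1 + \big(\tfrac{\partial Z}{\partial x}\big)^{2} + \big(\tfrac{\partial Z}{\partial y}\big)^{2} \Big]\, dx\, dy \\
\text{subject to}\quad & Z(x_k, y_k) = Z_k \ \text{for every extreme efficient DMU } k, \\
& \tfrac{\partial Z}{\partial x} \ge 0, \qquad \tfrac{\partial Z}{\partial y} \ge 0, \\
& \tfrac{\partial^2 Z}{\partial x^2} \le 0, \qquad \tfrac{\partial^2 Z}{\partial y^2} \le 0,
\end{align*}

with Z restricted to a polynomial in x and y, as in model (5).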
The objective function ensures that the smoothed frontier and its derivatives are close to the original frontier.The first constraint ensures that the smoothed frontier includes the same efficient DMUs of the original frontier.The constraints that include first derivatives ensure that the output is an increasing function of the inputs.Finally, the constraints that include second-degree derivatives ensure the frontier is convex. Model (4) becomes model ( 5) if it is taken into account that Z is a polynomial function of x and y. ( ) In this model, the variables Z, x and y stand for respectively the output and the inputs.For the case of a single output and n inputs model ( 6) obtains. ( ) There was a change of notation for this model caused by the larger number of variables involved.For the next models, with at least more than two inputs, the notation will be the same.So, x 1 ,...x n are the DMU's inputs; Z will be the DMU's output. For the inverse case of a single input and m outputs, model (7) obtains. ( ) The second derivative is now positive.In this case, the input is a function of the outputs so the theorem of the inverse function ensures that this constraint makes the frontier to be convex. Generalization of the BCC smoothing model To deal simultaneously with both output and input multiplicity in the smoothing problem, a U function is defined as follows: When function U equals a constant, equation ( 8) represents a level.Its multidimensional generalization of the arc length ought to be minimised.As opposed to the model previously shown the new solution will obtain taking into account the differentiation of all inputs relative to all outputs and vice-versa. Formulation ( 9) represents the general DEA model smoothed for any number of input and output variables.The objective function is given by the (n+m)-uple in which x i min and x i max indicate the minimum and maximum value for each input, whereas z i min and z i max do the same for each output.Constraint (9.1) ensures that DMUs extreme efficient be contained in the smoothed frontier.Constraint (9.2) ensures the frontier's increasing monotonicity where inputs hit their maxima, whereas constraint (9.3) does the same for decreasing monotonicity when outputs hit their minima.Convexity obtains from (9.4) and (9.5) respectively for inputs and outputs.Note should be taken that, as in (7), it should always be . As U = F -H, the minus sign of this equation leads to constraint (9.5). Attention is drawn for a fact relative to the convexity of the smoothed frontier (9.4) and (9.5).Similarly to the single input or output model (Soares de Mello, 2002), it will not be possible to keep the signal constraint of the second derivatives at all points.The truth is that this will be only possible for polynomials up to the second degree.In this case, the second derivative for any input or output will be a constant as it represents a parabola with a constant convexity throughout.Therefore, for the remaining cases, a stronger constraint must be imposed: the coefficient of the polynomial term for which the second derivative is not nil, should be: • Lesser or equal to zero if it's a coefficient of the input polynomial; • Greater or equal to zero if it's a coefficient of the output polynomial. 
Finally, the equation of the smoothed frontier will be as follows: ( ) Model Properties A demonstration will now follow that the proposed model has all required conditions to be a smoothed BCC frontier: it must contain all the extreme efficient DMUs, be convex and outputs must be increasing functions of the inputs. Constraint (9.1) ensures that the frontier contains all the extreme efficient DMUs because it is equivalent to "the equality of functions F and H calculated at each of the extreme efficient DMUs values. Next, it will be shown that in this model, outputs are increasing functions of the inputs, i.e., the model ( 9) constraints lead to (11). For that purpose, the Implicit Function Theorem will be used.It allows the expression to be calculated from the partial derivatives of function U (8): From constraints (9.2) and ( 9.3) the numerator of expression ( 12) must be non-negative, its denominator non-positive.This leads to expression (13), valid for the whole frontier with the possible exception of a finite number of points where the derivative may not exist. As these isolated points do not affect monotonicity, it is demonstrated that in the smoothed frontier, outputs are increasing functions of the inputs. Lastly, to prove that the BCC frontier is convex the Convexity expression in ( 14) must be checked: Having recourse to the Implicit Function Theorem, the Chain Rule, and the Product Derivation, expression (15) obtains: As the polynomials are independent, i.e., there is no term in U that multiplies x and z, function ( 16) is valid. From model constraints (9.1) to (9.5), the two numerator factors are always non-positive while the denominator is non-negative.This leads to expression (18), valid for the whole frontier with the possible exception of a finite number of points where the derivative may not exist. As these isolated points do not affect convexity, it is demonstrated that the smoothed BCC frontier has all the desired properties. It is also easy to show through similar calculations that the one output/several inputs and vice versa cases dealt with in Soares de Mello et al. ( 2004) are but particular cases of this general model. Finding the polynomial degree As there is a polynomial for inputs and another for outputs, degrees must be found for both. To find those values it should taken into account that: • Both the input and output polynomials must have the same degree (called g) so symmetry is maintained when smoothing.• One of the coefficients can be deleted because the smoothing model equality constraint will remain true if it is divided by the coefficient's value.This property allows us to establish the convention, with no loss of generalization, that the polynomial independent term will be 1.With this in mind, function U is defined by (19). such as it satisfies (20) • The total number of coefficients, or decision variables, is the sum of the number of coefficients in output and input polynomials.• The number of output and input variables, respectively m and n, are provided by the real problem. Table 2 can now be built.From the number of variables and the number of polynomial terms, the polynomial degree is determined for both inputs and outputs.The table should be used twice: for inputs and outputs.This is a symmetrical matrix, meaning the number of polynomial coefficients is the same for either degree g and m variables or vice-versa. If no and ni are respectively, the number of outputs and inputs, condition (21) has to apply. 
N° of extreme-efficient DMUs < N° of coefficients for the no outputs + N° of coefficients for the ni inputs (21)

This expression ensures that the number of decision variables is greater than the number of equality constraints in the smoothing problem. Degree g is found on the line of Table 2 for which the minimum sum of the coefficient counts for the no outputs and the ni inputs satisfies the above inequality.

Take an example: with two inputs, three outputs and sixteen extreme efficient DMUs, there must be at least sixteen model decision variables. For two inputs and three outputs, columns 2 and 3 of Table 2 must be checked. Then, the corresponding lines in Table 2 determine the polynomial degree. Line 1 corresponds to a polynomial of degree one and shows five model decision variables, which is insufficient. Line 2 gives a different reading: two inputs correspond to five coefficients and three outputs to nine coefficients; their sum of fourteen is still insufficient. If the polynomials are of third degree, the corresponding line 3 shows nine coefficients for two inputs and nineteen for three outputs; their sum of twenty-eight is more than enough to prevent the equality constraints from generating infeasibility. Therefore, this example requires degree 3 for both the input and output polynomials.

Finding multipliers

From the smoothed frontier equation, multipliers for each input and output can be obtained.

To start with, the hyperplane tangent to the smoothed frontier at the point corresponding to an extreme efficient DMU must be determined. Let (x1^o, ..., xn^o, z1^o, ..., zm^o) be the inputs and outputs of an extreme efficient DMU o. The tangent hyperplane equation is the internal product of the gradient of U at that point with the displacement vector:

∇U(x1^o, ..., xn^o, z1^o, ..., zm^o) · (x1 − x1^o, ..., xn − xn^o, z1 − z1^o, ..., zm − zm^o) = 0 (24)

On the other hand, from DEA theory, the support plane equation at DMU o is given by (25) (Lins & Angulo-Meza, 2000). Should it exist, the support plane is also a tangent plane. As the smoothed curve is designed to ensure that tangent hyperplanes do exist, the multipliers' values are obtained from the equivalence of equations (24) and (25), together with the constraint that the weighted sum of the inputs must equal one.

For the sake of notational simplicity, the development that follows uses a two-input, two-output case that can be easily generalised. The equations to be taken into account are (26) and (27). If expressions (26) and (27) are compared term-to-term, multiplier values that do not obey equation (28) are obtained. For that term-to-term comparison, note that equation (27) is equivalent to equation (29). To solve the compatibility problem with equation (28), it should be noted that equation (29) is equivalent to equation (30) for any value of α that is not nil. Term-to-term comparison between equations (30) and (26), constrained by equation (28), then leads to the multipliers.

Calculation of the relative importance of each variable

The multipliers calculated hereabove might not have any meaning for the decision maker, as they simultaneously incorporate scale effects and subjective importance. A more meaningful quantity is the relative importance of each input (output) in establishing the virtual input (output). This can be used to increase discrimination in DEA (Angulo-Meza & Lins, 2002; Adler et al., 2002). The importance of any input i is given by (37). Therefore, entering the multipliers' values in expression (37), the importance of input i is given by expression (38). Similar calculations lead to the importance of output j, given by expression (39).
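A numerical sketch of the route from the tangent hyperplane (24) to the multipliers and importances may help. In the sketch below, U is a hypothetical quadratic frontier function, the gradient is taken by finite differences, α is chosen so that the weighted input sum equals one, and the relative importances are taken, as an assumption, to be the usual DEA virtual-input and virtual-output shares (the exact expressions (37)-(39) are not reproduced in the extracted text). None of the numbers come from the paper.

```python
import numpy as np

# Hypothetical smoothed-frontier function U = F(inputs) - H(outputs),
# quadratic, with the independent term of F fixed at 1 (illustration only).
def U(p):
    x1, x2, z1, z2 = p
    F = 1.0 + 0.8 * x1 + 1.1 * x2 - 0.05 * x1**2 - 0.02 * x2**2
    H = 0.9 * z1 + 0.7 * z2 + 0.03 * z1**2 + 0.01 * z2**2
    return F - H

def gradient(f, p, h=1e-6):
    """Central finite-difference gradient of a scalar function."""
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(p.size):
        step = np.zeros_like(p)
        step[i] = h
        g[i] = (f(p + step) - f(p - step)) / (2 * h)
    return g

dmu = np.array([2.0, 3.0, 1.5, 2.5])   # (x1, x2, z1, z2) of one efficient DMU
n_in = 2

grad = gradient(U, dmu)
v_raw, u_raw = grad[:n_in], -grad[n_in:]       # raw input / output weights

# Rescale by the alpha that makes the weighted input sum equal to one,
# mirroring the normalisation used when matching (24) to the support plane.
alpha = 1.0 / np.dot(v_raw, dmu[:n_in])
v, u = alpha * v_raw, alpha * u_raw

# Assumed relative-importance definition: share of each variable in the
# virtual input (output) of this DMU.
imp_in = v * dmu[:n_in] / np.dot(v, dmu[:n_in])
imp_out = u * dmu[n_in:] / np.dot(u, dmu[n_in:])

print("input multipliers :", np.round(v, 4))
print("output multipliers:", np.round(u, 4))
print("input importances :", np.round(imp_in, 4))
print("output importances:", np.round(imp_out, 4))
```

For a polynomial U the derivatives could just as well be written down analytically; finite differences are used here only to keep the sketch short.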
Numerical example

The theoretical concepts hereabove will now be exemplified. The data are found in Table 3. X1 and X2 are inputs; Z1 and Z2 are outputs. These data were obtained from Nacif (2005).

To determine the input and output polynomial degree, account must be taken that there are six extreme efficient DMUs, two inputs, and two outputs. From Table 2, only the column corresponding to two variables should be taken into account, for both the input and the output polynomials. Table 4 is then built from the second column of Table 2. The number of extreme efficient DMUs should be lower than the total number of polynomial coefficients to avoid infeasibility. Thus, Table 5, which relates the number of extreme efficient DMUs to the polynomial degree, is built. To do so, it must be noted that the number of extreme efficient DMUs must be lower than the value shown in the last column of Table 4. Both the input and output polynomials should be of second degree for the proposed problem.

Function U will thus be of the following type: ( ) whereas the smoothing equation will be: ( )

After calculating the input and output values as well as function U, the above smoothing equation (40) becomes model (42). Calculating the decision variable values and placing them in the smoothing equation, equation (43) obtains:

1 2 2 1000 0,00008x 0,39109x 0,04183x x 79,71422x 14,99935z 0,10645z z 10,21461z (43)

This is the smoothing equation for the given problem. It represents a hyper-surface in R^4, with derivatives at all points, that replaces the original DEA frontier, which was piecewise linear. It should be noted that some quadratic terms have disappeared because their coefficients were nil.

Conclusions

A methodology to solve the unique weights calculation problem for DEA BCC models with simultaneous multiplicity of inputs and outputs has been proposed. The methodology is an extension of the one developed by Soares de Mello et al. (2004), based on the replacement of the original DEA frontier (piecewise linear) by another, smoothed frontier (with continuous derivatives). The smoothed frontier is represented by a polynomial equation bringing together the input and output polynomials.

A model was initially developed by Nacif et al. (2004) that dealt with the n-dimensional case but restrained the output polynomial to be linear. Together with the model, the algorithm to determine the polynomial degree was generalised so that it could be calculated for all cases.

To generalise smoothing even further, a new model, called the General n-Dimensional Smoothing Model (model 9), was proposed. This model gathered all those previously proposed, as shown in section 3.1. For the sake of brevity, calculation details have been omitted; they can be seen in Nacif (2005). This new generalisation is needed to avoid asymmetric handling of inputs and outputs. In the previous model, either the input or the output polynomial had to be arbitrarily made linear. Either option would change the problem completely and, therefore, would not lead to a unique value for each multiplier, which is the main target of this paper.

The smoothing model proposed here involves more algebraic calculation than the n-dimensional model by Soares de Mello et al.
(2004). It will be recalled that the latter model had no simultaneous input and output multiplicity. The amount of algebraic calculation in the model presented here increases as the number of inputs, outputs, and extreme efficient DMUs increases. This becomes apparent when Tables 1 and 2 are compared, and it makes for high model complexity when a large number of variables and extreme efficient DMUs are involved. The calculations need several steps, some of which use different programmes. The longest operation is the transfer of data among them; the slowness of the whole procedure brings to the fore the need for specific software for the smoothing problem. This will allow more complex cases to be studied.

It is emphasised that this study has satisfied the need, mentioned by Soares de Mello et al. (2004), for a development covering cases with multiple inputs and outputs. However, models with constant returns to scale (CCR) have yet to be looked into.

Finally, it should be mentioned that this smoothing model eliminates two of the worst DEA problems: Pareto-inefficient regions on the efficient frontier and facets of non-full dimension. The first is eliminated because of a monotonicity constraint: such regions do not exist, since the partial derivatives of the outputs with respect to the inputs are never nil. Facets of non-full dimension are eliminated because the frontier is defined by a single polynomial equation. This avoids the problems raised by Gonzalez-Araya (2003).

Besides, x efin and y efin represent the inputs of any extreme efficient DMU n, and Z efin represents the output of the same extreme efficient DMU n. Variables d, f, g, h, and i are decision variables. Once again, it should be noted that the function Z is polynomial; the result of the double integral in the objective function is a quadratic function of the polynomial coefficients. As the constraints are linear, the smoothing problem becomes a quadratic programming problem.

Figure 1 - The smoothed frontier obtained by Soares de Mello et al. (2004) for two inputs and a single output.

Table 1 - Finding the polynomial degree: the approximate polynomial for a BCC DEA problem with two inputs, a single output and three efficient DMUs.

Table 2 - Polynomial degrees as a function of the number of variables and their terms.

Table 3 - Data for the numerical example; the extreme efficient DMUs are found to be A, D, E, F, H and J (Meza et al., 2005), and only the data for these DMUs are needed to calculate the smoothed frontier.

Table 4 - Number of coefficients for each degree g for two inputs and two outputs.
2016-01-22T01:30:34.548Z
2009-12-01T00:00:00.000
{ "year": 2009, "sha1": "eec09141f9812fb75ea4d7f1cee622057b09df83", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/pope/a/S3fYrkw9vhL6rydWGkR3swn/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "eec09141f9812fb75ea4d7f1cee622057b09df83", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Mathematics" ] }
10117291
pes2o/s2orc
v3-fos-license
Diagnostic Performance of Commercially Available Enzyme-Linked Immunosorbent Assay Kit in the Diagnosis of Extrapulmonary Tuberculosis Objectives: Antibody based serodiagnosis tests for tuberculosis (TB) was used widely in developed and developing countries. Pathozyme Myco® immunoglobulin (Ig) M, IgA, and IgG were evaluated in pulmonary TB in many studies. Materials and Methods: In this study we assessed this commercially available kit in detecting extrapulmonary TB (EPTB). Results: A total of 354 subjects were recruited for the study, of which 217 (61.2%) were EPTB patients and 137 (38.7%) were subjects with no suggestive TB. The mean age was 29.7 ± 13.7 and 31.2 ± 15.2 years, respectively for two groups. Serum samples were tested for IgM, IgA, and IgG using Pathozyme Myco® IgM, IgA, and IgG kit. The individual specificity rates of IgM, IgA, and IgG were 70.8% (95% confidence interval (CI): 62.7-77.7), 77.3% (95% CI: 68.6-83.5), and 68.6%. (95% CI: 60.4-75.7); while their sensitivity was 29% (95% CI: 23.4-35.4), 24.4% (95% CI: 19.1-30.5), and 34.5% (95% CI: 28.5-41.1); respectively. Conclusion: The serological tests either singly or in combination failed or performed poorly to diagnose EPTB. INTRODUCTION T he World Health Organization (WHO) declared tuberculosis (TB) a global public health emergency in 1993. TB causes ill-health among millions of people each year and ranks as the second leading infectious disease causing death worldwide. Human immunodeficiency virus (HIV) coinfection and the development of multidrug resistant (MDR) and extensively drug resistant (XDR) strains cause a major hurdle in treatment and containment of TB. [1] The latest estimates of 2011 reports that there are almost 9 million new cases and 1.4 million TB deaths of which 990,000 are among HIV negative people and 430,000 HIV-associated TB deaths. [2] TB affects the lung which results in symptoms of respiratory system known as pulmonary TB. However, other organs, such as the pleura, lymph node, kidney, and meninges, may be involved in TB under certain circumstances and this is known as extrapulmonary TB (EPTB). Patients with EPTB present with organ-related symptoms which would develop into serious complications that threaten patients' lives and cause morbidity. Reports of recent years infer that the prevalence of EPTB is getting worse, which has drawn more public attention. In 2011, 19% of new EPTB cases were notified in India. [2] One critical and early aspect of managing the TB epidemic is early diagnosis, so that the appropriate treatment can be started at the appropriate time. Diagnosis of EPTB is challenging. Routine methods for TB diagnosis such as smear for acid-fast bacilli (AFB) and culture of Mycobacterium tuberculosis on solid media, and X-rays are poorly sensitive worldwide, nucleic acid amplification test, and liquid culture methods, such as BACTEC MGIT 960 are costly and required sophisticated infrastructure. Due to these diagnostic limitations antibody based immunodiagnostic tests have been widely used which do not require a specimen of the affected organ for microbiological or histological examinations. There are numerous M. tuberculosis antigens, including native, semisynthetic, and recombinant ones that have been used in these diagnostic kits. Some of the antigens such as lipoarabinomannan (LAM) were commonly used and evaluated in areas with low and high burden of TB. 
These antibody detection methods have evolved into various formats, includes microtiter well enzyme-linked immunosorbent assay (ELISA) and immunochromatographic assays. The latter being the most widely adopted, because these do not require sophisticated instrumentation. Based on these serological methods many studies are carried out on pulmonary TB patients, while only few studies are done on EPTB patients. A meta-analysis conducted in 2007 [3] on nine published articles (25 studies) provide glimpse of variability in sensitivity (0-100%) and specificity (59-100%) of these tests. However, only one study was included from India [4] in this meta-analysis. In 2011, World Health Organization (WHO) [5] issued strong policy recommendations against the use these antibody-based commercialized serological tests. An expert group [3,6] that reviewed the evidence on use of commercial, antibody-based serodiagnostic tests found that these tests provide inconsistent and imprecise results with highly variable values for sensitivity and specificity. No evidence was found that existing commercial serological assays improve outcomes that are important to patients. We evaluated Pathozyme Myco ® immunoglobulin (Ig) G, IgA, and IgM (Omega Diagnostic Limited, Scotland, UK) on pulmonary TB patients from Delhi, India [7] and the findings were also highly discouraging. Simultaneously we also evaluated Pathozyme Myco ® IgG, IgA, and IgM in EPTB patients and the results are presented here. Setting The study was conducted between 2007 and 2011 at the Tuberculosis Research Laboratory, Division of Clinical Microbiology and Molecular Medicine, All India Institute of Medical Sciences, New Delhi. Ethical committee of the All India Institute of Medical Sciences (AIIMS), New Delhi approved the study protocol in accordance with National Guidelines by Indian Council of Medical Research, New Delhi. All patients with suspected EPTB were prospectively enrolled. All the demographic details and relevant clinical symptoms, signs, and duration were documented in predesigned subject information form. No subjects identified as having human immunodeficiency virus infection was included in the study. Case definition Enrolled patients were classified as ''confirmed tuberculosis'' if his sample was positive for smear/culture for M. tuberculosis; or had positive M. tuberculosis-specific DNA amplification from biological specimens; or histo-pathological findings consistent with TB or was highly suggestive radiological findings of TB (having excluded other disease) including appropriate response to antituberculosis therapy. We defined non-TB patients as those who were bacteriologically negative for M. tuberculosis with either a resolution of clinical symptoms after an antibiotic therapy, or confirmed to have alternative diagnosis on histopathology. Sample collection and processing Following inclusion, a 2-4 mL venous blood sample was collected in 4 mL BD vacutainers (Backton-Dickinson, Sparks, USA) without anticoagulation and allowed it to clot at room temperature for 1 h, centrifuged at 4°C for 10 min at 3000 × g and clear serum was collected, aliquoted, and stored at −80°C till further use. No sample underwent more than one freeze-thaw cycle before analysis. 
All the presumably contaminated samples were processed using modified Petroff 's methods (NALC-NaOH), but samples from sterile sites such as cerebrospinal fluid (CSF), synovial fluid, bone marrow, etc., were inoculated directly in BACTEC MGIT™960 (Backton-Dickinson, Sparks, USA) and Lowenstein Jensen (LJ) medium as published elsewhere. [8] Before culture inoculation all samples were examined microscopically after Zeihl-Neelsen stain. All the isolates were confirmed as M. tuberculosis by species specific in-house multiplex polymerase chain reaction (PCR) and phenotypic methods. [9] PATHOZYME ® MYCO IgG, IgA, and IgM test These three assays/kits are based on two highly purified immunodominant antigens. One is cell wall lipoarabinomannan (LAM) antigen which has been proved to elicit early stage antimycobacterial immune response in some studies, and second is 38-kDa mycobacterial recombinant antigen expressed and purified from E. coli. The kits claimed to be having 91% specificity and 72% sensitivity. [10] Three immunoglobulin based ELISA materials and methods) and 137 (38.7%) non-TB subjects were recruited. The mean age of patients with confirmed EPTB and non-TB patients was 29.7 ± 13.7 and 31.2 ± 15.2 years, respectively. Among the confirmed EPTB patients, disseminated TB was the most common (n = 75), followed by genitourinary (n = 57), lymph node (n = 37), pleural (n = 20), central nervous system (n = 13), gastrointestinal tract (n = 9), and skeletal system TB (n = 6). All subjects were examined for Bacillus Calmette-Guérin (BCG) vaccination, and 181 (83.4%) were found positive in confirmed EPTB patients and 117 (85.4%) positive in non-TB groups. Performance of PATHOZYME ® MYCO IgG, IgA, and IgM ELISAs were performed in all 354 subjects. The performance of all the three ELISAs were analyzed [ Table 1]. The individual and overall sensitivity rates of IgM, IgA, and IgG were 29, 24.4, and 34.5%, respectively; while their specificities were 70.8, 77.3, and 68.6%, respectively. When we analyzed these in combination of two or more ELISAs, the sensitivity further decreased and ranged from 7.8 to 11.9%; but their specificities improved significantly and ranged from 83.9 to 96.3% [ Table 1]. The PPV for IgM, IgA, and IgG were 61.7, kits (PATHOZYME ® MYCO IgG, IgA, and IgM) were used to check levels of antimycobacterial antibodies against two antigens in the serum of diseased and controls. The ELISA tests were performed according to the instructions in kits' manual (Omega Diagnostics Limited, Scotland, UK) and repeated three times as published earlier. [7] Statistical analysis For proper analysis of performance, ELISA tests were evaluated on confirmed EPTB cases and non-disease group. Sensitivity, specificity, positive predictive values (PPV), negative predictive value (NPV), and likelihood ratio for positive (LRP) test were calculated with 95% confidence intervals (CI) to determine the correct diagnosis potential of the tests. STATA SE.9 (StataCorp LP, Texas, USA) software was used for all statistical analysis. 
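The quantities reported in the Results (sensitivity, specificity, PPV, NPV and the positive likelihood ratio, each with a 95% CI) can be reproduced from a standard 2 x 2 table. The short sketch below is only an illustration of those formulas: it uses the normal-approximation (Wald) interval, which may differ slightly from the intervals produced by STATA, and the counts entered are hypothetical rather than the study's data.

```python
import math

def wald_ci(k, n, z=1.96):
    """Normal-approximation 95% CI for a proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lrp = sens / (1 - spec) if spec < 1 else float("inf")
    return {
        "sensitivity": (sens, wald_ci(tp, tp + fn)),
        "specificity": (spec, wald_ci(tn, tn + fp)),
        "PPV": (ppv, wald_ci(tp, tp + fp)),
        "NPV": (npv, wald_ci(tn, tn + fn)),
        "LR+": lrp,
    }

# Hypothetical counts for one ELISA: 75 of 217 EPTB patients positive,
# 40 of 137 non-TB subjects positive.
tp, fn = 75, 217 - 75
fp, tn = 40, 137 - 40
for name, value in diagnostic_metrics(tp, fp, fn, tn).items():
    print(name, value)
```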
Subjects and clinical characteristics During the study period a total of 217 (61.2%) confirmed EPTB (as per the criteria mentioned in ELISA: Enzyme-linked immunosorbent assay, Pos: Positive, Neg: negative, CI: Confidence interval, PPV: Positive predictive value, NPV: Negative predictive value, LRP: Likelihood ratio for positive test, Ig: Immunoglobulin, EPTB: Extrapulmonary tuberculosis; *Any ELISA represents subjects detected positive by at least one of the three (IgM/IgA/IgG) ELISA tests for IgA performed best in pleural TB (odds ratio of 2.55, 95% CI: 0.71-9.08), and IgG positive lymph node TB (odds ratio of 3.69, 95% CI: 1.02-13.34) as shown in Table 2. While comparing the detection rate among the three kits, IgG kit detected the most number of cases among lymph node TB (51.3%), gastrointestinal TB (44.4%), disseminated TB (37.3%), and bone/joint TB (33%) cases. Effect of BCG vaccination on detection rate of ELISA kit When compared the performance of ELISA on BCG vaccination status, no statistical significance was observed neither in confirmed EPTB patients nor in non-TB groups [ Table 3]. 63.1, and 63.5%, respectively; whereas their NPV were 38.6, 39.2, and 39.8%, respectively. The LRP test for IgM, IgA, and IgG were 0.9, 1, and 1.1; respectively. The mean optical density (OD) values of IgM, IgA, and IgG for confirmed EPTB patients were 2.1 ± 0.7, 1.8 ± 0.8, and 1.6 ± 0.4 OD; respectively and for without TB subjects were 1.78 ± 0.6, 1.8 ± 0.9, and 1.6 ± 0.5 OD; respectively [ Figure 1]. Performance of serology in various organ-specific disease forms The performance of ELISA among different disease categories is shown in Table 2. The highest degree of detection for IgM was found best in central nervous system (CNS) TB (odds ratio of 2.67, 95% CI: 0.69-11.06), DISCUSSION The diagnosis of active TB largely depends upon initial clinical suspicion and radiographic findings, with subsequent laboratory confirmation by bacteriologic studies. Because appropriate specimen might be difficult to obtain from extrapulmonary sites, and the number of bacilli is generally low, bacteriological confirmation of EPTB is often more difficult than pulmonary TB. [11] Even if the appropriate sample (e.g., CSF in suspected meningitis) is obtained, the sensitivity of conventional bacteriological methods are dismally poor. Because of these limitations, number of investigators relied on serology as a diagnostic tool in all settings. With the technical advances, sets of M. tuberculosis specific antigens were first identified using either hybridoma or recombinant DNA technologies. [12] However, the findings that antibody levels are considerably higher and more frequent in the multibacillary than in paucibacillary forms of the disease [13] was noted as an obstacle to clinical application of these antigens in many studies, and thus combinations of multiple antigens were used. [14] Here, we report poor utility of ELISA test in which the sensitivity of Pathozyme ® Myco IgM was 29%, IgA was 24.4%, and IgG was 34.5% in our patients of EPTB. These findings were by-and-large similar to our findings in pulmonary TB patients, where the sensitivity rates of the same kit were 48.7, 25.7, and 24.4%; for IgM, IgA, and IgG, respectively. 
[7] However, the specificity of serology was better in EPTB than pulmonary TB [ Not many authentic studies are published, but reported sensitivity of ANDA IgG detection kit used for the diagnosis of EPTB was 64% (95% CI: 28-92) for lymph node TB and 46% (95% CI: 29-63) for pleural TB, and its specificity was reported to be 90% and 87%, respectively for these patients. [3] The alternative serology method MycoDot™ is reported to have higher sensitivity of 80% and 66.7% in TB lymphadenitis and in pleural TB patients, respectively. [15] However, all studies have reported highly variable results, mainly because of difference in patient selection, criteria of diagnosis and cut-offs used. In one study, sensitivity of microwell ELISA based Pathozyme ® Myco IgM in EPTB has been reported as low as 9% [16] but in another study based on 38 kDa antigen immunochromatographic-TB test the sensitivity of 46% and specificity of 59.6% were reported in pleural TB patients. [17] Senol et al., [18] also reported sensitivity of 22.2% and 25% in pleural TB and lymph node TB, respectively; and had similar specificity of 93.3% using Pathozyme-TB Complex Plus test. This explains the fact that not all TB patients produce antibodies against all antigenic epitopes in the cell walls of the tubercle bacilli which infers on the inconsistency in specificity of antibody based assays among different patient groups like gender, age, ethnicity, and geographical distribution. It is also well known [19] that person-to-person variation in antigen recognition is the key feature of human humoral immunity against TB. [20] Despite the inconsistency and imprecise sensitivity and specificity of the commercial serological tests, these were being marketed liberally and used widely in major endemic regions including India until recently when WHO banned commercialization of these kits. [5] As a result of this WHO guideline, in June 2012 the Government of India stopped the import, manufacture, distribution, and sale of commercial serodiagnostic tests for TB. This bold action is expected to greatly reduce the frequency of false diagnoses of TB and facilitate the introduction of WHO approved diagnostics into the market. Therefore, in conclusion even though the specificity of the Pathozyme Myco IgA, IgM, and IgG was significantly better and acceptable as compared to pulmonary TB, the sensitivity of these kits was dismally poor and their use cannot be recommended for diagnosis of EPTB.
2018-04-03T00:16:09.510Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "b6643f812a81f3555cde7d004e908c98a024e9e1", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/0974-2727.115902", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1a5db1f128ece2232de6469809125003a3b4aa53", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
119203790
pes2o/s2orc
v3-fos-license
Diffraction-limited Subaru imaging of M82: sharp mid-infrared view of the starburst core We present new imaging at 12.81 and 11.7 microns of the central ~40"x30"(~0.7x0.5 kpc) of the starburst galaxy M82. The observations were carried out with the COMICS mid-infrared (mid-IR) imager on the 8.2m Subaru telescope, and are diffraction-limited at an angular resolution of<0".4. The images show extensive diffuse structures, including a 7"-long linear chimney-like feature and another resembling the edges of a ruptured bubble. This is the clearest view to date of the base of the kpc-scale dusty wind known in this galaxy. These structures do not extrapolate to a single central point, implying multiple ejection sites for the dust. In general, the distribution of dust probed in the mid-IR anticorrelates with the locations of massive star clusters that appear in the near-infrared. The 10-21 micron mid-IR emission, spatially-integrated over the field of view, may be represented by hot dust with temperature of ~160 K. Most discrete sources are found to have extended morphologies. Several radio HII regions are identified for the first time in the mid-IR. The only potential radio supernova remnant to have a mid-IR counterpart is a source which has previously also been suggested to be a weak active galactic nucleus. This source has an X-ray counterpart in Chandra data which appears prominently above 3 keV and is best described as a hot (~2.6 keV) absorbed thermal plasma with a 6.7 keV Fe K emission line, in addition to a weaker and cooler thermal component. The mid-IR detection is consistent with the presence of strong [NeII]12.81um line emission. The broad-band source properties are complex, but the X-ray spectra do not support the active galactic nucleus hypothesis. We discuss possible interpretations regarding the nature of this source. Introduction The galaxy M82 (NGC 3034) hosts the nearest and best example of an ongoing massive starburst, making it an excellent target for detailed studies at all wavelengths. The galaxy is thought to have undergone an interaction event with its neighbor M81 about 10 8 yr ago (Gottesman & Weliachew, 1977), triggering a massive nuclear starburst about 5×10 7 years ago (Rieke et al., 1980). There is also evidence for several other star formation episodes, both older and younger (de Grijs et al., 2001;Förster Schreiber et al., 2003a). Around 40 supernova remnants (SNRs) have been identified in the core (Fenech et al., 2008), and one new supernova (SN) is produced every ∼3 years (e.g. Rieke et al., 1980;Jones & Rodriguez-Espinosa, 1984). The energy output of the super star clusters hosting the SNe is thermalised and drives a large scale superwind along the galactic minor axis (Heckman et al., 1990) which can be observed in detail because of the favorable edge-on inclination of the source. In the infrared, M82 has been extensively studied with all space missions. Its infrared luminosity is measured to be 5×10 10 L ⊙ , and it shows prodigious dusty outflows and polycyclic aromatic hydrocarbon (PAH) grains in the mid-infrared (mid-IR) extending on kpc-scales (e.g. Helou & Walker, 1988;Sturm et al., 2000;Förster Schreiber et al., 2003b;Engelbracht et al., 2006;Kaneda et al., 2010;Roussel et al., 2010). There are very few high-spatial-resolution mid-IR studies of the core itself on scales of order 100 pc, because of size limitations of space missions. With their large primary mirrors, ground-based telescopes can achieve the best spatial resolution currently possible. 
In the N (8-13 µm) band atmospheric window, the highest resolution studies thus far are the works of Telesco & Gezari (1992, hereafter TG92) and Achtermann & Lacy (1995, hereafter AL95), with nominal resolutions of 1. ′′ 1 and 2 ′′ , respectively. In this work, we present the first sub-arcsec mid-IR N-band images of the core of M82. The galaxy was observed as an extension of our recent work of Seyfert galaxies (Gandhi et al., 2009), as part of a study to understand the mid-IR emission of galaxies at high angular resolution. The observations were carried out at the 8.2 m Subaru telescope. At wavelengths of 11.7 and 12.81 µm, our imaging is diffraction-limited at < 0. ′′ 4. These new images provide the sharpest mid-IR view of structures at the base of the superwind, and allow an extensive multiwavelength comparison of individual sources. Several HII regions are newly identified in the mid-IR. We also discuss the nature of a putative active galactic nucleus candidate. Using a mid-IR detection and new Chandra data, we rule out the AGN hypothesis and discuss other possibilities, include supernova remnant ionization and emission from a starburst. Distance estimates to M82 have ranged over 3.2-5.2 Mpc (e.g. Burbidge et al., 1964;Tully, 1988;Sakai & Madore, 1999). Some of the latest measurements suggest a distance at the lower end of this range based upon accurate determination of the tip of the red giant branch magnitude (Karachentsev et al., 2004;Dalcanton et al., 2009). We adopt a value of 3.53 Mpc herein, resulting in a physical scale of 17.1 pc per arcsec. Our imaging resolution limit corresponds to ≈6.1 and 6.7 pc at 11.7 µm and 12.8 µm, respectively. Observations Observations were carried out at Subaru on the night starting 2009 May 04, under generally clear weather conditions. The source was observed at the beginning of the night close to meridian, i.e. near its maximum altitude of 40 • . The Cooled Mid-Infrared Camera and Spectrometer, or COMICS (Kataza et al., 2000) was used in imaging mode with standard chopping and nodding off-source. Due to the extended nature of the emission from M82, a relatively large chop throw of 30 ′′ was used so that the sky position lay completely outside a single field-of-view of the detector. Technical difficulties prevented us from using larger throws. A North-South chop direction was adopted, because this is approximately perpendicular to the apparent major axis of the galaxy. Integration times of 60 s were used at each chop position. Subsequently, the telescope was nodded to a position at an offset of 1 arcmin to the North, and the above chopping series was replicated, this time entirely on sky. The above sequence was repeated 10 times, resulting in a total on-source exposure of 600 s per filter and a total telescope time of about 50 min per filter after accounting for observing efficiency. Imaging was obtained in the NeII (narrow-band) and the N11.7 filters, with central wavelengths (and full widths at half maximum) of 12.81 (0.2) µm and 11.7 (1.0) µm, re-spectively 1 . The median airmass of the source was 1.56 and 1.65, respectively. The Cohen standard star PI2 UMa (HD 73108; Cohen et al. 1999) was observed for photometric calibration just before the target at an airmass of ≈1.44. On-chip chopping with a throw of 10 ′′ was used, and two nod observations were obtained with exposure time 10 s each. 
Seeing was measured in this observation, as well as in a standard star observation of HD 108381 (with identical observational setup to PI2 UMa; in the N11.7 filter only) carried out approximately 1 h after the target observation was completed. The maximum systematic error in the photometry due to the absolute calibration of the standard star is ≈3%, and this error is added quadratically to all target photometric errors. Data reduction Data reduction was carried out using routines provided by the COMICS team (q series package 2 ), and following the procedures recommended in the COMICS Data Reduction Manual v.2.1.1. Some additional image manipulation was carried out in IRAF 3 . The COMICS data flow returns two sets of images for each observation: 1) a 'COMA' cube of data frames including all images obtained in one observing block; 2) a single chop-subtracted 'COMQ' image, co-added over all the frames in that block. The recommended flat fielding procedure for imaging is a 'self-sky-flat', in which the 'off' beam chop positions are used for flat fielding the 'on' beam ones, and vice-versa. This is possible because the detector is illuminated quite uniformly by sky background, much brighter than the imaged sources. To create these flats, dark frames were first subtracted from each cube of data. For the standard star reduction, the on and the off chop beam data cubes were averaged to create a separate mean image for each of the two beams (hereafter, called 'mean beam images'). Low spatial frequency trends across the detector are then removed by simply dividing through a heavily Gaussiansmoothed version of the mean beam images. This results in the desired flats for one observing block, and each COMQ coadded frame can be divided by these flats to create calibrated on and off beam standard star images, with the pixel-to-pixel sensitivity variation removed. This flat fielding procedure was carried out separately for each nod position, following which the two beams are shifted and averaged (with flux inversion of one). The target was observed with a large chop angle so that the off beam images are devoid of any bright sources. Hence, only the flats created from the off beam positions are required to calibrate the on beam target images. So, for the target, we simply divided each COMQ image by the off beam flat created as described above. This was done identically for the nod position as well. Each flat fielded nod position image was then subtracted from its immediately preceding flat fielded target image, resulting in 10 calibrated and background-subtracted images of the target. One of the point-like sources in each image was used as a reference for determining small shifts necessary to create the final co-added image. Two systematic sources of noise affect COMICS observations. Firstly, a low level random noise in the form of horizontal stripes is introduced into every exposure by the readout amplifiers. Although it is largely periodic in nature across the field of view as a result of the 16 identical amplifier channels employed, this is difficult to remove because of 1) its random nature from exposure to exposure; 2) the fact that our target covers most of the field of view, leaving free very little of the CCD for determination of the noise level. No optimal solution was found for removal of this read noise component without introducing additional statistical uncertainties in the process, so no attempt is made to remove it, except for display purpose. 
Secondly, we find non-negligible background residuals in our N11.7 image. This is a result of the fact that 1) the chop throw (and nod offset) is relatively large which means that the background and target optical light paths differ significantly; and 2) the typical time delay between a chopped beam observation and its corresponding nod observation is of order several minutes, during which time the sky level can vary significantly. These residuals are more prominent in the N11.7 image because of the wide filter width could make it more sensitive to sky variations. In order to remove these, the final NeII image was rescaled in order to produce the best match to the target in the N11.7 filter, and then subtracted. This leaves a background image dominated by the N11.7 background residuals, with only some additional low level structures due to the varying SEDs of targets across the field of view between the two wavelengths. This background image can then be subtracted from the shifted and co-added N11.7 image to create the desired residual-free product. Final calibrated images are shown in Fig. 1. The attached absolute astrometry was determined based on multi-wavelength comparison of detected sources, as described in the following sections. The coordinates attached to all images presented herein are for the J2000 equinox. Point spread function The point-spread-function (PSF) size in our observations may be accurately measured from the photometric standard star observations, which were carried out both before, and after, the target observations. The stellar flux contours are displayed in Fig. 2. Also overplotted is the expected diffraction limit resolution for both filters. The 50 per cent stellar flux contour matches the expected ideal full-width at half-maximum (FWHM), meaning that our observations were closely diffraction-limited at 0. ′′ 36 and 0. ′′ 39 in the N11.7 and NeII filters, respectively. Source detection and photometry In order to isolate the most significant discrete sources in the target field of view, we carried out automatic detection with the SExtractor package (Bertin & Arnouts, 1996). The dominant source of uncertainty for source detection and photometry is the diffuse and bright galactic background emission. An it-erative process of interpolating the image on a smooth mesh and removing detected sources is employed for proper modeling of this background. But the complex nature of the observed emission makes the local background quite sensitive to the choice of the mesh size. We thus decided to run SExtractor in two passes using different mesh sizes. In the first instance, a background map is created from a mesh with squares of size 8×8 pixels. A limit of 3σ (with σ being the root-mean-square variance in the background map at the mesh location of interest) was chosen for defining significant sources, and a rather small minimum detection area of 3 contiguous pixels with a significant signal was adopted for detecting the more compact sources. In a second pass, we chose a smaller background mesh size of 4×4 pixels, and ran the source detection procedure again. SExtractor fits ellipse profiles to all sources, and only those sources whose centroids remained consistent to within some threshold between the two passes were retained in the end. A threshold of size equal to the semi-minor axis for the NeII filter (and twice this value for the N11.7 images which are affected by worse residuals, as described above) was found from experimentation to work well. 
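The two-pass strategy can be mimicked with the sep package, a Python re-implementation of the SExtractor photometry core, used here purely as a stand-in for the SExtractor runs described above: estimate the background with two different mesh sizes, extract sources in each pass, and keep only the detections whose centroids agree within a tolerance. The image below is synthetic; the 8x8 and 4x4 meshes, the 3σ threshold and the 3-pixel minimum area mirror the values quoted in the text, while everything else is made up.

```python
import numpy as np
import sep

def detect(data, mesh, thresh=3.0, minarea=3):
    """Background estimation and extraction with a given mesh size (pixels)."""
    bkg = sep.Background(data, bw=mesh, bh=mesh)
    return sep.extract(data - bkg.back(), thresh, err=bkg.globalrms,
                       minarea=minarea)

def match_passes(cat_a, cat_b, tol_pix):
    """Keep sources of cat_a whose centroid is reproduced in cat_b."""
    if len(cat_b) == 0:
        return []
    kept = []
    for src in cat_a:
        d = np.hypot(cat_b["x"] - src["x"], cat_b["y"] - src["y"])
        if d.min() < tol_pix:
            kept.append(src)
    return kept

# Synthetic field: smooth, clumpy "galaxy" emission plus three compact sources.
rng = np.random.default_rng(1)
y, x = np.mgrid[0:256, 0:256]
data = 50.0 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / (2 * 60.0 ** 2))
for cx, cy, amp in [(90, 100, 40.0), (160, 140, 30.0), (120, 180, 25.0)]:
    data += amp * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * 2.0 ** 2))
data = (data + 3.0 * rng.standard_normal(data.shape)).astype(np.float32)

pass1 = detect(data, mesh=8)     # first pass: 8x8-pixel background mesh
pass2 = detect(data, mesh=4)     # second pass: 4x4-pixel background mesh
confirmed = match_passes(pass1, pass2, tol_pix=2.0)
print(len(pass1), "and", len(pass2), "raw detections;", len(confirmed), "confirmed")
```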
Note that a very small background mesh can result in subtraction of some flux from the source wings themselves. Test runs on the photometric star images showed that using a mesh as small as 4×4 pixels results in fluxes lower than the total expected values by factors of about 1.2 and 1.3 (N11.7 and NeII, respectively), so all SExtractor-determined fluxes have been increased by these amounts. Because of the clumpy nature of the galaxy emission, additional systematic uncertainties of ≈10% are estimated for this background correction, and these have also been included. A better photometric solution will require accurate mapping of the galactic background emission (in longer exposures or using space telescopes), which the present data do not allow. Calibration of counts to flux was carried out using the photometric standard observations described in § 2. Final source fluxes are listed in Table 1 and the sources are identified in Fig. 3. Finally, in order to determine limiting fluxes for non detections, we simulated two dimensional Gaussian profiles matched to the PSF, scaled these according to flux and added them in at specific positions in the images where limits were required. We then passed these images through our detection pipeline, and determined the flux limit corresponding to the faintest source that could be detected. We find nominal point source flux limits of ≈43 mJy and 18 mJy in the NeII and N11.7 filters, respectively, at the radio kinematic center position of Weliachew et al. (1984). But these limits can vary significantly with position and source morphology assumed, so we quote relevant numbers for some interesting sources individually in the following sections. Multi-wavelength registration of images Registration of images at multiple wavelengths (to better than half-arcsec accuracy) is a non-trivial task, given the small field-of-view of COMICS, the extended nature of the bulk of the emission and the strong dust extinction towards the core. Extensive high resolution radio studies over the years have accurately mapped many supernova remnants (SNRs) and HII re-[Vol. , Table 1. The source position angle and extents are those determined from isophotal photometry with elliptical profiles by SExtractor. gions, which arguably provide the best reference system for cross-registration. We used the VLA/MERLIN 5/15 GHz study of McDonald et al. (2002, hereafter M02) to search for source matches with our mid-IR NeII image. A possible solution was identified in which four radio HII regions coincided with mid-IR NeII detections to within 0. ′′ 5. These four are listed in Table 1 and are highlighted in Fig. 4. The rms error of the astrometric solution fit was 0.27 pixels, or 0. ′′ 036, about 10 times smaller than a single resolution element. Adopting this solution required a shift of about 0.5 arcsec from the raw Subaru astrometry baseline. The astrometry for the N11.7 image was tied to that determined for the NeII filter, and all four HII regions are also detected in the N11.7 image. In order to check whether these mid-IR detections are consistent with being dust emission from HII regions, we plot their radio to mid-IR spectral energy distributions (SEDs) in Fig. 4, and compare to the SEDs of the massive embedded star clusters identified by Galliano et al. (2008) in NGC 1365. The overall match is excellent given the long 'lever arm' between the radio and infrared, and given that some natural variation amongst the strengths of the [NeII]λ 12.81µm line dominating the NeII filter is expected. 
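Registering the mid-IR frame against the radio reference amounts, in the simplest case, to solving for a constant shift that minimises the offsets of the matched pairs and quoting the rms of the residuals, as done above. A minimal sketch with made-up coordinates (offsets in arcseconds from an arbitrary reference point):

```python
import numpy as np

# Matched positions of the cross-identified HII regions: radio reference
# coordinates and raw mid-IR centroids (all values invented for illustration).
radio = np.array([[ 1.20,  0.40], [-2.10,  1.10], [ 0.70, -1.60], [ 3.30,  0.20]])
midir = np.array([[ 1.68,  0.17], [-1.63,  0.84], [ 1.21, -1.85], [ 3.82, -0.06]])

# Best-fitting constant shift: the least-squares solution is simply the mean offset.
shift = (radio - midir).mean(axis=0)
residuals = radio - (midir + shift)
rms = np.sqrt((residuals**2).sum(axis=1).mean())

print("shift to apply (arcsec):", np.round(shift, 2))
print("rms of the fit (arcsec):", round(float(rms), 3))
```

For only four matched pairs a pure shift is the safest model; adding rotation or scale terms would require more cross-identifications over a larger field of view, as noted in the text.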
Such a registration procedure is not foolproof, so we have carried out cross-checks of our astrometry against published (comparatively) low spatial resolution data mentioned previously: the [NeII] line map of AL95 and 12.4 µm imaging from TG92. A near-perfect astrometric match is found with the former at the center of our field of view, but with an increasing radial offset towards the edges. The 12.4 µm images of the latter show a small systematic offset to larger right ascension with respect to both our astrometry and to the line map of AL95. All of the above relative offsets lie below ≈1. ′′ 4 and are unlikely to be significant, because the absolute position accuracies or spatial resolution available to these authors were ∼1. ′′ 5. The above comparisons are self-consistent with our astrometric solution, and we estimate the absolute positional accuracy of our images to be better than 1 ′′ . Further detailed checks must await high resolution data and more cross-identifications over a larger field of view. Comparison with near-IR images We start with a qualitative comparison of the overall distribution of stars with that of dust. For the stellar distribution, we used a 1.6 µm NICMOS image obtained from the Hubble Space Telescope archive, and its astrometry was calibrated by using the association found by Kong et al. (2007, see their Fig. 8) for their point source designated as J095551.0 in Chandra X-ray imaging. Fig. 5 presents our NeII image and the NICMOS 1.6 µm image. These have been overlaid on to a single image in Fig. 6. There is a distinct anti-correlation between the visibility of stars at near-IR wavelengths, and the appearance of mid-IR dust. In particular, the group of wellknown super-star clusters MGG-7, 9 and 11 (cf. Fig. 8 of Kong et al. 2007) is neatly nestled within the mid-IR gaps. Other clusters, e.g. MGG-3, 6, 8, q and other massive agglomeration of stars (McCrady et al., 2003), all appear where the mid-IR emission is weakest. Heavy and patchy dust extinction is known to affect the core of the galaxy -A V >25 mags (Willner et al., 1977;Rieke et al., 1980;O'Connell et al., 1995) -which may easily be higher within individual clumps. Using the standard interstellar extinction law (Rieke & Lebofsky, 1985), the corresponding H band extinction is expected to be A H >4 mags. Thus the effect of dust is sufficient to cause the anti-correlation between mid-IR and near-IR structures. Comparison with X-ray images We also carried out a comparison against two long X-ray images obtained from the Chandra archive: sequences 600735 and 600736 with observation dates of 2009 Jun 24 and 2009 Jul 1, respectively. In each case the center of M82 lies on the S3 chip of Advanced CCD Imaging Spectrometer (ACIS). Results presented here use CIAO v4.2 and the CALDB v4.3.0 calibration database. The data were re-calibrated using VFAINT cleaning, with random pixelization removed and bad pixels masked, following the software 'threads' from the Chandra X-ray Center (CXC) 4 to make new level 2 events files. Only events with the recommended grades of 0,2,3,4,6 were used. The observations were free from large background flares, and after removal of time intervals when the background deviated more than 3σ from average, the exposure times for sequences 600735 and 600736 were 118.413 and 118.054 ks, respectively. The default astrometry attached to the images was found to agree to within ∼ <0. ′′ 3 with the data presented by Kong et al. (2007), and no further refinement was done. In Figs. 
5 and 6, an exposurecorrected 1.2-5 keV merged image of the two observations is included. Note that the brightest source in the field ULX X-1 is likely to be piled up (see also Feng & Kaaret, 2010) and a 'halo'-like counts depression visible in Fig. 5 seems to be caused by this. There is also a faint readout streak extending from the source position in both directions on the ACIS data. But these effects are not relevant for our work, because the chip regions used for spectral analysis (later in § 8) do not overlap with this; nor do we use this source for astrometric calibration or any other analysis. There are hints of association of the mid-IR wind streamers with structures in the diffuse X-ray emission at faint levels. There is also a conspicuous dust lane of extinction which bisects the stellar distribution (e.g. O'Connell et al., 1995), and this appears to be associated with cold gas obscuration as well. This is manifest as a sharp decrement in the diffuse galactic Xray emission (easily visible in the bottom panel of Fig. 5) coincident with the near-IR deficit. The mid-IR emission avoids several spots along this band (in particular, near the dynamical center) which may be sites of extremely thick and cold absorbing material. Detailed analysis of these individual structures is left for future work. With regard to the X-ray point sources, the most important coincidence with an mid-IR source is that of source # 18, which will be discussed in detail in § 8. The well-known bright ultraluminous X-ray sources (ULXs) known in the core of M82 do not show bright mid-IR counterparts. There is, however, one potential match. Kong et al. (2007) note that their X-ray source J095551.0 (which we use for the NICMOS astrometric calibration in the previous section) coincides with the radio source Table 1. The four cross-identified sources are labelled in red, and their radio positions are marked with 1 ′′ diameter circles. The corresponding mid-IR counterparts are labelled with brackets in green (see Table 1 and Fig. 3. A fifth possible association of the radio source 42.21+59.2 is with mid-IR source # 11; see text in § 6. (Bottom) Radio to mid-IR SEDs of the four cross-identified sources are shown. The dashed red and orange lines shows the SEDs for massive clusters M5 and M6 from Galliano et al. (2008). 42.21+59.2 [B1950] from M02. Our source # 11 (I51.01+45.7 [J2000]) lies at a separation of 0. ′′ 5 from the radio position. This offset, combined with the fact that the mid-IR counterpart is elongated (roughly along the north-south direction; Fig. 3), makes cross-identification uncertain. But there are two noteworthy points: 1) M02 also found an extension for the radio counterpart along a similar direction (see also Fenech et al. 2008); 2) our mid-IR counterpart bridges the near-IR+X-ray position and the radio one. So we tentatively assign this source as the fifth HII region identified in the mid-IR (it is marked in Fig. 4). As discussed by Kong et al. (2007), the X-ray counterpart may be a ULX hosted within the star cluster detected by M02 in the radio (and also now by us in the mid-IR). Discrete sources The general source population in the core of the galaxy consists of discrete sources whose emission is superposed over a diffuse radiation field. Of all detected sources, only one (source # 8 [I50.44+48.2] in Table 1) appears to be truly point-like based on the structural parameters returned by SExtractor and a FWHM of close to that expected for the diffraction limit. 
Note that the morphology of source # 18 [I52.70+45.9], which we will discuss further in § 8, is uncertain due to its faintness and the fact that it is affected by low level read pattern noise. Most of the discrete sources appear extended at our high angular resolution of ∼ <0. ′′ 4 (corresponding to a linear scale of ∼ <6-6.5 pc). This is in contrast to the optical and near-IR ap-pearance of super star clusters detected in HST NICMOS imaging, where most were found to have half light physical extents of ∼3.5 pc or less (O'Connell et al., 1995;McCrady et al., 2003). Our detected dust features may then be a result of large scale outflows from the starburst regions. The measured source centroids and fluxes are listed in Table 1, and the sources are identified in Fig. 3. There is good overall agreement in the emission structures between the two images, though the NeII filter shows a greater number of significant individual detections as compared to N11.7, especially towards the outer parts of the field of view. Many sources have NeII filter fluxes higher than the corresponding N11.7 filter fluxes by factors of several at least, meaning that the [NeII]λ 12.81µm dominates in these cases. This is true for all the HII regions cross-matched with the radio catalog of M02, which is consistent with the fact that this emission line is a strong star-formation indicator, and also why more sources are detected in the NeII filter. In Table 1, we comment on some source properties and associations with ionized gas sources identified by AL95. In particular, we have found additional structure to their 'E1, W1 and W2' peak emission sites on the eastern and western limbs, and on the western ridge of [NeII] line emission. A couple of sources are also found on the faint 'bridge' of emission connecting the eastern and western limbs of the ring. Finally, two of the tabulated sources (# 19 and # 21) have NeII fluxes of less than 1.5 times the continuum flux in the N11.7 filter, much -- Table 1. Parameters of detected discrete sources. Source designation is relative to (09h55m, +69d40m) and (09h51m, +69d54m) in J2000 and B1950, respectively. The prefix 'I' in the designation refers to the fact that these are Infrared coordinates. The final column states cross-identification with the radio catalog of M02, or with diffuse structures in AL95. In some cases, other comments are given. † These sources were used for astrometric calibration. less than in other cases. This may suggest atypical SEDs with continua rising to shorter wavelengths. Upon examining Fig. 3, though, it seems that extended emission is present in the NeII filter at the positions of both these sources, but SExtractor is simply unable to isolate it from the highly clumpy diffuse background. In fact, simulating the addition of a weak point source at the position of source 21 in the NeII filter suggests a limit of ∼90 mJy which, in turn, implies a NeII/N11.7 flux ratio consistent with the distribution of flux ratios for detected sources in Table 1. Diffuse features and galactic emission Underlying the discrete sources is a diffuse emission field from the galaxy which completely dominates the radiated flux. Integrating all observed photon counts over the COMICS field of view yields total fluxes of 55 and 108 Jy in the N11.7 and NeII filters, respectively. Systematic uncertainties from detector cosmetics and the standard star absolute calibration dominate here ( § 5), for which we allow for 10% variations along the full field. 
The 11.7 µm flux is within a factor of 2 of the continuum flux reported by Beirão et al. (2008) for their 'total' aperture. This comparison is likely to be full consistent, given the difference in aperture sizes and positions used, perhaps some contribution of the 11.2 µm polycyclic aromatic hydrocarbon feature to our broad-band photometry, and absolute cross-calibration uncertainties. Removing the combined flux of the discrete sources identified in Table 1 gives 53 and 106 Jy for the two filters. This diffuse emission possesses substantial sub-structure, of which we highlight a few specific aspects. The most striking features of our high resolution imaging are several wind 'streamers', which stand out better in Fig. 7 which has been smoothed by convolution with a square top hat kernel 5×5 pixels (0. ′′ 67×0. ′′ 67) wide, in order to enhance faint diffuse features. Most of these are visible on the northern side, with an orientation roughly perpendicular to the major axis of the galaxy. The longest of these in our image is a chimney-like straight structure, with a linear extent of over 7 ′′ , or about 120 pc. Chopping in the mid-IR cuts out a lot of extended emission. So, we are undoubtedly seeing only the inner regions of much larger streamers and chimneys which have been detected on wider scales (e.g. Engelbracht et al., 2006;Kaneda et al., 2010). Towards the north-east of the field of view, two structures extend outwards and curve in towards one another, resembling the edges of a limb-brightened bubble. Assuming a circular shape, the projected diameter of this feature is ≈8 ′′ , corresponding to a physical size of ≈140 pc. Limb-brightening is much less pronounced near the top, which may suggest wind rupture, if real. Ruptured bubbles are a natural consequence of over-pressurization of outflowing hot gas in the superwind model which explains the burst of star formation in the core of M82 (e.g. Chevalier & Clegg, 1985;Heckman et al., 1990). If there is entrained clumpy dust within this outflow, the dust distribution will follow that of the gas. On the other hand, there is no obvious evidence from multi-wavelength data that this bubble is filled in with hot plasma. Its reality as a bubble or as two separate streamers need confirmation from further observations. These various features do not point back to a single site of energy ejection. The starburst in M82 is known to occur in a 'ring', rather than a compact nuclear concentration (e.g. Nakai et al., 1987, AL95), and the mid-IR emitting hot dust indeed traces out portions of this ring. Multiple burst events of dust expulsion along the entire inner ∼500 pc region of this ring are required to explain the data. Dust temperature and mass The mass of the hot, mid-IR emitting matter may be estimated by assuming optically-thin thermal radiation from uniform spherical dust grains. The largest uncertainty in this estimate is the unknown and spatially-variant dust temperature. Telesco & Harper (1980, hereafter TH80) were able to fit a modified black body emission model with a temperature of ∼45 K for the dust emitting at far-IR wavelengths of 41-141 µm over the central ∼900 pc region. They also found the observed fluxes at shorter wavelengths from 2-21 µm to overestimate the thermal model predictions by many orders of magnitude, meaning that hotter dust components characterize the mid and near-IR emission. In Fig. 8, we overplot the 5-41 µm fluxes and 45 K thermal model from TH80, along with our integrated N11.7 filter flux. 
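The hot-dust quantities worked out in the following paragraphs (a colour temperature of ~160 K and a mass of ~950 M⊙) can be checked with a few lines. The mass expression used below is the standard optically-thin relation commonly attributed to Hildebrand (1983), M_d = 4 a ρ F_ν D² / (3 Q_ν B_ν(T)); since the equation itself is missing from the extracted text, its exact form here should be treated as an assumption, as should the grain radius and density entered.

```python
import numpy as np

# Physical constants (SI)
h, c, k_B = 6.62607e-34, 2.99792458e8, 1.380649e-23
M_sun, pc = 1.989e30, 3.0857e16

def planck_nu(nu, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

# Values taken from the text; grain radius and density are assumptions.
F_nu   = 55.0e-26        # integrated 11.7 um flux of 55 Jy, in W m^-2 Hz^-1
D      = 3.53e6 * pc     # adopted distance to M82, m
T_dust = 160.0           # modified-blackbody temperature, K
a      = 0.1e-6          # grain radius, m (assumed)
rho    = 2250.0          # graphite grain density, kg m^-3 (assumed)
Q      = 0.009           # grain emissivity at 11.7 um (Draine 1985)

nu = c / 11.7e-6
M_dust = 4 * a * rho * F_nu * D**2 / (3 * Q * planck_nu(nu, T_dust))
print(f"hot dust mass ~ {M_dust / M_sun:.0f} M_sun")   # ~960, close to the quoted ~950
```

The result scales linearly with the assumed grain radius and density, so it should be read as an order-of-magnitude figure rather than a precise mass.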
There is some non-negligible uncertainty resulting from varying beam sizes and centroids amongst the data plotted in this figure so we do not carry out a detailed fit. But a qualitative comparison is reasonable if the flux is mostly centrallyconcentrated. If a two-temperature-only model is assumed, we find that a modified black body with flux density varying as ν 1.5 B ν,T (where B ν,T is the Planck function at frequency ν and temperature T ) is a fair representation of the 10.5-21 µm integrated source fluxes for T ∼160 K. The emissivity power-law index of 1.5 has been assumed to be identical to that found for the cooler dust by TH80. This model is also plotted in Fig. 8 (normalized to our 11.7 µm data), and shows that the 5 µm excess requires other even hotter dust components, if this emission is thermal in origin. Using this model, the dust mass may be estimated by employing the infrared emissivity relation of Hildebrand (1983), where M d , D, a and ρ are the dust mass, galaxy distance, grain radius and specific density, respectively. F ν and Q ν,a are the observed flux density and the grain emissivity. Assuming emission from the grain surface into all hemispherical solid angles gives the factor of π. Taking canonical values of the size and density for a graphite grain composition gives for B ν,T expressed in W m −2 Hz −1 sr −1 . A mean 11.7 µm emissivity of Q ν,a ≈0.009 for graphite grains with diameter of a=0.1 µm is used (Draine, 1985). The observed total flux and model temperature then imply M d ∼950 M ⊙ for the hot dust. In comparison with the mass of cooler 45 K dust determined by TH80, this is about 500 times lower. 6. RGB overlay of the three panels from Fig. 5, with radio regions plotted as dashed circles (HII regions) and SNRs (plus signs). The white X sign is the kinematic center (Weliachew et al., 1984). There are two heavy plus signs: about 4 ′′ to the East of the center, and 1 ′′ South is the AGN candidate in magenta (see § 8), and about 2 ′′ to the West and 1 ′′ South is SN 2008iz in white (Brunthaler et al., 2009). The white box marks the position of the unusual radio transient reported by Muxlow et al. (2010). and Rieke & Low (1972) (5, 10.5 and 21 µm) overplotted with our integrated COMICS 11.7 µm flux (red box). The green and blue curves represent modified black bodies with temperatures of 45 K (cf. Fig. 1 of TH80) and 160 K, respectively. The black curve is the sum of these two. Candidate AGN Our target # 18 (I52.70+45.9 [J2000]) has a uniquely interesting story. In our astrometry-corrected images, it is found to be coincident (within ≈0. ′′ 1) with the radio source 44.01+59.6 [B1950] from McDonald et al. (2001, and references therein). Based upon several lines of evidence, it has been suggested that this may be a weak active galactic nucleus (AGN) in M82. The evidence includes the atypical radio SED, detection of OH maser satellite lines and also a possible elongated radio jet (Wills et al. 1997;Seaquist et al. 1997;Wills et al. 1999). Other detailed studies have shown that its SED is not atypical for a SNR, and it has an expanding shell with an expansion velocity of 2700±400 km s −1 (Allen & Kronberg, 1998;Fenech et al., 2008). Several hard X-ray AGN searches have been carried out over the years (Tsuru et al., 1997;Ptak & Griffiths, 1999), and a potential counterpart has been detected with ROSAT (Stevens et al., 1999, cf. 
their source X-3) and even Einstein (Watson et al., 1984, who also suggested this as a possible nucleus, based on its proximity to the 2.2 µm core). But there is considerable diffuse soft X-ray emission in this region, and the high resolution of Chandra is required for resolving the complex unambiguously (e.g. Matsumoto et al., 2001). No unambiguous and strong X-ray:radio association to our knowledge has been reported to date. The source lies close to, but not exactly at the dynamical center of the galaxy. It lies 4. ′′ 1 (70 pc) and 2. ′′ 1 (36 pc) from the radio and optical kinematic centers quoted by Weliachew et al. (1984, see also O'Connell & Mangano 1978 errors are ≈1. ′′ 5-2 ′′ in each coordinate). In summary, the nature of this object still remains uncertain. In Fig. 9, we present zoom-in images of the source position in the mid-IR (our NeII filter), the near-IR (NICMOS 1.6 µm), and in X-rays (Chandra ACIS 0.5-2 keV and 3-7 keV). There is no point-like source in the near-IR, but it is detected significantly in our NeII image. A source also clearly appears in X-rays, especially at hard energies. We designate this source as CXOU J095552.7+694046 (with J2000 coordinates of 09:55:52.7+69:40:45.8), or J095552.7 for short. Its position relative to other X-ray sources within the core of M82 discussed by Kong et al. (2007) is shown in the bottom panel of Fig. 5. We now present analysis of the new long-exposure Chandra archival data analyzed herein. In the next section, we will also examine the broad-band SED and comment on the source nature. Chandra X-ray data First, we extracted a radially-averaged surface brightness profile of the source, and compared this to a simulated PSF image. The PSF was constructed using the Chandra ray tracing package CHART 5 assuming observational parameters including nominal and source coordinates, roll angle, and ACIS-S chip identical to those for the analyzed data (the sequence 600735 was used for this comparison). The ray trace was then projected onto the detector plane and a PSF events file generated using MARX. The profile of the source for photons extracted over the hard 3-7 keV energy range, compared to the PSF expected for 4.5 keV photons, is shown in Fig. 10. A constant background level corresponding to the outermost radial bin has been removed. The source appears to be consistent with a point source, at least within the core. There may be evidence of an extended 'wing' around 1 ′′ radius, but systematic effects of the (position-dependent) underlying galactic background cannot be ruled out. Next, the X-ray spectrum of the source was extracted. A circular aperture of radius 1. ′′ 25 was used to accumulate source counts. The diffuse galactic flux around the source is clumpy and affected by cold gas absorption to its immediate north. An irregularly-shaped polygonal region surrounding the source was selected for background subtraction, avoiding nearby point sources as well as strong absorption to the north-west. These regions are shown in Fig. 9. For spectral extraction the CIAO task psextract was followed by mkacisrmf and mkarf to apply the latest calibrations. Models were fitted jointly to the two separate observations. Fitting was performed in XSPEC (Arnaud, 1996) and includes absorption along the line of sight in our Galaxy fixed at a column density of N H = 4×10 20 cm −2 (based on the data by Dickey & Lockman, 1990) 6 . A minimum grouping of 30 counts per spectral bin was applied for fitting and uncertainty determination with the χ 2 statistic. 
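The "minimum grouping of 30 counts per spectral bin" step can be illustrated with a short sketch. In practice this is normally done with the standard CIAO/FTOOLS grouping tasks; the channel counts below are synthetic.

```python
# Hedged sketch: group an X-ray pulse-height spectrum so that every fitting bin
# contains at least 30 counts, as required before chi-square fitting.
# The input counts are synthetic, not the extracted M82 spectrum.

import numpy as np

def group_min_counts(counts, min_counts=30):
    """Return a list of (start_channel, stop_channel, summed_counts) groups."""
    groups, start, running = [], 0, 0
    for i, c in enumerate(counts):
        running += c
        if running >= min_counts:
            groups.append((start, i, running))
            start, running = i + 1, 0
    if running > 0 and groups:            # fold a short tail into the last group
        s, _, prev = groups.pop()
        groups.append((s, len(counts) - 1, prev + running))
    return groups

rng = np.random.default_rng(0)
synthetic = rng.poisson(lam=8.0, size=200)   # fake channel counts
grouped = group_min_counts(synthetic)
print(f"{len(synthetic)} channels -> {len(grouped)} groups; "
      f"min counts per group = {min(g[2] for g in grouped)}")
```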
A fit to the resultant 0.7-6.8 keV spectrum (ignoring 'bad' bins and energies below 0.5 keV) with a single power law component results in a hard power law with photon index Γ=1.11±0.08 (all uncertainties are quoted at 90% confidence), but with an unacceptable goodness of fit χ 2 =175 for 100 degrees of freedom (dof), with residuals suggesting the presence of emission features below 4 keV including those from highly ionized Si, Ne and Ar. Several models, including a combination of absorbed power laws, or a power law combined with a thermal (APEC) plasma model (Smith et al., 2001) were tried. Although some of these yield statistically-acceptable fits, the power law slope above 3 keV turns out to be very soft, with Γ restricted to ∼ >2-3 in all cases. Furthermore, residuals suggesting additional emission bands still remain unaccounted for. The best model is a combination of two absorbed thermal plasmas, with temperatures of 0.6 keV and 2.6 keV, absorbed by significantly different columns of N H =8×10 21 and 4.3×10 22 cm −2 , respectively. With abundances fixed at Solar, assuming the Lodders (2003) abundance table for both temperature components, a χ 2 /dof=101.5/96 is found. The spectrum with this best fit is shown in Fig. 10 and the parameters are listed in Table 2. Adopting only a single absorber produces an unacceptable fit. Regarding abundances, assuming a yield appropriate for SN II instead (Nomoto et al., 2006, averaged over a Salpeter initial mass function from 10 to 50 M ⊙ , with a progenitor metallicity of Z=0.02) produces a fit similar to the best one. Letting the abundances of the two components vary also does not improve χ 2 significantly. On the other hand, those for SN Ia (Iwamoto et al., 1999, their W7 model) yield an unacceptable fit. Residuals suggest a high abundance for specific elements, but these are not strictly required by the fit with the present data quality. Lastly, we point out that the diffuse galactic emission underlying the source has a soft spectrum, resulting in some additional systematic uncertainty in the source morphology and soft band spectral parameters. The source appears to have a relatively mild peak excess in Fig. 9. We carried out an additional check of the soft band spectrum by changing the background extraction regions to an annulus encompassing only background counts from the immediate source vicinity. This did not remove the need for the soft thermal component, though the confidence intervals of the fitted parameters did change, as expected. The hard band, however, is not affected significantly by the galactic background. Fe K emission line The high temperature plasma should also emit a strong ionized Fe line around 6.7 keV, near the limit of the binned energy range available. In order to analyze this line, in Fig. 10 we show the ungrouped spectrum around 6.7 keV, overplotted with the best fit model determined above (binning is applied for display purposes only). The default model clearly produces a line feature also seen in the data. We estimate its significance in a couple of ways. Accumulating net source counts over 6.5-6.9 keV after subtracting off the model continuum measured at the ends of this energy range, and comparing these counts with the total (background inclusive) counts in the same range, yields a simple Poisson signal:noise ratio of 2.4 and 3.1 for the line feature in the two Chandra data sets, respectively. 
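The "simple Poisson signal-to-noise" estimate used here amounts to dividing the net line counts by the square root of the total (background-inclusive) counts in the 6.5-6.9 keV band. A minimal sketch with placeholder counts (not the actual extracted values):

```python
# Hedged sketch: Poisson S/N for an emission line -- net counts above the
# locally interpolated continuum over 6.5-6.9 keV, divided by sqrt(total counts).
# The numbers are invented for illustration.

from math import sqrt

def line_snr(total_counts, continuum_counts):
    net = total_counts - continuum_counts
    return net / sqrt(total_counts)

# Hypothetical example: 46 total counts in the band, 25 expected from the continuum
print(f"S/N = {line_snr(46, 25):.1f}")
```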
Another estimate can be obtained by parametrizing the continuum as a power-law locally, and then fitting the line as a Gaussian superposed on this. The energy range of 5-8 keV is used in order to estimate the continuum, and we assume a Gaussian width σ fixed at 10 eV. The C-statistic is the appropriate one to use for ungrouped data (Arnaud, 1996). We find a line energy of 6.68(±0.03) keV and a 90% equivalent width range of 0.6-4 keV; in other words, the line is significantly detected (despite large uncertainties on its strength) and its center energy is consistent with highly ionized Fe at 6.7 keV, and not the neutral line energy of 6.4 keV. Comparison with prior archival data Detection of any flux variability can be important for understanding the nature of the source. The fluxes inferred from the two Chandra observations above are consistent with each other, and one must examine longer timescales. Table 2. X-ray joint spectral fit to the two Chandra archival data sets analyzed. The model is PHABS (WABS 1 *APEC 1 + WABS 2 *APEC 2 ), with abundance fixed to that of Lodders (2003). Errors correspond to 90% parameter uncertainties on a single interesting parameter, when not fixed. This model returns a statistic of χ 2 /dof=101.5/96. † The units of norm are 10 14 cm −5 . previous sections. We reduced and analyzed this archival data set in an identical manner to the 2009 observations. The extracted net spectrum is shown in Fig. 11. Fitting with the same absorbed double thermal model as in § 8.1 yields parameters consistent with those listed in Table 2 (though with much larger uncertainties), and a flux F 0.5−10 =1.06 +0.4 −0.2 ×10 −13 erg s −1 cm −2 corrected for Galactic absorption only. The 90% confidence interval overlaps with the flux determined from the 2009 observations ( § 8.1). Thus, there is no indication for strong flux variability on a timescale of several years either. A similar inference has also been reached by Chiang & Kong (2011) in a recent analysis of other observations (their source designation is CXOU J095552.8+694045). Our comparisons limit any flux variations to less than 20%. Discussion We have presented the highest resolution imaging of the nuclear regions of M82 to date in the mid-IR. The Subaru diffraction limit at wavelengths of 11.7 and 12.81 µm is 0. ′′ 36 and 0. ′′ 39, respectively. Compared to the previous best resolution observations (the 12.4 µm imaging with PSF of 1. ′′ 1 presented by TG92), our images have PSFs improved by factors of 3.1 and 2.8 at 11.7 and 12.81 µm, respectively, and we cover a core region larger by a factor of at least two. The referee also made us aware of a conference proceeding by Ashby et al. (1994), where an image at 11.7 µm with an angular resolution of 0. ′′ 6 is presented. The absolute astrometry of their image agrees to within ∼1 ′′ with our work, as well as with the works of AL95 and TG92. To our knowledge, no further details have been published. Multi-wavelength image comparisons show an anticorrelation between the observed stellar distribution (probed in the near-IR) with the distribution of warm dust that we probe. Mid-IR Near-IR 0.5-2 keV 3-7 keV This means that obscuration in these dusty regions is heavy enough to scatter even near-IR radiation. Our observations provide the best view of the base of the dusty superwind that is known to exist in this galaxy, and reveal several elongated features on projected physical scales of up to ℓ=120 pc at least. 
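The diffraction limits and resolution-improvement factors quoted below follow from simple arithmetic. The short sketch reproduces them under the assumption of an 8.2 m Subaru aperture and the familiar θ ≈ 1.22 λ/D criterion; the exact convention used by the authors is not stated, so the numbers are illustrative.

```python
# Hedged sketch: diffraction limit theta ~ 1.22 * lambda / D for Subaru,
# and the improvement factor over the 1.1 arcsec PSF of TG92.
# D = 8.2 m is an assumed value for the Subaru primary.

RAD_TO_ARCSEC = 206265.0
D_SUBARU_M = 8.2            # assumed primary mirror diameter
PREVIOUS_PSF_ARCSEC = 1.1   # 12.4 um PSF of TG92, as quoted in the text

for wl_um in (11.7, 12.81):
    theta = 1.22 * (wl_um * 1e-6) / D_SUBARU_M * RAD_TO_ARCSEC
    print(f"{wl_um} um: diffraction limit = {theta:.2f} arcsec, "
          f"improvement over TG92 = {PREVIOUS_PSF_ARCSEC / theta:.1f}x")
```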
If the mid-IR emitting dust is mixed in with and entrained in outflowing gas, then the travel time (t) is t = 5.9 ℓ 120 pc 200 km s −1 v × 10 5 yr. where a velocity of v=200 km s −1 identical to that found by Nakai et al. (1987) for the molecular gas is used as reference. This time period suggests recent energy input from young starbursts. The kinetic energy required to expel this dust is only a fraction of that channeled into the total gas mass, and so expulsion by SNe occurring at various locations around the ring of star-formation can easily account for this (Nakai et al., 1987). Using the integrated fluxes over the COMICS field-of-view alongside archival data at longer and shorter wavelengths, and assuming only a two temperature phase for the dust, we are able to describe the broad-band mid-IR diffuse emission with a T =160 K modified black body with a hot dust mass of ∼1000 M ⊙ , in addition to the 45 K cool dust component responsible for the far-IR emission. Assuming a standard gas:dust ratio of 100, the mass of the gas associated with the mid-IR emitting dust is then ∼10 5 M ⊙ . This is lower than the ionized gas mass of 2×10 6 M ⊙ (TH80; Willner et al. 1977) by a factor of about 20, meaning that the ionized gas may be the predominant environment for the hot dust that we are observing. The cooler dust, on the other hand, is likely to be tracing the distribution of molecular gas (TH80). More than 20 discrete sources are detected, and most are found to have extended profiles. Matching against radio catalogs suggests that we have resolved at least four (and tentatively, five) HII regions in the mid-IR for the first time. These HII regions have monochromatic continuum dust luminosities ranging from νL 11.7 µm ν =2×10 6 L ⊙ (radio ID 39.29+54.2) to 8×10 6 L ⊙ (41.17+56.2), consistent with being powered by embedded super star clusters similar to those seen in some other nearby galaxies (e.g. Galliano et al., 2008). As seen in Fig. 6, no source is detected at the position of the radio transient identified by Brunthaler et al. (2009) as SN2008iz, the brightest radio SNR in M82 over the past 20 years. The start of bright radio flaring activity is limited to the time interval of 2007 Oct 29 to 2008 Mar 24. Our observations were carried out one month after the final set of followup radio observations on 2009 Apr 04 reported by Brunthaler et al., when they found the source to have an integrated 22.2 GHz flux of 9.2±0.2 mJy. Our flux limits are F NeII <38 mJy and F N11.7 <18 mJy, assuming a point source of angular size equal to the diffraction limits in each filter. In general, there is little overlap between confirmed SNRs and mid-IR detections. Additionally, no mid-IR counterpart is detected at the location of the unusual radio transient found by Muxlow et al. (2010). The source appeared on radio images obtained within the period of 1-5 May 2009 -contemporaneous with our mid-IR observations -and was not present one week before. It is located ≈1 ′′ from the position of our source # 18 (see also Kong & Chiang 2009 for an identification of the X-ray counterpart). We estimate point source upper limits of 34 mJy and 15 mJy in the NeII and N11.7 filters. . The empty gray squares show the normalized counts profile before background subtraction. The filled black circles denote the normalized net profile, and these can be compared to the expected PSF from a point source (histogram). The source profile is extracted from the 3-7 keV image, and the PSF from ray-tracing of 4.5 keV photons. 
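The travel-time scaling used above is t = ℓ/v recast in convenient units; a minimal numerical check is given below.

```python
# Hedged sketch: verify that l = 120 pc and v = 200 km/s give t ~ 5.9e5 yr,
# i.e. the normalization of the travel-time relation quoted in the text.

PC_IN_M = 3.086e16
YR_IN_S = 3.156e7

def travel_time_yr(length_pc, speed_km_s):
    return (length_pc * PC_IN_M) / (speed_km_s * 1e3) / YR_IN_S

print(f"t(120 pc, 200 km/s) = {travel_time_yr(120, 200):.2e} yr")  # ~5.9e5 yr
```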
(Bottom Left ) X-ray spectrum and residuals to a fit with two absorbed thermal plasma models. Data from both analyzed observations (black and red for sequences 600735 and 600736, respectively) are included. The two thermal models are shown as the dotted lines. (Bottom Right ) Ungrouped spectrum around the Fe K line and best fit model from the left panel (rebinned to obtain a minimum 2σ significance for plotting only). No space observatory is foreseen to have a better resolving power than Subaru at ∼10 µm, though the Mid InfraRed Instrument (MIRI) onboard JWST should provide excellent sensitivity for only a modest loss in angular resolution at the same wavelengths (e.g. Wright et al., 2004). MIRI is also expected to have a field of view larger by a factor of ∼ >2 on a side, making source identification much more secure. On a longer timescale, the European/JAXA mission Spica (Nakagawa, 2004) will be crucial for deep searches of far-IR signatures of AGN activity at this source location. On the ground, the Gran Telecopio Canarias, with a primary mirror diameter of 10.4 m, can improve the resolution slightly to 0. ′′ 3 under good observing conditions. Even better resolution must await larger ground-based observatories such as the Extremely Large Telescope. This will not be diffraction-limited under natural seeing conditions and will require additional adaptive optics capabilities in the mid-IR to improve upon the results presented herein. Nature of AGN candidate source The puzzling radio source 44.01+59.6 has a significant NeII mid-IR detection (our source # 18 [I52.70+45.9]), and is also coincident with a source visible in high-resolution X-ray images (J095552.7), to well within our estimated absolute positional uncertainties. From Chandra data, it is immediately apparent that the X-ray spectrum is not consistent with that of AGN, which usually display broad-band power-laws with photon-indices ∼1.9 characteristic of radiatively-efficient accretion (e.g. Mateos et al., 2005). Low luminosity AGN with radiatively-inefficient flows or jets, too, usually display hard X-ray continua (e.g. Yu et al., 2010). Similarly, one can argue against direct association with other kinds of accreting sources such as X-ray binaries. The detection of a strong He-like Fe line instead of a neutral line centered on 6.4 keV implies the absence of cold reflecting matter, also atypical for an accreting source. Highly ionized lines have actually been attributed to heavily obscured AGN in some cases (e.g. Iwasawa et al., 2005;Nandra & Iwasawa, 2007, and references therein), though on larger scales. In such a scenario, the AGN itself is not readily apparent; rather, the visible spectrum is dominated by surrounding gas which is irradiated by the AGN along directions out of our line of sight. If this is true for the case of source J095552.7, the underlying accreting object (be it an AGN or an X-ray binary) could power the radio jet found by Wills et al. (1999) and also provide a ready source where accreting gas is in rotation (as inferred from the maser lines found by Seaquist et al. 1997). But this would also result in X-ray photoionization spectral features or even an ionized reflection continuum. An extra layer of optically-thick obscuration is then also required (in addition to the two that we detect) with an extreme covering factor very close to unity so as to completely hide the reflection continuum and any neutral Fe K line. 
This is not consistent with the detection of a jet which would be expected to decrease the covering factor by clearing away some surrounding matter. Finally, the 2-10 keV luminosity of 5×10 38 erg s −1 means that the source is unlikely to have a bolometric X-ray power which would place it in the ULX regime (e.g. Makishima et al. 2000) because of the steep spectra of the fitted thermal models. Thus, our analysis allows us to reach some firm conclusions as to what the source is not. The true source nature still remains uncertain, though, and in the following, we discuss plausible alternatives. In addition to our [NeII] image, the source appears most prominently as a compact object in hard X-rays. One possibility may then be that both the mid-IR and the hard X-ray components are associated with the SNR visible in the radio. To test this, we may compare the source power to that of Cas A, a powerful and young Galactic SNR. From Suzaku observations summed over the entire remnant, the 4-10 keV X-ray luminosity of Cas A is determined to be L CasA 4−10 keV ≈4×10 35 erg s −1 (Maeda et al., 2009). This is about 150 times smaller than the deabsorbed X-ray luminosity L 4−10 keV ≈6×10 37 erg s −1 that we find for our source by using the high temperature APEC component alone ( § 8.1). This requires that the electron density (n e ) in the X-ray emitting interstellar medium (ISM) around source # 18 be much larger than in the case of Cas A. n e may be determined by using the normalization values returned by the APEC 2 component and the emission volume which, in the scenario under consideration here, should correspond to the volume of the radio-emitting plasma. Fenech et al. (2008, see their Table 3) measure a diameter of 0.86 pc for the SNR, using which yields an electron density of n e ∼1.75(±0.3)×10 3 cm −3 . A low plasma filling factor would only push this value up. This is, indeed, much larger than typical ISM densities of 1-10 cm −3 relevant for Galactic SNRs. We also note that the present limit on any X-ray variability in the source ( ∼ <20% over seven years) derived from archival data comparisons in § 8.3 is consistent with the observed smaller flux changes in Cas A over similar timescales (e.g. Patnaude et al., 2010). The detected collimated radio jet may then suggest some kind of axisymmetry in the progenitor (e.g. a binary merger event). On the other hand, the mid-IR power for source # 18 (λ L λ (12.81 µm) =1.5×10 40 erg s −1 ) is in excess of the integrated 12 µm IRAS power of Cas A (λ L CasA λ (12 µm) =5×10 36 erg s −1 ; Saken et al. 1992) by a much larger factor of ≈3000. But given the unknown dust temperature, the fraction of freshly formed vs. ambient heated dust, and the fact that our NeII filter flux is likely to be dominated not by continuum but by the [NeII]λ 12.81µm line (which is known to be a strong cooling line for SNRs; e.g. Rho et al. 2008), further comparison of the mid-IR fluxes is difficult. The other possibility to consider for the X-ray thermal plasma components is that of a hot ISM phase in a compact star cluster. In fact, collisionally ionized plasmas with a wide range of temperatures have been previously inferred to exist within starburst galaxies (e.g. Iwasawa et al., 2005;Ranalli et al., 2008;Strickland & Heckman, 2009), though on larger scales. Fig. 12 shows the compiled radio-to-X-ray fluxes for source # 18. A mid-IR flux dominating the SED is consistent with an origin in a starburst, though this is not a unique solution. 
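The electron-density estimate described above can be sketched as follows, using the standard APEC normalization convention norm = 10⁻¹⁴/(4πD²) ∫ nₑn_H dV with n_H ≈ nₑ/1.2 for a fully ionized plasma. The normalization and distance entered below are assumed placeholder values rather than the fitted numbers of Table 2, so the output only illustrates the order of magnitude; a filling factor below unity raises the derived density, as noted in the text.

```python
# Hedged sketch: electron density from an APEC normalization, assuming the
# X-ray emitting volume equals the radio SNR volume (diameter 0.86 pc).
# norm and distance are placeholder assumptions, not the Table 2 values.

import math

MPC_CM = 3.086e24
PC_CM = 3.086e18

def electron_density(norm, distance_mpc, diameter_pc, filling_factor=1.0):
    D = distance_mpc * MPC_CM
    r = 0.5 * diameter_pc * PC_CM
    volume = (4.0 / 3.0) * math.pi * r**3 * filling_factor
    integral = norm * 4.0 * math.pi * D**2 / 1e-14   # Int(n_e * n_H dV)
    return math.sqrt(1.2 * integral / volume)         # n_e in cm^-3

print(f"n_e ~ {electron_density(2e-4, 3.5, 0.86):.0f} cm^-3")
```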
The detection of two distinct layers of absorption (§ 8.1) may also suggest a scenario which combines the above possibilities as follows. The hot X-ray thermal plasma and the radio counterpart could both be associated with a SNR, which is embedded within a host star cluster. The high column density affecting the APEC 2 component (Table 2) may easily be explained by strong local absorption within molecular clouds, for instance. The point-like profile of the hard X-ray data is also consistent with this. The cluster, on the other hand, could appear prominently in soft X-rays and also in the mid-IR. The comparatively low absorption affecting this component is then due to gas along the plane of the galaxy on larger scales. Further insight into the nature of this source will be possible if it can be isolated at longer wavelengths characterizing the peak of typical star-formation SEDs (this may be within reach of JWST), or through detection of spectral features in the sub-mm with ALMA. Subsequent long Chandra exposures would be useful to place tighter constraints on X-ray variability. Meanwhile, the question of whether M82 hosts an AGN or not remains to be answered. Fig. 12. Broad-band SED of source # 18. The radio data are the 1.3 cm to 74 cm fluxes (and one limit) from Allen & Kronberg (1998). In the mid-IR, we plot our NeII filter flux, and a 3σ N11.7 detection limit measured assuming Poisson statistics within an aperture equal in size to the diffraction limit. The X-ray regime shows the unfolded spectrum relevant for the model fitted to the observed data in Fig. 10. PG acknowledges a JAXA International Top Young Fellowship and a RIKEN Foreign Postdoctoral Researcher Fellowship during parts of this work. This research is supported in part by a JSPS Grant in Aid Kakenhi number 21740152. PG also thanks S. Konami and P. Ranalli for discussions. The referee is acknowledged for their useful comments. Based on data collected at Subaru Telescope and obtained from the SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan. Data archives for the Chandra and HST missions were essential for the multi-wavelength comparisons presented herein. Subaru observatory staff are thanked for their assistance in the observations.
2011-01-25T21:00:08.000Z
2011-01-25T00:00:00.000
{ "year": 2011, "sha1": "d74b3897c72d8ea7b2c6021c446227202204dcd3", "oa_license": null, "oa_url": "https://academic.oup.com/pasj/article-pdf/63/sp2/S505/17269983/pasj63-S505.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "d74b3897c72d8ea7b2c6021c446227202204dcd3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
10841164
pes2o/s2orc
v3-fos-license
Partial dissociation in the neural bases of VSTM and imagery in the early visual cortex Visual short-term memory (VSTM) and visual imagery are believed to involve overlapping neuronal representations in the early visual cortex. While a number of studies have provided evidence for this overlap, at the behavioral level VSTM and imagery are dissociable processes; this begs the question of how their neuronal mechanisms differ. Here we used transcranial magnetic stimulation (TMS) to examine whether the neural bases of imagery and VSTM maintenance are dissociable in the early visual cortex (EVC). We intentionally used a similar task for VSTM and imagery in order to equate their assessment. We hypothesized that any differential effect of TMS on VSTM and imagery would indicate that their neuronal bases differ at the level of EVC. In the “alone” condition, participants were asked to engage either in VSTM or imagery, whereas in the “concurrent” condition, each trial required both VSTM maintenance and imagery simultaneously. A dissociation between VSTM and imagery was observed for reaction times: TMS slowed down responses for VSTM but not for imagery. The impact of TMS on sensitivity did not differ between VSTM and imagery, but did depend on whether the tasks were carried concurrently or alone. This study shows that neural processes associated with VSTM and imagery in the early visual cortex can be partially dissociated. Introduction Visual short-term memory (VSTM) and visual imagery are both functions of the visuo-spatial working memory (VSWM) (Baddeley, 2003;Cornoldi and Vecchi, 2003). While VSTM refers to processes associated with maintaining visual information beyond the presentation of the target stimulus (e.g. Baddeley, 2003), imagery is a form of sensory experience occurring in the absence of perceptual input (Kosslyn, 1994). VSTM and visual imagery are closely related, as imagery enables the content of memory to be consciously experienced in the "mind's eye". Indeed, there is much evidence to indicate that they both involve visual cortical neurons which encode incoming sensory information (Sparing et al., 2002;Slotnick et al., 2005; see e.g. Postle (2006) for review; Serences et al., 2009;Harrison and Tong, 2009;Van de Ven et al., 2012;Albers et al., 2013). For example, imagery of visual stimuli is associated with neuronal firing in the same neurons that are activated by the visual presentation of those stimuli (Kreiman et al., 2000), and the content of VSTM can be decoded from the activity patterns of the visual cortex (Harrison and Tong, 2009;Serences et al., 2009;Emrich et al., 2013). Thus imagery, VSTM, and the perception of external visual input all make use of overlapping resources in the visual cortex (cf.; "sensory-recruitment" model of working memory; e.g. Awh and Jonides, 2001;Postle, 2006;D'Esposito, 2007). On the cognitive level, this overlap has been explained in terms of the visual cache, a component of the visuospatial sketchpad, which is involved in the maintenance of both VSTM and imagery content (Logie, 1995). In his model, Logie (1995) proposed a functional structure of the visuospatial sketchpad (VSSP), a structure previously elaborated by Baddeley and Hitch (1974). In this model, the VSSP comprises two subsystems; the inner scribe, a system responsible of actively rehearsing information that are spatially organized and/or in motion; and the visual cache, a system responsible of passively holding visual information related to forms and colors. 
Whereas VSTM representations based on a single input object are held in the visual cache, mental imagery generation depends on the visual buffer (Kosslyn and Thompson, 2003). Contents lists available at ScienceDirect journal homepage: www.elsevier.com/locate/neuropsychologia In this view, after a mental image has been generated, its maintenance relies on the visual cache (Borst et al., 2012) where also VSTM content is actively rehearsed (Coleman and LeFevre, 2002). In addition, imagery requires the engagement of the central executive that enables the mental image to be maintained in conscious experience (Logie, 1995). Even though VSTM and mental imagery may share neural/ cognitive resources, they are nevertheless two distinct psychological processes that can be dissociated behaviorally. For example, dynamic visual noise presented concurrently with visual imagery generation impairs subjects' performance in imagery tasks, whereas no impairment is found when visual noise is presented during a VSTM task (Quinn and McConnell, 1996;Andrade et al., 2002;Zimmer and Speiser, 2002). Recently, VSTM and mental imagery have also been dissociated with respect to their impact on the detection of concurrently perceived visual stimuli (Saad et al., 2013a). Whereas imagery suppresses external visual information (cf. Perky (1910)), VSTM appears to facilitate the encoding of matching visual input (e.g. Soto et al., 2005). These dissociations raise the questions of where in the brain their processing diverges. The objective of the present study was to assess whether this divergence is present in the early visual cortex (EVC). This was tested using transcranial magnetic stimulation (TMS) as a probe of visual cortical activation state (e.g. Van de Ven and Sack, 2013;Silvanto and Muggleton, 2008;Sandrini et al., 2011;Cattaneo et al., 2010). We assessed two aspects of participants' performance: 1) accuracy of the memory for the original cue (conventional VSTM task); and 2) accuracy of the mental image. In the "alone" condition, either VSTM or imagery was assessed, of which participants were informed before each block. In the "concurrent" condition, participants were informed only at the end of the trial whether they would be asked to perform a comparison task relative to their mental imagery, or relative to their memory of the original memory cue. Thus maintenance of the original memory cue as well as engagement in imagery was required on all trials. The concurrent condition was carried out to understand how imagery and VSTM might interact when they are engaged simultaneously. In order to equate the assessment of VSTM and mental imagery we intentionally used the same task for both conditions. The difference, however, resides in the instructions. To monitor participant's compliance with task instructions we administered a questionnaire at the end of the experiment addressing cognitive processes used during VSTM and imagery. We hypothesized that any differential effect of TMS on VSTM and imagery would indicate that their neuronal bases differ at the level of EVC. Participants 23 participants (9 females; mean age 25 years) with normal vision participated in the experiment. All were naïve to the aim of the study and provided written informed consent, in agreement with the Declaration of Helsinki and approved by the ethics committee of Aalto University. Participants received a monetary reward for their participation. 
Stimuli Stimuli and task were controlled by E-prime v2.0 (Psychology Software Tools Inc., Pittsburgh, USA; http://www.pstnet.com/eprime.cfm). All stimuli were sinusoidal luminance-modulated gratings (with a diameter of 5° of visual angle; generated with Matlab), presented foveally from a viewing distance of 57 cm on a gray background. The spatial frequency of the gratings was 1.44 cycles/degree. All gratings were vertical in orientation. The Memory/imagery cues had a Michelson contrast of either 0.2, 0.3, 0.4, or 0.5, and participants needed to hold the contrast in memory/imagery (specific instructions are described in the next section). The test cue presented at the end of each trial had a Michelson contrast that differed from the Memory/imagery contrast by either ±0.06 ("difficult" difficulty level) or ±0.09 ("easy" difficulty level). (For example, for a memory/imagery cue of 0.2, the test cues could be either 0.14 or 0.26 Michelson contrast for the ±0.06 difference condition, and 0.11 or 0.29 Michelson contrast for the ±0.09 difference condition.) The mask was a uniformly black circle with the same diameter as the gratings (as used in previous studies; Saad and Silvanto, 2013a,b). The stimuli were presented on a 22-inch screen with 1600 × 1200 pixel resolution. Experimental sessions Three types of blocks were run: 1. VSTM alone: assessment of memory for the original memory cue. In these blocks, participants were instructed to hold the cue contrast in memory, without engaging in imagery throughout the trial. In other words, participants were not required to maintain a conscious mental image throughout the delay period (that is, phenomenal experience of the qualia of the memory content was not required during maintenance). At the end of the trial, they were required to judge whether the test cue was of lower or higher contrast than the original memory cue. We refer to this block as the "VSTM alone" block. 2. Imagery alone: assessment of the accuracy of the conscious mental image. In these blocks, participants were asked to maintain a conscious mental image of the original imagery cue throughout the delay period, and the mental image contrast was compared to the test cue. In other words, the contrast judgment was based on an online inspection of the mental image. As in these blocks the actual memory for the original memory cue was never assessed, we refer to it as the "Imagery alone" block, as strictly speaking it does not contain a conventional VSTM task. Thus the main difference between the two tasks is that, in "Imagery alone", participants are asked to use visual imagery to perform the task, by keeping the imagery cue in phenomenal experience throughout the delay period. This was not required in the "VSTM alone" condition. 3. Concurrent VSTM and imagery: assessment of either VSTM accuracy or the accuracy of the conscious mental image (similar to Slotnick et al., 2012; Saad & Silvanto 2013a). In these blocks, participants were informed at the end of the maintenance period whether memory or imagery would be assessed. Within the same day of testing, two sub-sessions were carried out for each participant. In session 1, conditions 1 and 2 were run (see Fig. 1A). For both conditions, 2 blocks of 32 trials were run for both TMS conditions (Early Visual Cortex, Sham). In Session 2, condition 3 was run in 4 blocks of 32 trials for both TMS conditions (see Fig. 1B). The order of sessions was counterbalanced, as was the order of blocks within each session.
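For illustration, a minimal Python sketch of a grating stimulus of the kind described above (vertical sinusoidal grating, 1.44 cycles/degree, 5° diameter, specified Michelson contrast) is given below. The pixel scale and screen parameters are assumed values, and the original stimuli were generated in Matlab rather than with this code.

```python
# Hedged sketch: a vertical sinusoidal luminance grating at a given Michelson
# contrast on a mid-gray background.  Pixel scale is an assumed value.

import numpy as np

def make_grating(size_px=256, px_per_deg=51.2, cycles_per_deg=1.44,
                 contrast=0.3, mean_lum=0.5):
    """Return a 2D array in [0, 1]; Michelson contrast = (max-min)/(max+min)."""
    x = np.arange(size_px) / px_per_deg                 # degrees of visual angle
    grating_row = np.sin(2.0 * np.pi * cycles_per_deg * x)
    img = mean_lum * (1.0 + contrast * grating_row)[np.newaxis, :].repeat(size_px, axis=0)
    # circular aperture of 5 deg diameter; outside it, keep the gray background
    yy, xx = np.mgrid[0:size_px, 0:size_px]
    r_deg = np.hypot(xx - size_px / 2, yy - size_px / 2) / px_per_deg
    img[r_deg > 2.5] = mean_lum
    return img

g = make_grating(contrast=0.3)
michelson = (g.max() - g.min()) / (g.max() + g.min())
print(f"achieved Michelson contrast ~ {michelson:.2f}")
```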
Each block contained Memory/imagery main cues of four different contrasts (0.2, 0.3, 0.4, and 0.5 Michelson contrast). The contrast difference between the test cue and the main cue was either 70.06 or 70.09. General procedure Each trial began with a fixation point (1 s), followed by the Memory/imagery cue (300 ms). The cue was a vertical sinusoidal grating and had a Michelson contrast of either 0.2, 0.3, 0.4, or 0.5. To avoid any afterimage induction by this cue, a mask (a uniformly black circle, appeared after the offset of the cue for 100 ms). The imagery/memory contrast (depending on the experimental condition) was then assessed by a forced choice task. On each trial, participants were asked to judge whether the contrast of the test cue was lower (press 1) or higher (press 2) than the original memory cue or their mental image. TMS stimulation and site localization TMS was delivered using a Magstim rapid2 (Magstim super Rapid Plus, Magstim company, UK) using a figure-of-eight 70-mm air-cooled coil. The coil was held using a custom-made magic-arm and placed tangentially on the skull. To stimulate the early visual cortex, the coil was placed 2 cm above the inion and 0.5 cm laterally on the right hemisphere, and the coil position was slightly adjusted such that participants reported phosphene in the location where stimuli would appear in the main experiment (see e.g. Pascual-Leone and Walsh (2001), Campana et al. (2002) for this approach). Participants who did not perceive phosphenes (n¼ 9) were stimulated with the above coordinates. For Sham TMS, the coil was placed above the central parieto-occipital (POz) electrode region, and foam was used to increase the distance between the coil and the skull. For half of the participants, MRI scans were available and these were used to confirm that the location of stimulation was in the vicinity of the calcarine sulcus via a neuro-navigation system. Phosphene thresholds were measured for each participant using a Modified Binary Search Paradigm (MOBS; Tyrrell and Owens, 1988). TMS intensity was adjusted for each participant such that an intensity of 90% of the phosphene threshold was used. Participants who did not perceive phosphenes were stimulated with an intensity of 65% of machine output (as used in a prior study; Saad and Silvanto, 2013b). None of the participants reported phosphenes during the experiment. On each trial, a pulse train (consisting of five pulses applied at 10 Hz; i.e, pulse gap of 100 ms; (e.g. Ashbridge et al., 1997;Campana et al., 2002Campana et al., , 2006Muggleton et al., 2003;2013b) was applied 2.5 s after the onset of the 4 s maintenance period. This specific time window was used in order to allow sufficient time for the generation of the mental image and to avoid a close temporal proximity of the TMS pulse train with the test cue. Questionnaire assessing task strategy At the end of the experiment, participants were asked to fill a questionnaire assessing their cognitive strategies during the experiment and ensure that instructions were followed. The questions were as follows: For VSTM: "Please describe in detail how you memorized the original cue; what strategy or process did you follow until asked to judge your memory of the cue". For imagery: "Please describe in detail how you formed the mental image and what strategy or process did you follow until asked to judge your image of the cue?". For Condition 3, participants were asked the following question: "Where you able to memorize and make a mental image of the main cue?" 
Please describe in detail how you memorized the memory/imagery cue and made a mental image of it; what mental strategy or process did you follow until asked to judge your memory/imagery of the cue? Representative responses for VSTM and imagery were as follows: VSTM: "I memorized the memory cue and waited until the test without doing any mental process; somehow it was unconscious. At test, I tried to dig out the memory". Imagery: "I made a mental image of the imagery cue, kept looking at it in my head through focusing and repetition the whole time, until the test". On the basis of this questionnaire, five participants were excluded from the data analysis. Two participants were excluded because they had not used imagery when required. Three participants reported using the same maintenance process across all sessions (e.g. using imagery even in VSTM alone blocks and vice versa). The rest of the participants (n = 18) reported having followed task instructions. Results Three participants were removed because their reaction times were more than 3 SD above the mean across conditions; therefore both the reaction time analysis and the sensitivity analysis were conducted on 15 participants. Fig. 2(a-d) shows the mean (n = 15) sensitivity (d′) for VSTM and imagery as a function of TMS site and difficulty level. We initially carried out an ANOVA into which all independent variables were entered. This 2 × 2 × 2 × 2 ANOVA, with task (imagery or VSTM), condition type (alone or concurrent), TMS site (EVC or sham), and difficulty (easy or difficult), revealed a main effect of difficulty (F(1,14) = 63.62; p < 0.001), condition type (F(1,14) = 63.75; p = 0.02), and a two-way interaction between condition type and TMS (F(1,14) = 21.36; p < 0.001). None of the other main effects or interactions were significant. Overall effects of VSTM and imagery on sensitivity To understand the nature of these effects, we carried out post-hoc comparisons. In these t-tests we collated the data across tasks (imagery, VSTM) and difficulty levels (easy or difficult), as neither factor was involved in significant interactions in the ANOVA. These pairwise comparisons revealed that, in the alone condition, EVC-TMS enhanced sensitivity relative to sham (t(14) = −5.80; p < 0.002); in contrast, in the concurrent session, EVC-TMS did not modulate sensitivity relative to sham (t(14) = 1.3; p = 0.42). To investigate whether baseline performance (i.e. the sham TMS condition) was modified across condition type, we conducted a 2 × 2 × 2 ANOVA in which we entered condition type (alone or concurrent), task (VSTM or imagery), and difficulty level as independent variables. This revealed a significant effect of difficulty level (F(1,14) = 16.27; p = 0.001). However, no other main effect or interaction was found (highest p-value 0.9). In summary, EVC-TMS enhanced the sensitivity of both VSTM and imagery when conducted separately. In contrast, TMS had no impact on sensitivity in the concurrent condition. The baseline performance level of imagery and VSTM did not differ, and was not modulated by the task (i.e. alone or concurrent). Fig. 3(a-d) shows the mean (n = 15) median reaction time during VSTM and imagery conditions as a function of TMS site and contrast difficulty level. Fig. 1. Timeline of an experimental trial. At the start of each trial, participants were presented with a cue (a vertical grating).
The task involved maintaining the contrast of the grating by holding it in memory and/or forming a conscious mental image of it and maintaining it throughout the maintenance period. The TMS pulse train was applied 2.5 s after the onset of the maintenance period. At the end of each trial, participants were asked to judge the test cue contrast relative to the VSTM/imagery content (i.e. is the test cue of lower or higher contrast). (A) In "VSTM alone" and "Imagery alone" blocks, the assessments of memory and imagery were carried out in separate blocks. (B) In "concurrent" blocks, participants were informed at the end of the trial whether memory for the original memory cue would be assessed, or whether they should perform the contrast discrimination task relative to their conscious mental image. As task interacted with TMS site, we conducted separate ANOVAs for each task in order to investigate these effects. For VSTM, we conducted a 2 × 2 × 2 ANOVA with condition type (alone or concurrent), TMS site (EVC or sham), and contrast difficulty (easy or difficult). This revealed a main effect of condition type (F(1,14) = 26.4; p < 0.001), difficulty level (F(1,14) = 5.8; p = 0.031), and TMS site (F(1,14) = 9.23; p = 0.009). None of the interactions were significant. The main effect of TMS indicates that TMS induced a slowing down of RTs for VSTM. For imagery, we conducted a 2 × 2 × 2 ANOVA with condition type (alone or concurrent), TMS site (EVC or sham), and contrast difficulty (easy or difficult). This revealed only a main effect of condition type (F(1,14) = 28.5; p < 0.001). None of the other interactions were significant. Thus TMS had no effect on RTs for imagery. In sum, these results show that TMS applied at EVC increased reaction times relative to sham in the VSTM task. No such effect was found for imagery. Discussion The aim of this study was to examine whether the neural bases of VSTM and imagery are dissociable in the early visual cortex. VSTM and imagery differed in terms of the level of phenomenal awareness of the memory/imagery content during the maintenance period. In "imagery" conditions, participants were asked to perform the task by keeping the imagery cue in phenomenal experience throughout the delay period; this was not required in the "VSTM alone" condition. Our results can be summarized as follows: in the "alone" condition (i.e. when participants knew in advance of each block whether VSTM or mental imagery would be assessed at the end of the trial), TMS over the early visual cortex increased the sensitivity of both VSTM and imagery. In contrast, TMS had no effect on sensitivity in the "concurrent" condition (where both VSTM and imagery maintenance was required on each trial). A dissociation between VSTM and imagery was present in the reaction times. Whereas TMS increased reaction times for VSTM, no such effect was found in the imagery task. Condition type did not interact with the impact of TMS on RTs. Thus while the impact of TMS on sensitivity was similar for VSTM and imagery (but differed between alone and concurrent conditions), the impact of TMS on reaction times was different for VSTM and imagery, independently of condition type.
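As a reference for the sensitivity measure reported above, the following is a minimal sketch of a standard d′ computation for the two-alternative contrast judgment. The trial counts are invented, and the correction applied here is one common convention, not necessarily the authors' exact procedure.

```python
# Hedged sketch: sensitivity (d') as z(hit rate) - z(false-alarm rate), with a
# log-linear correction to avoid infinite z-scores at rates of 0 or 1.
# The counts below are invented for illustration.

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical block of 32 trials: 16 "higher" and 16 "lower" test cues
print(f"d' = {d_prime(13, 3, 5, 11):.2f}")
```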
The facilitatory effect of TMS on sensitivity in the "alone" condition is consistent with previous findings on the overlap of imagery and VSTM in the EVC (Kosslyn et al., 2006, Gains et al., 2009, and prior demonstrations of TMS-induced facilitations in VSTM and imagery paradigms (Cattaneo et al., 2012;Silvanto and Cattaneo, 2010;Silvanto and Soto, 2012). For example, phosphene studies have shown TMS to facilitate the features contained in both imagery (Sparing et al., 2002;Cattaneo et al., 2011) and in VSTM . However, the results of this study beg the question of why the effects on sensitivity and reaction times were qualitatively very Error bars indicate 7 SEM from which between-subjects variance has been removed. different. It is generally assumed that the impact of TMS on these two measures occurs via the same mechanism. TMS is believed to act by indiscriminately activating neurons in the targeted region, thereby adding noise to the highly organized pattern of neural activity associated with perceptual processes (see e.g. Walsh et al., 2003;Ruzzoli et al., 2010). This can slow down reaction times, as more time is needed to accumulate the necessary level of evidence required for the discrimination judgment, due to the increased amount of noise. Induction of noise can also reduce sensitivity by reducing the quality of the sensory representation on which the discrimination judgment is based. Of these two measures, reaction times are generally more sensitive to TMS-induced disruption, possibly because even small amounts of noise can slow down evidence accumulation, whereas the sensory signal might have sufficient redundancy to deal with low noise levels . In this view, whenever accuracy is reduced, an effect on reaction times should also be observed, as the latter is more susceptible to disruption. Our results are inconsistent with this, as TMS induced a general slowing down of RTs only for VSTM (for both alone and concurrent condition), whereas it facilitated sensitivity for both VSTM and imagery, but only in the "alone" condition, To account for these results, it thus appears to be necessary to postulate that reaction times and sensitivity reflect distinct neural processes. What might these be? Successful performance in the discrimination task consisted of at least two components: firstly, maintaining an accurate memory/imagery representation, and secondly, accessing it for conscious inspection for comparison against the test cue. The key difference between items in VSTM and imagery is that the latter are already in the conscious domain (Logie, 1995), and thus do not require a separate stage of retrieval before they can be compared with the test cue. In contrast, VSTM content needs to be consciously accessed for the discrimination task to be performed. TMS in the present study had two distinct effects: one on the actual memory/imagery representations (reflected in enhanced sensitivity, and found for both VSTM and imagery), and a second on the process of conscious retrieval (reflected in reaction times). It might be that for the latter, TMS had no impact in the imagery task because the mental image is already in the conscious domain and therefore such retrieval is not needed. However, VSTM content needs to be re-accessed in order for the discrimination task to be performed. In this study VSTM maintenance did not entail efforts in regenerating an image of the cue once the iconic memory had faded. 
For this reason, the VSTM task (but not the imagery task) involves a separate stage of retrieval of accessing memory content and the ease of this process might have been affected by TMS. Interestingly, whereas EVC TMS facilitated VSTM and imagery in the "alone" condition, it had no impact when they were carried out concurrently. This indicates that the memory/imagery trace is in a different neural state when the two are engaged simultaneously, compared to "VSTM alone" and "Imagery alone" conditions. One possibility is that, during simultaneous VSTM and imagery, the underlying memory trace on which both items are based is stronger. It could be that with sufficiently strong representation, TMS is no longer able to enhance it further. However, what argues against this explanation is that baseline level of performance of either VSTM or imagery was not higher in the "concurrent" condition relative to the "alone" condition, which this view would predict. (C) VSTM in "concurrent" condition. (D) Imagery in "concurrent" condition. TMS significantly slowed down reaction times for VSTM but not for imagery. The Error bars indicate SDs from which between-subjects variance has been removed (Loftus and Masson, 1994). Another possibility is that VSTM and imagery involve different representations and that during the maintenance period, these interact. If the imagery representation is derived from the VSTM representation, then one would expect VSTM content to constantly update the mental image. This interaction could be bidirectional, with the mental image itself strengthening the underlying VSTM trace. An interaction between VSTM and imagery might have modulated the nature of both the VSTM and imagery traces, rendering them both differentially susceptible to TMS relative to the "alone" condition. In this view, VSTM and imagery would be based on partly distinct representations. In summary, the key finding of the present study is that TMS had a differential impact on the reaction times of VSTM and imagery, dissociating these processes at the level of the early visual cortex. While the current literature often emphasizes the visual cortical overlap in neural resources for VSTM and imagery, our study demonstrates that differences between these two cognitive functions exist not only at the "high" level of executive functions (e.g. Logie, 1995), but also at the level of the visual representations.
2016-10-25T01:17:25.114Z
2015-08-01T00:00:00.000
{ "year": 2015, "sha1": "2c02e198f645d9f81c449aa9ff2238046ed15a06", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.neuropsychologia.2015.05.026", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "eeb4a24484d271d3f9ef37b6aa4b30863a9ef03f", "s2fieldsofstudy": [ "Psychology", "Biology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
92100644
pes2o/s2orc
v3-fos-license
Global relative species loss due to first‐generation biofuel production for the transport sector Abstract The global demand for biofuels in the transport sector may lead to significant biodiversity impacts via multiple human pressures. Biodiversity assessments of biofuels, however, seldom simultaneously address several impact pathways, which can lead to biased comparisons with fossil fuels. The goal of the present study was to quantify the direct influence of habitat loss, water consumption and greenhouse gas (GHG) emissions on potential global species richness loss due to the current production of first‐generation biodiesel from soybean and rapeseed and bioethanol from sugarcane and corn. We found that the global relative species loss due to biofuel production exceeded that of fossil petrol and diesel production in more than 90% of the locations considered. Habitat loss was the dominating stressor with Chinese corn, Brazilian soybean and Brazilian sugarcane having a particularly large biodiversity impact. Spatial variation within countries was high, with 90th percentiles differing by a factor of 9 to 22 between locations. We conclude that displacing fossil fuels with first‐generation biofuels will likely negatively affect global biodiversity, no matter which feedstock is used or where it is produced. Environmental policy may therefore focus on the introduction of other renewable options in the transport sector. | INTRODUCTION Over the last several decades, various national and local incentives have promoted the use of renewable energy sources as a step toward more sustainable energy use. In major renewable energy markets such as the US, Brazil and the EU, bioenergy from biomass is the most important renewable energy source, and further growth is expected in all sectors including the transport sector (IEA, 2018). However, an increasing demand for biomass can only partly be met by intensifying existing agriculture, and will thus require expansion of the global agricultural area (Beringer, Lucht, & Schaphoff, 2011;Helmut et al., 2013). A potential downside of such expansion is the potential loss of species when natural vegetation is transformed into croplands (Dale, Kline, Wiens, & Fargione, 2010;Elshout, Zelm, Karuppiah, Laurenzi, & Huijbregts, 2014;Strona et al., 2018). Additionally, expansion or intensification of agricultural land use may require the extraction of extra surface water to irrigate the feedstocks (Gerbens-Leenes, Hoekstra, & Meer, 2009). Therefore, biofuel production may negatively affect the freshwater biodiversity as well as the wetland species that depend on surface water (Verones, Pfister, Zelm, & Hellweg, 2017;Vörösmarty et al., 2010). Furthermore, to provide fertile soils, the removal of natural biomass and the disturbance of the original soil carbon dynamics (e.g. due to tillage) will induce the release of greenhouse gases (GHGs) into the atmosphere (Searchinger et al., 2008). Additional GHGs are emitted during crop cultivation as a result of farm machinery use, cropland fertilization and irrigation, and other processes that require fossil fuels (Lal, 2004;Snyder, Bruulsema, Jensen, & Fixen, 2009). 
Various studies have provided evidence that switching to firstgeneration biofuels may effectively result in an increase in GHG emissions (Don, Osborne, & Hastings, 2011;Fargione, Hill, Tilman, Polasky, & Hawthorne, 2008;Hoefnagels, Smeets, & Faaij, 2010;Immerzeel, Verweij, Hilst, & Faaij, 2013;Searchinger et al., 2008), and could thereby contribute to climate change rather than reduce it. According to Verones, Moran, Stadler, Kanemoto, and Wood (2017), land use, water use and GHG emissions are the three main drivers of ecosystem damage. Hence, when assessing the impact of displacing fossil fuels with biofuels on biodiversity, it is important to consider all three drivers. Previously, the global impact of (agricultural) land transformation on biodiversity has been quantified, typically based on species-area relationships (De Baan, Mutel, Curran, Hellweg, & Köllner, 2013;Chaudhary, Verones, Baan, & Hellweg, 2015;Schmidt, 2008). To date, only a few studies have applied such models to the case of biofuels. Chaudhary et al. (2015) analysed biodiversity impacts of bioethanol production in different areas of the world showing that sugarcane production in Brazil results in a greater species loss than sugar beet production in France and maize (grain or stover) production in the USA. However, they did not address the additional impacts of water use and GHG emissions on biodiversity. Danielsen, Beukema, and Burgess (2009) compared species richness in natural tropical ecosystems with species richness in oil palm plantations to quantify the impact of oilpalm-related land transformation. While they also estimated the CO 2 emissions related to land transformation, they did not quantify the impact of climate change on biodiversity. Strona et al. (2018) concluded that large-scale expansion of oil palm cultivation in Africa will have unavoidable negative effects on primates, as there are very few areas that combine a high productivity with low biodiversity importance. Gibon, Hertwich, Arvesen, Singh, and Verones (2017) and Van Zelm et al. (2014) carried out comprehensive assessments of the impacts of GHG emissions and land use (along with acidification and toxicity, but no water use) related to electricity generation and wood-based biofuel production, respectively. A study that assesses the biodiversity loss related to first-generation biofuel production worldwide is currently lacking. The goal of the present study was to quantify the impact on global relative species richness of current first-generation biofuel production. The selected biofuels included bioethanol from corn and sugarcane, a potential replacement for fossil petrol, and biodiesel from rapeseed and soybean, an alternative to fossil diesel. The focus area included predominant biofuel-producing countries, namely, the USA (corn and soybean); Brazil (soybean and sugarcane); China (corn); and several European countries including Austria, France, Germany, Italy and Poland (all rapeseed). We assessed the three most important stressors: (a) habitat loss due to land use, (b) habitat loss due to water use and (c) climate change due to GHG emissions. For GHG emissions we not only included the potential species loss in the current situation, but also in future years, as GHG emissions are not directly removed from the atmosphere. We used a default of species loss integrated over a time horizon of 100 years. 
We analysed two scenarios where biofuels are being produced respectively with and without accounting for the conversion of natural grassland or forest. The scenario "without land conversion" accounts for potential global species loss in the current situation due to cropping activities (e.g. irrigation, fertilizer application) and land occupation compared to the natural state. The scenario "with land conversion" adds biodiversity impacts due to initial loss of carbon after land conversion and the recovery time required for the cropland to go back to the natural state. | MATERIAL AND METHODS The biodiversity impact related to biofuel production is expressed as the global potentially disappeared fraction (PDF) of species per MJ of bioenergy produced every year. We quantified this potential global loss of species due to biofuel production by using the LC-IMPACT method . LC-IMPACT distinguishes itself from other life cycle impact assessment methods including the ReCiPe method , which typically quantified potential species losses at the local scale. The total biodiversity impact was divided in two components, i.e. occupation and transformation. Biodiversity impacts were allocated between the biofuels and by-products (e.g., corn stover, sugarcane bagasse) based on their respective market values. The allocation factors were collected from Wang, Huo, and Arora (2011) and are shown in Table S1. Throughout our analysis, we assume that natural vegetation (either grassland or forest) would be the counterfactual to the croplands being transformed and occupied for feedstock cultivation. | Occupation The impact of land occupation from crop x cultivated in location i under management strategy j (I occ,x,i,j in PDF·yr MJ −1 ) was calculated as the sum of the fraction of species lost due to habitat loss, water stress, and GHG emissions: where CEF is the crop-to-energy conversion efficiency (in MJ m −2 yr −1 ); BF HL,occ is the terrestrial biodiversity impact factor for species loss caused by land occupation (in PDF m -2 ); W occ is the amount of water used during feedstock cultivation (in m 3 m −2 yr −1 ); BF WS is the biodiversity impact factor for species loss caused by water stress (in PDF m −3 ); M occ,GHG is the GHG emission during biofuel production (in kg CO 2 eq m −2 yr −1 ); and BF GHG is the terrestrial biodiversity impact factor per unit of GHG emission (in PDF yr kg CO 2 eq −1 ). The CEF was calculated as, where Y is the crop yield (in kg crop m −2 yr −1 ); CBF is the crop-to-biofuel conversion factor (in kg biofuel kg crop −1 ); and EC is the biofuel energy content (in MJ kg biofuel −1 ). The BF GHG is calculated as, where IAGTP is the time-integrated absolute global temperature potential of 1 kg of CO 2 emitted (°C·yr kg CO 2 eq −1 ), and EF terr is the effect factor representing the increase in global PDF due to an increase in global mean temperature (PDF °C −1 ). The IAGTP varies with the time horizon. We used a 100-year time horizon as default and applied the long-term effect at a 1,000-year time horizon as a sensitivity check. 
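The occupation impact described above lends itself to a direct numerical translation. The following Python sketch only illustrates the bookkeeping, under the assumption that the three stressor contributions (habitat loss, water stress and GHG emissions) are summed per m2 and year and then divided by the crop-to-energy conversion efficiency CEF, which is what the verbal description and the stated units suggest; all function and variable names, as well as the example numbers, are mine and not values from the study.

```python
def cef(yield_kg_per_m2_yr, crop_to_biofuel_kg_per_kg, energy_mj_per_kg):
    """Crop-to-energy conversion efficiency CEF (MJ m-2 yr-1) = Y * CBF * EC."""
    return yield_kg_per_m2_yr * crop_to_biofuel_kg_per_kg * energy_mj_per_kg

def bf_ghg(iagtp_degc_yr_per_kg, effect_pdf_per_degc):
    """Biodiversity factor per kg CO2-eq (PDF yr kg-1) = IAGTP * EF_terr."""
    return iagtp_degc_yr_per_kg * effect_pdf_per_degc

def occupation_impact(cef_mj_m2_yr, bf_hl_occ, w_occ, bf_ws, m_occ_ghg, bf_ghg_val):
    """Occupation impact I_occ (PDF yr MJ-1): habitat loss, water stress and GHG
    contributions per m2 and year, scaled by the energy delivered (assumed additive)."""
    per_m2_yr = bf_hl_occ + w_occ * bf_ws + m_occ_ghg * bf_ghg_val
    return per_m2_yr / cef_mj_m2_yr

# Hypothetical input values, chosen only to exercise the functions
cef_val = cef(yield_kg_per_m2_yr=0.9, crop_to_biofuel_kg_per_kg=0.4, energy_mj_per_kg=26.8)
bf = bf_ghg(4.76e-14, 0.037)   # 100-year horizon factors quoted later in the text
print(occupation_impact(cef_val, bf_hl_occ=2e-16, w_occ=0.05, bf_ws=1e-15,
                        m_occ_ghg=0.3, bf_ghg_val=bf))
```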
| Transformation The biodiversity impact related to transformation (I trans,x,i,j in PDF yr MJ −1 ) was calculated as the sum of species lost caused by initial GHG emissions directly after natural land conversion and the habitat loss due to destruction of the original ecosystem: where M trans,GHG is the GHG emission resulting from land transformation (in kg CO 2 eq m −2 ); BF HL,trans is the biodiversity impact factor per m 2 of transformed land (in PDF yr m −2 ); and PT is the plantation time (in years). The default plantation period was set to 30 years, which means that we allocated 3.3% (1/30) of the land conversion impacts to the amount of crops produced in a year. As a sensitivity check, we also calculated transformation impacts for a plantation period of 100 years. | Crop data Locations of crop cultivation were collected from SPAM (http://mapspam.info), a model that simulates agriculture at a resolution of 10 km by 10 km at the equator and reduces grid-cell sizes as the distance to the equator increases. It distinguishes among four farm management strategies, which were reduced to three strategies by combining the farms under low input, rain-fed management and those under subsistence, rain-fed management into one low input-no irrigation category. The other two farm management strategies are high input-no irrigation and high input-irrigated. We assume that any agricultural arable land within a country producing the crops of interest can supply the feedstock for that country's biofuel production. Furthermore, we do not include international trade of biofuel feedstocks. Spatially explicit crop yields were collected from SPAM, while crop-to-biofuel conversion efficiencies and biofuel energy contents were based on the ecoinvent database (Weidema et al., 2013;Wernet et al., 2016) and its documentation (Jungbluth, 2007). | Carbon stock data The GHG emissions resulting from land transformation (M trans,GHG ) were calculated as the difference between the carbon and nitrogen stocks of the original, natural system (i.e. natural forest or natural grassland) and those of the cropland. GHGs from three different pools were considered: biomass carbon, soil organic carbon (SOC), and soil nitrogen. Spatially-explicit biomass carbon stocks of natural forests at a ~1 km by ~1 km resolution were collected from Gibbs, Yui, and Plevin (2014), and default biomass carbon stocks of different types of natural grasslands were collected from Ruesch and Gibbs (2008). The biomass carbon stock of the crops was set at zero, which is similar to previous work (Elshout et al., 2015). Spatially-explicit SOC stocks for both natural forests and croplands at a ~1 km by ~1 km resolution were also collected from Gibbs et al. (2014). The SOC stocks for natural grasslands were calculated for 18 agro-ecological zones (AEZs) around the globe as a function of soil carbon concentration, bulk density, and depth (as per Guo & Gifford, 2002) using data from the Harmonized World Soil Database (Fischer et al., 2008). The GLC2000 land-cover map (Bartholome & Belward, 2005) was used to identify natural grassland areas. Finally, the average natural grassland SOC stock was calculated for each of the AEZs. The change in soil nitrogen was directly related to the change in soil carbon and was calculated using the equation from Flynn et al., (2012). All SOC values were based on the top 30 cm of soil. 
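Tying the transformation impact defined at the start of this subsection to the carbon-stock differences just described, a minimal sketch follows. It assumes that the land-conversion GHG pulse times BF_GHG and the habitat-loss factor BF_HL,trans are added and then annualized over the plantation period PT before scaling by CEF, consistent with the statement that 3.3% (1/30) of the conversion impact is allocated to one cropping year; names and numbers are illustrative only.

```python
def transformation_impact(cef_mj_m2_yr, m_trans_ghg, bf_ghg_val, bf_hl_trans,
                          plantation_years=30.0):
    """Transformation impact I_trans (PDF yr MJ-1).

    m_trans_ghg : GHG pulse from land conversion (kg CO2-eq m-2), e.g. derived from the
                  differences in biomass carbon, SOC and soil nitrogen stocks
    bf_hl_trans : biodiversity factor of transformed land (PDF yr m-2)
    The one-off conversion burden is spread over the plantation period."""
    per_m2 = m_trans_ghg * bf_ghg_val + bf_hl_trans
    return per_m2 / (plantation_years * cef_mj_m2_yr)

# Lengthening the plantation period from 30 to 100 years dilutes the burden per MJ
for pt in (30.0, 100.0):
    print(pt, transformation_impact(cef_mj_m2_yr=9.6, m_trans_ghg=15.0,
                                    bf_ghg_val=1.76e-15, bf_hl_trans=5e-15,
                                    plantation_years=pt))
```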
| Other GHG emissions CO 2 , N 2 O and CH 4 emissions during the biofuel production processes were collected from the ecoinvent database (Weidema et al., 2013;Wernet et al., 2016). This included emissions from both production and application of various inputs, such as pesticides, irrigation water, and machinery use during farming and refining. Country-specific data were preferred, but for missing countries global or rest-of-the-world data were used. Direct and indirect emissions from nitrogen fertilizer application were calculated separately using the methods from Shcherbak, Millar, and Robertson (2014) and the IPCC (2006), respectively. The amount of nitrogen fertilizer applied was collected from Elshout et al. (2015). In order to convert quantities of N 2 O and CH 4 to CO 2 -equivalents, they were multiplied by their respective global warming potentials (GWPs) of 265 and 30 kg CO 2 eq kg −1 , respectively, in the case of the 100-year time horizon (IPCC, 2013) and 79 and 5 kg CO 2 eq kg −1 , respectively, in the case of the 1,000-year time horizon . The impacts of biogenic GHGs and nonbiogenic GHGs were all considered equal (as per Hanssen, Duden, Junginger, Dale, & Hilst, 2017). However, the biogenic GHGs emitted upon combustion of the biofuel are not considered, given that the atmospheric residence time of these GHG can be considered net zero when the biofuel is produced from annual crops (Cherubini, Peters, Berntsen, Strømman, & Hertwich, 2011). An overview of data collected from the ecoinvent database can be found in Table S2. | Biodiversity impacts related to habitat loss The BFs for both land occupation and land transformation were collected from Chaudhary and Brooks (2018), who calculated at an ecoregion level the average global impact of transforming and occupying annual croplands on species of all terrestrial taxa (mammals, birds, amphibians, reptiles and vascular plants) relative to the total species richness of these taxa across the globe. Their factors are calculated by combining a Species-Area-Relationship model with the affinity to broad land use types of 22 386 species of mammals, birds and amphibians from the IUCN Red List Habitat Classification Scheme (IUCN, 2015) and reptile and plant data from Newbold et al. (2015). The use of such Species-Area-Relationship-based BFs to calculate the biodiversity impact of land use associated with a products' life cycle was recently recommended by the UNEP-SETAC life cycle initiative (Teixeira et al., 2016;UNEP, 2017). We determined which ecoregion each grid cell with feedstock cultivation was located in and selected the corresponding BFs (see Table S3). Chaudhary and Brooks (2018) distinguish between three farming intensity-levels, and we used data for minimal use for the low input-no irrigation scenario, and data for intense use for both high input scenarios. | Biodiversity impacts related to water stress For all feedstocks grown under high input-irrigated management, the biodiversity impact of water stress was accounted for. As spatially-explicit data on water use by croplands was lacking, we used water consumption data from ecoinvent (Weidema et al., 2013). Only the water used during feedstock cultivation was considered, given that water withdrawn during feedstockto-biofuel processing is minimal compared to water usage for irrigation (Mielke, Diaz Anadon, & Narayanamurti, 2010). Country-specific impact factors for water stress were collected from LC-IMPACT (http://www.lc-impact.eu; Verones et al., 2016) (Table S4). 
These factors account for the relative species loss of freshwater species, terrestrial species living in river sheds, and terrestrial vascular plant species outside the wetlands. | Biodiversity impacts related to GHG emissions The IAGTP was set at 4.76 10 −14°C yr kg CO 2 eq −1 for a 100year time horizon, based on Joos et al. (2013). For the effect factor, we used data from Urban (2015), who predicts that temperatures 0.8°C above preindustrial levels will cause the extinction of 2.8% of the terrestrial species and that temperatures 4.3°C above preindustrial levels cause the extinction of 15.7% of the terrestrial species. An effect factor of 0.037 PDF °C −1 Celsius was calculated from the differences between these two scenarios, i.e., an average of 3.7% global species loss is expected per degree Celsius global mean temperature rise. Combining the IAGTP from Joos et al. (2013) and the effect factor from Urban (2015), we derived a BF GHG of 1.76 10 −15 PDF yr kg CO 2 eq −1 . Using the same approach and data sources, a BF GHG of 1.57 10 −14 PDF yr kg CO 2 eq −1 was calculated for the 1,000-year time horizon. | Reference calculations The biodiversity impact of producing and combusting fossil fuels (I f,w ) was calculated as a reference to the impact of producing biofuels. GHG emissions (from combustion as well as e.g. mining and refining of the crude oil), habitat loss (due to land transformation and occupation) and water stress (mostly due to cooling water extraction) were included in the calculations: where the type of fossil fuel w was petrol or diesel, as a reference to bioethanol and biodiesel, respectively; M GHG is the GHG emission during fossil fuel production and combustion (in kg CO 2 eq MJ −1 ); A trans is the area of land transformation required for fossil fuel production (in m 2 MJ −1 ); A occ is the land area occupied for fossil fuel production (in m 2 ·yr MJ −1 ); and W is the amount of water used during fossil fuel production (in m 3 MJ −1 ). Area-weighted global averages of the biodiversity impact factors of habitat loss were provided by Chaudhary (personal communication; 30-04-2018), and those for water use were collected from LC-IMPACT (http:// www.lc-impact.eu; Verones et al., 2016). Data on GHG emissions, land use and water use for the production and combustion of petrol and diesel were collected from the ecoinvent database (Weidema et al., 2013;Wernet et al., 2016) and its documentation (Jungbluth, 2007). The GWPs mentioned above were used to convert emissions of N 2 O and CH 4 to CO 2 -equivalents. No by-products of fossil fuel production were considered. | Fuel blends Default calculations were performed for the production of pure bioethanol and biodiesel. However, biofuels are most often used in blends with petrol and diesel at varying mixing ratios, such as E25 (25 vol% bioethanol, 75 vol% petrol) commonly used in Brazil (Macedo, Seabra, & Silva, 2008), and B5 (5 vol% biodiesel, 95 vol% diesel) in the EU (Kousoulidou, Fontaras, Ntziachristos, & Samaras, 2010). We therefore calculated the global relative species loss related to the production of the most common fuel blends (I x+w ), i.e., E10, E25, E85, B5 and B20, as follows: where φ is the volume fraction of biofuel and fossil fuel in the fuel blend, and ρ is the fuel density (in L kg −1 ). Data on fossil fuel and biofuel densities were collected from Atabani et al. (2012) and Yüksel and Yüksel (2004), and can be found in Table S5. 
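Two of the quantities used above can be made concrete with a few lines. The first part of the sketch reproduces the arithmetic behind BF_GHG (the IAGTP multiplied by the species-loss-per-degree effect factor derived from the two Urban (2015) scenarios). The second part shows one possible reading of the fuel-blend calculation, namely volume-fraction weighting of the per-litre impacts of the pure fuels; that weighting, the variable names and the numbers are my assumptions, not the paper's stated formula.

```python
# Effect factor: (15.7% - 2.8%) species loss over (4.3 - 0.8) degrees of warming
effect_factor = (0.157 - 0.028) / (4.3 - 0.8)      # ~0.037 PDF per degree C
iagtp_100yr = 4.76e-14                             # degree C yr per kg CO2-eq (Joos et al., 2013)
bf_ghg_100yr = iagtp_100yr * effect_factor         # ~1.76e-15 PDF yr per kg CO2-eq
print(effect_factor, bf_ghg_100yr)

def blend_impact_per_litre(i_bio, ec_bio, rho_bio, phi_bio,
                           i_fossil, ec_fossil, rho_fossil):
    """Impact of a fuel blend per litre, assuming volume-fraction weighting.

    i_*     : impact per MJ (PDF yr MJ-1)
    ec_*    : energy content (MJ kg-1)
    rho_*   : density expressed as litres per kg, as in Table S5
    phi_bio : volume fraction of biofuel (e.g. 0.25 for E25)"""
    per_litre_bio = i_bio * ec_bio / rho_bio
    per_litre_fossil = i_fossil * ec_fossil / rho_fossil
    return phi_bio * per_litre_bio + (1.0 - phi_bio) * per_litre_fossil

# Hypothetical E25-style blend with placeholder inventory numbers
print(blend_impact_per_litre(i_bio=5e-15, ec_bio=26.8, rho_bio=1.27, phi_bio=0.25,
                             i_fossil=1e-16, ec_fossil=42.8, rho_fossil=1.33))
```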
Impacts on the global biodiversity were calculated per liter of fuel, rather than per MJ, in order to avoid uncertainty from mixed fuel energy contents. Potential impacts of the blending process were not covered in the calculations. | Variable importance We determined to what extent the variation in biodiversity impact was attributable to the producing country, crop type, farm management strategy, plantation time, and time horizon of choice by using an ANOVA on the log-transformed biodiversity impact values. The unexplained variance (i.e., residual) can be attributed to the remaining spatial variation in biodiversity impacts within countries. | 1.1. Biofuels versus fossil fuels The occupation and transformation impact of biofuel production on global relative species loss was calculated for a total of 35,699 grid cells in the main biofuel-producing countries. Overall, the global relative species loss caused by bioethanol and biodiesel production systems turned out to be larger than the global relative species loss caused by fossil diesel and petrol production in more than 90% of the locations. Replacing fossil fuels with biofuels would on average increase the time-integrated global relative species loss by a factor of 30-128. Neglecting land transformation and only accounting for land occupation (referring to situations where feedstocks are grown on already established croplands), biodiversity impact of biofuel production still exceeds the impact of fossil fuel production ( Figure 1). Bioethanol produced from Chinese corn and Brazilian sugarcane was found to have the largest median impact on biodiversity. The impacts of bioethanol production in these countries also showed highest spatial variation, with outcomes ranging +/-a factor of 19 in the case of Chinese corn and 22 in the case of Brazilian sugarcane (based on 90% range; Figure 1a). The biodiversity impacts of fuel blends increase with the share of biofuel in the mix (Figure 2). On average, B5 from European rapeseed has the smallest impact followed by E10 and B5 from USA corn and soybean, respectively. E85 from Chinese corn and Brazilian sugarcane are the worst performing fuel blends. | Environmental stressor importance The impact of habitat loss due to land transformation and occupation dominates the total impact of biofuel production, as it was found to be two to three orders of magnitude higher than the impacts of water stress and GHG emissions. The biodiversity impact of water stress is found to be negligible, except for the production of corn-based bioethanol in the USA, where it contributes more than 25% in 10% of the locations ( Figure S1a). When neglecting the impact of land transformation, the biodiversity impact of land occupation is still dominant for all biofuel production systems ( Figure S1b). | Variable importance Country and management type were found to explain 17% and 11% of the variance, respectively, while the other variables explain less than 5% (Table S6). The residual represents the spatial variation within countries, and attributes to 67% of the variance. This indicates that the environmental performance of biofuels would improve more by selecting the most suitable locations within the countries currently producing biofuels than by switching to production in other countries, adopting different farm management strategies, growing crops for a longer time period, or approaching the impact of GHG emissions in an alternative way. 
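One way to reproduce the variance decomposition just described (shares of variance explained by country, crop, management, plantation time and time horizon, with the residual read as within-country spatial variation) is an ordinary ANOVA on log-transformed impacts. The sketch below assumes a tidy table with one row per grid cell and scenario, and uses statsmodels as the fitting tool, which is my choice and not necessarily the authors'.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def variance_shares(df: pd.DataFrame) -> pd.Series:
    """ANOVA on log-transformed biodiversity impacts; returns the fraction of the
    total sum of squares attributable to each factor (the residual row gives the
    unexplained, within-country spatial variation)."""
    model = smf.ols(
        "np.log(impact) ~ C(country) + C(crop) + C(management)"
        " + C(plantation_time) + C(time_horizon)",
        data=df,
    ).fit()
    table = anova_lm(model)
    return table["sum_sq"] / table["sum_sq"].sum()

# Tiny synthetic example; the real input would be the 35,699 grid-cell results
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "country": rng.choice(["US", "BR", "CN"], 300),
    "crop": rng.choice(["corn", "soybean", "sugarcane"], 300),
    "management": rng.choice(["low", "high", "irrigated"], 300),
    "plantation_time": rng.choice([30, 100], 300),
    "time_horizon": rng.choice([100, 1000], 300),
    "impact": rng.lognormal(mean=-30, sigma=1.0, size=300),
})
print(variance_shares(df))
```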
| Sensitivity analysis When assuming a plantation time of 100 years, the impact of land transformation is distributed over a larger amount of crop harvested, which lowers the median global relative species loss per TJ of bioenergy produced by a factor of 1.6-2.7 (Figure 1b). On the other hand using a 1,000-year time horizon as the starting point for the time-integrated impact of GHG emissions hardly changes the median global relative species loss of the biofuel production systems ( Figure S2a) owing to the negligible contribution of GHG emissions to the total impact. However, the impacts of fossil petrol and diesel production more than doubled in case of a 1,000-year time horizon, which caused the land occupation impacts in about 25% of the European rapeseed-producing locations to become lower than the total impact of fossil petrol production. The same holds for the extreme scenario with a 1,000year time horizon for GHG impacts and a plantation time of 100 years ( Figure S2b). | DISCUSSION We show that potential global species loss per unit of first-generation biofuel production for transport exceeds the biodiversity impacts of their fossil counterparts. The models used in the present study come, however, with a number of limitations. First, all feedstocks were assumed to be solely mono-cropped; however, many farmers use multi-cropping systems. For example, approximately one-third of the farmlands in the Midwest USA alternates between corn and soybean biannually (sometimes also including other crops, such as wheat or alfalfa) (Borchers, Truex-Powell, Wallander, & Nickerson, 2014;Plourde, Pijanowski, & Pekin, 2013). In this situation, the overall impact of bioethanol and biodiesel production would equal the average of the impacts of USA corn and USA soybean. Alternatively, multi-cropping within 1 year would lower the impact of land transformation, as impacts are allocated among more crop biomass in the same number of years. A complete investigation of the effect of crop rotation on the relative global species loss exceeded the scope of this study, but it could potentially entail an increase in crop yield, greater soil carbon and soil nitrogen storage, and less fertilizer application, compared to a situation of mono-cropping. Whether or not this would sufficiently improve the performance of the first-generation biofuels to outperform fossil fuels should be investigated in future work. Second, our outcomes rely heavily on the data input, such as the crop yields simulated by SPAM (http://mapspam. info). Recently, Anderson, You, Wood, Wood-Sichra, and Wu (2014) analysed four major agricultural models including SPAM, and identified considerable differences in crop yields. F I G U R E 1 Global relative species loss due to bioethanol and biodiesel production when adopting a plantation time of (a) 30 years and (b) 100 years, and considering GHG impacts over a 100-year time horizon. The total impact is the sum of the impacts of occupation (also provided separately) and transformation in that country. The boxes show the first quartile, median, and third quartile, and the ends of the whiskers show the 10th and 90th percentiles of the grid-specific impacts. The dashed line shows the impact of the fossil alternatives, i.e., petrol (upper graph) and diesel (lower graph) F I G U R E 2 Global relative species loss due to production of various common fossil fuel-biofuel blends, when adopting a plantation time of 30 years and considering GHG impacts over a 100-year time horizon. 
Only combined impacts of occupation and transformation are shown. The boxes show the first quartile, median, and third quartile, and the ends of the whiskers show the 10th and 90th percentiles of the grid-specific impacts. Results for other scenarios can be found in Figure S3a Still, as there is no clear preference for any alternative model, we consider SPAM as appropriate for the purpose of the current work, especially given the useful disaggregation in three farm management systems it provides. Third, while the present study bases the biodiversity impacts of land use, water stress and climate change on recent, scientifically acclaimed and, to our opinion, most suitable methods, the biodiversity loss factors are not without uncertainty. For land use, this is demonstrated by the fact that the land use impact factors from Chaudhary and Brooks (2018) differ two orders of magnitude from those derived in previous work (Chaudhary et al., 2015) owing to methodological choices. The biodiversity loss factors are based on a comprehensive meta-analysis from Urban (2015). Climatic tolerance of species is, however, difficult to quantify, and evolutionary changes in populations cannot be predicted (Araújo & Rahbek, 2006). Furthermore, the meta-regression model does not account for the fact that a response to climate change by one species will have indirect impacts on the species that depend on them (i.e., biotic interactions at the community level) (Bellard, Bertelsmeier, Leadley, Thuiller, & Courchamp, 2012). Also, the LC-IMPACT method we applied, assumes that the species losses of the three main drivers are mutually exclusive, whereas the species lost due to the three stressors may actually partly overlap. Note, however, that given the domination of land use as stressor in the total impacts of biofuel production, the influence of the assumption of simple additive effects is relative small. Finally, it is important to emphasize that we do not take into account any potential impacts that occur abroad due to relocation of food or feed croplands after biofuel feedstock production has replaced the local food or feed production, i.e., indirect land-use change (Searchinger et al., 2008;Verstegen et al., 2015). In our study, we always quantify species loss of land use and GHG emissions compared to the natural state, regardless of the current land use at the location. This means that biofuel production at a certain location is always evaluated compared to the natural reference. We may underestimate global species loss due to biofuel production, in situations where biofuel production results in indirect land use change in areas with higher species richness and/or higher initial carbon stocks. This would be the case, for instance, if producing corn-based bioethanol from the US leads to indirect agricultural land transformation in the tropical rainforest of Brazil (e.g. Keeney & Hertel, 2009). In conclusion, the current study quantified the impact of first-generation biofuels on biodiversity due to GHG emissions, land-use-induced habitat loss, and water-use-induced habitat loss. Our findings suggest that first-generation biofuel production in the countries evaluated here is unfavourable compared to fossil fuel use in the transportation sector, even if the biofuel feedstocks are grown on existing cropland for a period of 100 years. Habitat loss following land transformation and occupation was found to be the dominant cause of global species loss. 
Hence, when aiming to protect global biodiversity, the present work suggests that policy makers should support the development of other renewable energy sources with lower land demand than first-generation biofuels, such as third-generation biofuels (Correa, Beyer, Possingham, Thomas-Hall, & Schenk, 2017). Further research is required to assess the biodiversity impacts of other renewable energy sources for the transport sector.
2019-04-03T13:09:09.126Z
2019-03-06T00:00:00.000
{ "year": 2019, "sha1": "d1d7c035f87a6bf114dbb2449b8b48eed6fbd3e9", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/gcbb.12597", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d1d7c035f87a6bf114dbb2449b8b48eed6fbd3e9", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science", "Medicine" ] }
259202473
pes2o/s2orc
v3-fos-license
Shannon Entropy and Herfindahl-Hirschman Index as Team’s Performance and Competitive Balance Indicators in Cyclist Multi-Stage Races It seems that one cannot find many papers relating entropy to sport competitions. Thus, in this paper, I use (i) the Shannon intrinsic entropy (S) as an indicator of “teams sporting value” (or “competition performance”) and (ii) the Herfindahl-Hirschman index (HHi) as a “teams competitive balance” indicator, in the case of (professional) cyclist multi-stage races. The 2022 Tour de France and 2023 Tour of Oman are used for numerical illustrations and discussion. The numerical values are obtained from classical and and new ranking indices which measure the teams “final time”, on one hand, and “final place”, on the other hand, based on the “best three” riders in each stage, but also the corresponding times and places throughout the race, for these finishing riders. The analysis data demonstrate that the constraint, “only the finishing riders count”, makes much sense for obtaining a more objective measure of “team value” and team performance”, at the end of a multi-stage race. A graphical analysis allows us to distinguish various team levels, each exhibiting a Feller-Pareto distribution, thereby indicating self-organized processes. In so doing, one hopefully better relates objective scientific measures to sport team competitions. Moreover, this analysis proposes some paths to elaborate on forecasting through standard probability concepts. Introduction Team ranking is a complex matter, as debatable as individual ranking, and it has been well known in modern times since Condorcet's paradox [1] and Arrow's impossibility theorem [2]. This matter is highly relevant in sports competitions and is also a common practice in almost all aspects of social life, whether real or virtual. It is unnecessary to further point out the examples of university rankings, academic hiring, or promotion. Regardless of the method used, resulting ranks lead to prestige or contempt and are often accompanied by monetary rewards. Most of the time, in social life, ranking indicators encourage progress and help avoid mediocrity. Ranking is derived from a set of indicators, and there are many such indicators used in sports to rank athletes [3] or teams [4,5]. In team competitions like football (soccer), basketball, hockey, rowing, etc., where the focus is sometimes on specific athletes, the quality of a team is determined based on various quantitative measurements of their "quality" or "value". The ranking "problem" often arises due to the fact that competitions involve duals between athletes or teams, with incomplete round-robin formats. The case of cycling races is somewhat unique. It is widely accepted that cycling races are won by individual riders, but the role of the team is of crucial importance [6,7]. Quoting Cabaud et al. [7]: "A large proportion of cyclists in a race take part in support of another rider, meaning that they do not care about their personal result but instead try to help their team leader(s). Moreover, a team leader generally has one specific objective among a range of possible ones." In fact, cyclist races are quite different from other sport competitions, which emphasise the performance (physical or mental) of individual athletes, as in golf, boxing, escalade, tennis, judo, climbing, chess, bridge, etc. 
In cyclist races, many teams, typically consisting of an equal number of athletes at the start of the race, compete in order to be rewarded through monetary prizes, UCI (UCI: Union Cycliste Internationale, e.g., https://fr.uci.org/, accessed on 1 May 2023) points, or glory. The competition can be highly strategic: some athletes are geared toward a certain type of prowess, such as a time trial, sprint, mountain climbing, and other specialized cyclist races, not performed on roads. For simplicity, races that take place "off the roads" are not considered in the discussion below, although it is evident that the concepts discussed here can be extended to such cases. Therefore, this paper specifically focuses on professional road cycling, particularly multi-stage races. Team Ranking The literature about ranking teams, in sport, and in various socio-economic matters, is huge, but is much less abundant about cycling teams. The papers of Sorensen 2000 [8] and Vaziri et al. 2018 [9] appear to be the most pertinent ones about ranking teams in (various) sports, because of their general viewpoints. More pertinent to our concern, a 2020 paper by Ausloos [10] studied data of professional cyclists from the Tour de France, along the rank-size law method, deducing financial considerations from classical UCI measures (time-based) team ranking. Along a similar perspective, Ficcadenti et al. [11], in 2022, discussed the above-mentioned rank-size law on a soccer competition in Italy, thereby observing some regularities in team values within a ranking process. One should notice that papers said to be on "performance ranking" do not truly rank teams (or athletes) from their score result at the finishing line, but rather refer to body physiology and/or athletes' physical and biochemical conditions. These aspects, outside the present discussion, are not considered here. For completeness, one should point to a non-scholarly website by ProCyclingStats (PCS) as a data source where interesting metrics are calculated; see https://www.procycli ngstats.com/info/point-scales (accessed on 1 April 2023). Competitive Balance Fairness and equal chance opportunities are a priori pillars of sport competition. Hence, the literature on competitive balance is so huge, stressing qualitative and quantitative aspects, this paper mostly pertains to league activities rather than to teams point of view [12]. The construction of rules to quantify competitive balance is of utmost importance, particularly for the lower-ranked teams whose main sources of revenue come from sponsorship and viewership [13,14]. This is why specific strategies are employed in cyclist races to ensure that teams, which may not have the most likely winners, have "attacking riders" right from the starting line. Therefore, a dynamic measure is required for cyclist races to adequately capture and reward competitive balance. However, such a measure does not appear to be currently available. Physiological performance is often linked to competitive balance as well [7]. For additional perspectives, readers may refer to economic-related papers, such as those focusing on the Tour de France [15,16] (During the writing of this paper, an interesting example of indirect imbalance occurred in the 2023 Giro when organizers provided helicopters to teams for transporting a few riders down from the Gran Sasso finish line after stage 7, but at a cost that only a few teams could afford [17]). 
Shannon Entropy and Herfindahl-Hirschman Index The cyclist teams' sporting performance is based here on the team's intrinsic entropy, while the Herfindahl-Hirschman index is used as the team's competitive balance indicator; both are mathematically defined here below, and thereafter adapted to the studied sport cases. The literature abounds on both measures, but not much on cyclist races. Shannon Entropy One can hardly comment in an original way on the use of entropy for measuring complex disorder [18]. Alas, I have not found many papers relating entropy to sport competition results, with the exception of [19][20][21]; however, their approach is somewhat different from that of the present framework. This lack of papers in the literature is surprising, since one could easily transform the occurrence of sport results into probabilities, making it potentially useful for audacious forecasting and betting purposes, given appropriate scaling. Thus, let it be considered that this paper is a pioneer contribution to the field. In information science, particularly in scientometrics, the concept of entropy (S) is a classical one for discussing uncertainty in measures [18], although it is sometimes misunderstood or abusively misinterpreted. Its mathematical formulation reads: S = − Σ_{i=1}^{N} p_i ln(p_i), (1) where p_i ≡ y_i / Σ_j y_j is the probability of the number of occurrences, y_i, of the i-th event among N possible ones. (In information science, entropy is usually defined through a log in base 2, in "shannon units"; in thermodynamics, the natural log, ln ≡ log_e, is used, so that S is given in "nat units", as it is here; a base change is easy, using the formula log_a(b) = log_x(b)/log_x(a), whence log_2(a) = ln(a)/ln(2), with ln(2) = 0.69315.) The entropy maximum occurs when all p_i are equal to each other, i.e., when the distribution is uniform, p_i = 1/N, giving S_max = ln(N). Notice for the present study that the entropy maximum occurs when each team wins as many stages as another; it may happen that some teams do not win any stage. The team winning a stage might not be the team of the winning rider. It may also (often) happen that there are not enough stages for each team to win at least one stage. The minimum entropy, S = 0, occurs when one team wins all L stages. Herfindahl-Hirschman Index In brief, measurements of competitive balance are often based on the "Standard Deviation of Win Percentages" [22]. Other measures have been discussed [23,24]. Aside from the "Standard Deviation of Win Percentages" [22], the Herfindahl-Hirschman index is the most frequently used measure [25,26]. The HHi is a concentration measure, typically used in business to detect monopolies by measuring the "size" of companies through their market share, hence providing some numerical relationship between the firms and the competition they face. This measure can be adjusted so that it reflects some aspect of competitive balance in sport by calculating the distribution of points, won through time or place, obtained by riders in a race competition. Notice that the HHi has been used on cyclist races such as the Tour de France in [10], but with a focus on its "market competition" aspect, i.e., measuring teams' financial gains. Recall that the Herfindahl-Hirschman index (HHi) is a measure of concentration [27]. 
It is usually applied to describe company sizes (from which the concentration is measured with respect to the entire market): a HHi below 0.01 indicates a highly competitive index (in more usual language, from a portfolio point of view, a low HHi implies a very diversified portfolio). When applied to the case of sport team ranking, the HHi measure is proposed as an indicator of the level of fair competition among teams, rather than a measure of wealth concentration. Therefore, it serves as a competitive balance indicator. (In political economy and finance, the HHi in Equation (3) represents the sum of the squares of the market shares of the largest companies; traditionally, when expressed as fractions, with N = 50. However, in the context of sports, typically N < 50.) Formally, the HHi measure is defined as: HHi = Σ_{i=1}^{N} (y_i / Σ_j y_j)^2, (3) where y_i represents the number of wins by the i-th team, and N is the appropriate number of teams. The higher the value of HHi, the smaller the number of teams with a large number of wins; in other words, HHi is a measure of the number of best-performing teams in a given competition. Thus, an increase in HHi represents a decrease in competitive balance. In conclusion of this Introduction section, and in order to prepare the numerical illustrations and the subsequent discussion of findings and features, let it be mentioned that the data pertain to the Tour de France 2022 and the Tour of Oman 2023. In Section 2.2, I explain where these illustrative data can be obtained, i.e., from the official organizer websites. Application to Multi-Stage Races Let us define the notations for clarity moving forward. Consider a race with N teams and a total of M riders participating in an L-stage race. Assuming that all teams start with an equal number of riders, there are K = M/N riders per team initially. However, due to rider abandonments during certain stages (s), or riders not starting certain stages for reasons not relevant here, in a given team (#) there are K_# riders who finish stage s, such that M(s) = Σ_# K_#(s); s = 1, . . . , L. For future reference, it is important to note that each rider (i) is assigned a bib number (d_i) by the race organizers at the start of the race. On the 2022 Tour de France (TdF), there were L = 21 stages and N = 22 teams. Each team started with eight riders; therefore, M = 176. In the case of the 2023 Tour of Oman (ToO), there were N = 18 teams competing over L = 5 stages. Each team, with the exception of two of them, started with seven riders; therefore, M = 124. Data The relevant data can be obtained from the official websites of the race organizers or from media sources. To conduct the necessary treatments and analyses, it is essential to download the results for the twenty-one stages and five stages, respectively. The official reports provided by the organizers come in various formats. In order to ensure consistency and facilitate subsequent analysis, certain data treatments have been performed manually. For instance, one can begin by accessing the stage results of the 2022 Tour de France from the official website https://www.letour.fr/en/rankings (accessed on 1 April 2023). The stages can be reviewed in reverse order starting from stage 21 and progressing backwards. Similarly, for the 2023 Tour of Oman, the official website https://www.tour-of-oman.com/en (accessed on 19 March 2023) can be visited to retrieve the final ranking in a single step through https://www.tour-of-oman.com/en/rankings (accessed on 15 February 2023). 
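As a toy illustration of Equations (1) and (3) before turning to the race indicators, the snippet below computes S (in nat units) and HHi from a hypothetical distribution of stage wins among teams; the win counts are invented, and teams with zero wins simply contribute nothing to S.

```python
import math

def shannon_entropy(wins):
    """Shannon entropy (nat units) of a win distribution; 0 * ln 0 is taken as 0."""
    total = sum(wins)
    return -sum((w / total) * math.log(w / total) for w in wins if w > 0)

def hhi(wins):
    """Herfindahl-Hirschman index of the same distribution."""
    total = sum(wins)
    return sum((w / total) ** 2 for w in wins)

# Hypothetical 21-stage race shared among 22 teams (13 teams win no stage)
wins = [6, 4, 3, 2, 2, 1, 1, 1, 1] + [0] * 13
print(shannon_entropy(wins), math.log(len(wins)))   # realized S versus the maximum ln N
print(hhi(wins), 1 / len(wins))                     # realized HHi versus the balanced bound 1/N
print(shannon_entropy([21] + [0] * 21), hhi([21] + [0] * 21))   # one team wins all: S = 0, HHi = 1
```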
Coming back to the data retrieval, the previous stages can then be accessed and reviewed in reverse order. Notations The classical UCI counting goes as follows: the times (t^(#)_{i,s}) of the fastest 3 riders (i = 1, 2, 3) of a team (#) after a stage (s) are aggregated, in order to give the "team time" for this given stage, say T^(#)_s = Σ_{i=1}^{3} t^(#)_{i,s}. The "final team time" results from the aggregation of each "team stage time", T^(#)_L = Σ_{s=1}^{L} T^(#)_s, even though several of the riders considered for the aggregated "team stage time" might not have finished the whole race. One may also rank teams according to the finishing place of the first three riders of a team at the end of each stage. (Notice that this measure differs from the so-called "green jersey race" for riders in the Tour de France). Let us introduce the relevant notations. Let such riders be at places p^(#)_{i,s} (i = 1, 2, 3). Thereafter, one can define another objective team ranking place after stage s from P^(#)_s = Σ_{i=1}^{3} p^(#)_{i,s}, and calculate, at the end of the multi-stage race, for the final ranking, P^(#)_L = Σ_{s=1}^{L} P^(#)_s, (5) according to the team placing at the different stages, again irrespectively of the involved riders. Notice that these t and p lists do not necessarily give the riders in the same order, due to the last (3) kilometer(s) neutralisation rule, allowing riders to have "technical problems", tire punctures, even falls, or to willingly stop racing, along such a distance. Moreover, one can define the "adjusted team final time", A^(#)_L = Σ_{s=1}^{L} Σ_{j=1}^{3} t^(#)_{j,s}, (6) where, in Equation (6), j = 1, 2, 3 refers to the "three fastest" riders of the team (#) having completed all L stages. Thus, A^(#)_L can only be so obtained at the end of the multi-stage race. Let it be strongly emphasized again that these three "j" riders might be quite different from the various three "i" riders having contributed to any T^(#)_s. The same remark applies to the team ranking based on the final place of the three "best riders" at the end of the race: P^(#)_L has been defined in Equation (5); recall that this P^(#)_L measure refers to many different riders. To further refine the team ranking by considering only the riders (j) who successfully finish the race, ensuring a more comprehensive measure of team performance throughout the entire race, another quantity is defined as B^(#)_L = Σ_{s=1}^{L} Σ_{j=1}^{3} p^(#)_{j,s}. (7) This is the place-based analogue of A^(#)_L as defined in Equation (6). It is important to reiterate that in Equation (7), the index j ranges from 1 to 3, representing the three best-placed riders of team (#) in the various stages, who have successfully completed all L stages. This leads to four different team rankings: in each case, the best teams are those which have the lowest values of the four measured "variables", according to the ranking from measures in ascending order. One can easily plot and observe the distributions of such values. Moreover, their ranges being found to be finite, one can scale each result with respect to the respective sums. This allows for the definitions of a posteriori "value probabilities", which can be interpreted as "percentages of concentrations". These probabilities, denoted as p_i to avoid confusion with y_i in Equations (1) and (3), can be incorporated into the definitions of S and HHi to obtain S = − Σ_i p_i ln(p_i) and HHi = Σ_i p_i^2; the former represents the classical formulation of Shannon entropy, and the latter is also the standard way of measuring competitive balance through the "Standard Deviation of Win Percentages" (Trandel, 2011). It is important to note that Σ_i p_i = 1 in these equations. 
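The four team indicators just defined, and the scaled probabilities fed into S and HHi, can be computed directly once per-rider, per-stage times and places are available. The sketch below assumes one record per rider and stage and follows one reading of the rules above: per stage, the three fastest (or best-placed) riders are kept for T_L and P_L, while for A_L and B_L only riders having completed all L stages are eligible. Field and function names are mine; applying this to the downloaded stage results would reproduce the four rankings and the associated S and HHi values.

```python
import math
from collections import defaultdict

def team_indicators(records, n_stages):
    """records: iterable of dicts with keys team, rider, stage, time_s, place.
    Returns {team: {"T_L", "P_L", "A_L", "B_L"}} following the definitions above."""
    by_team_stage = defaultdict(list)
    stages_done = defaultdict(set)
    for r in records:
        by_team_stage[(r["team"], r["stage"])].append(r)
        stages_done[r["rider"]].add(r["stage"])
    finishers = {rid for rid, done in stages_done.items() if len(done) == n_stages}

    out = {}
    for team in {t for t, _ in by_team_stage}:
        tot = {"T_L": 0.0, "P_L": 0.0, "A_L": 0.0, "B_L": 0.0}
        for s in range(1, n_stages + 1):
            rows = by_team_stage[(team, s)]
            fin = [r for r in rows if r["rider"] in finishers]
            tot["T_L"] += sum(sorted(r["time_s"] for r in rows)[:3])   # 3 fastest, any rider
            tot["P_L"] += sum(sorted(r["place"] for r in rows)[:3])    # 3 best placed, any rider
            tot["A_L"] += sum(sorted(r["time_s"] for r in fin)[:3])    # 3 fastest finishers
            tot["B_L"] += sum(sorted(r["place"] for r in fin)[:3])     # 3 best-placed finishers
        out[team] = tot
    return out

def entropy_and_hhi(values):
    """Scale team indicator values to probabilities p_i = v_i / sum(v), then
    return (S, HHi) as used for the team performance and balance measures."""
    total = sum(values)
    probs = [v / total for v in values if v > 0]
    return (-sum(p * math.log(p) for p in probs), sum(p * p for p in probs))
```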
Statistical Characteristics A summary of the (rounded) main statistical characteristics of the time and place indicator distributions, with the notations defined here above, for the 2022 Tour de France and the 2023 Tour of Oman, for their respective number of competing teams is given in Table 1. It is observed from the Table that both investigated competitions are on a quite different level from a sportive point of view: e.g., the number of teams and the number of stages are quite different. From a purely statistical perspective, it is worth noting that the time or place measures, both with and without the constraint of considering only the finishing riders in the final team value or team performance measures, cover a wide range of scales. Further comments on this topic will be discussed in the next section Section 4. Firstly, it is important to note that A L . This clear trend emphasizes the significance of the constraint on the "finishing riders only" in obtaining a more objective measure of team value and team performance at the conclusion of a multi-stage race. Discussion The time and place distributions are displayed in Figures 1-4. The rank-time laws for the Tour de France, as shown in Figure 1, exhibit a well-defined cubic form with a high coefficient of determination (R 2 ≥ 0.985). It is worth noting that only the integer values on the x-axis hold significance in this context. On the other hand, for the Tour of Oman (ToO), which is a shorter race, the variations in team performance are relatively weak for most teams, except for the last three. As a result, fitting an empirical law is less meaningful. However, for the sake of completeness, it should be mentioned that a similar cubic fit with R 2 ≥ 0.985 can be obtained if the weakest three teams (HPM, TSG, ONT) are excluded from the fitting process. Interestingly, nevertheless, one can observe that aside from distinguishing two subdistributions in the ToO results, observing again the data for TdF, Figure 3, one sees four possible sub-distributions, -in fact, related to a team's "UCI quality level". Concerning the entropy data, see . Best fits to simple polynomials can be attempted. A best-fit curve to an empirical cubic function of the team entropy derived from the final time T L and adjusted final time A L , both defined here above, distributions of the 22 teams having competed on the Tour de France 2022 is displayed on Figure 5. A similar fit for the team entropy, but derived from the final place P L and adjusted final place A L measures, is shown on Figure 6. Concerning the the team HHi values, they can also be displayed according to the team rank, both in increasing order. The best-fit curve to an empirical cubic function for the team HHi derived from the final time T L and adjusted final time A L , both as defined here above, distributions of the 22 teams having competed on the Tour of France 2022 and for the 18 teams having competed on the Tour of Oman 2023 are found in Figure 9 and in Figure 10, respectively. The corresponding R 2 is reported. The corresponding displays for the HHi-rank distributions derived from the final place distributions, P L and adjusted final place B L , as defined here above, for the 22 teams having competed on the Tour de France in 2022 and for the 18 teams having competed on the Tour of Oman in 2023 are found in Figure 11 and in Figure 12, respectively. The best-fit curve to an empirical cubic function is given with the corresponding R 2 . 
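The cubic rank-value fits and the quoted coefficients of determination (R2 >= 0.985) can be reproduced with a plain least-squares polynomial fit. This is a generic sketch using numpy.polyfit, not the authors' actual fitting code, and the team values below are fabricated for illustration.

```python
import numpy as np

def cubic_fit(values):
    """Fit value-versus-rank with a cubic polynomial; return (coefficients, R^2)."""
    y = np.sort(np.asarray(values, dtype=float))   # ranked in ascending order
    r = np.arange(1, len(y) + 1)                   # rank; only integer x values are meaningful
    coeffs = np.polyfit(r, y, deg=3)
    fitted = np.polyval(coeffs, r)
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot

# 22 made-up "final team times" (in seconds), roughly increasing with rank
rng = np.random.default_rng(1)
times = 290000 + np.cumsum(np.r_[0, rng.integers(200, 2500, 21)])
print(cubic_fit(times)[1])   # coefficient of determination of the cubic fit
```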
These smooth variations are markedly different from classical rank-size laws analyses based on y r = a r −α [28]. However, one could admit that other functions with a small number of parameters could be used in appropriately managing the xand y-axes scales and range. Using the two-parameter (κ, χ) form would be an example of this. On a semi-log plot, Equation (10) with χ ≤ 0, gives a flat N-shape "noid" function near its inflection point, allowing for various convex and concave data display shapes. Some rewriting [28,29], defining u = r/(N + 1), leads to which, in the case of φ > 0 and ψ < 0, is the Feller-Pareto function [30], which is associated with self-organized processes [31,32]. Exploring these features for different scenarios involving the number of stages, teams, and riders would be an interesting avenue for future investigation. It is clear that such races exhibit both endogenous and exogenous self-organization dynamics. Finally, a deduction can stem from observing different hierarchies through the various indicators: one can imagine various ways to motivate teams through financial rewards, or media publicity, based on the above indicators. Conclusions In this paper, the author proposes the use of the intrinsic entropy (S) as an indicator of "teams sporting value" or "performance" and the Herfindahl-Hirschman index (HHi) as a measure of "teams competitive balance" in the context of professional cyclist multi-stage races. The motivation for this study stems from the observation that there are relatively few papers linking the concept of entropy to sport competition results, with only a few notable exceptions. This is surprising considering that sport results can be easily transformed into probabilities, which have implications for forecasting and betting purposes. However, such analysis requires extensive computer work, including the downloading of appropriate results and scaling them according to the number of teams, number of stages, stage difficulties, stage lengths, and other relevant factors. Moreover, even though many papers have been concerned with the concept of competitive balance in sports, recall the review in [25], the present study differs from HHi-based theoretical, empirical, or simulation studies, as in the very recent [26]. Most of these papers take into account different leagues' competitions, in different season lengths with different competitions rounds. Some relation can be tied to the present paper studying two different lengths and difficulties of multi-stages races. However, there are major differences with respect to other works on HHi in sport stem in the type of competitions which are examined. The papers in the literature mainly look at duel competitions. This is very different from professional cyclist races which involve many teams in a single event. As a second contribution aimed at objectively measuring team value and team performance, a logical constraint is introduced for the classical measures based on the time or place of riders in a team. It is mandated that these measures be solely based on the riders who finish the multi-stage race. This constraint leads to the introduction of new indicators, which are compared to the classical ones. The classical measures aggregate values from various riders in different stages but fail to capture the crucial contributions of the best race-finishing team members. In the present study, we utilize the 2022 Tour de France and the 2023 Tour of Oman as numerical illustrations and for discussion purposes. 
The former is a renowned long race featuring top professional teams, while the latter is a more modest race involving teams and riders who may be less well-known. Despite these differences, both races exhibit similar final characteristics. The numerical values are derived not only from new ranking indices that measure the teams' "final time" and "final place" based on the top three riders in each stage but also from the aggregated final time or final place of the team's best three finishing riders at the end of the race. The distributions of these values reveal distinct team levels through sub-distributions. Furthermore, these distributions are reasonably well-fitted by cubic (empirical) functions, reminiscent of the Feller-Pareto distribution, which suggests the presence of constrained self-organizational processes. A conclusive final point pertains to the empirical findings: the intrinsic Shannon entropy (S) appears to be a bona fide indicator of "teams sporting value" (or "performance"), beyond its traditional role in measuring disorder. Furthermore, the analysis reveals an unexpected yet understandable phenomenon of "clustering of teams" through data observation. On the other hand, the Herfindahl-Hirschman index (HHi) provides a clear indication of "teams competitive balance" processes. Overall, these empirical findings highlight the effectiveness of these indicators in capturing important aspects of team performance and competitive dynamics. Acknowledgments: Comments by reviewers and editors have much forged the structure and content of this paper. Conflicts of Interest: The author declares no conflict of interest.
2023-06-21T01:16:36.969Z
2023-06-01T00:00:00.000
{ "year": 2023, "sha1": "3527f7c9532344e18174b21de2748181442b7dc2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1099-4300/25/6/955/pdf?version=1687173086", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "20b4e229c207daf776a924f311a03fbeeeb1055d", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Computer Science", "Medicine", "Physics" ] }
8516443
pes2o/s2orc
v3-fos-license
Carboxyl-Terminal Residues N478 and V479 Required for the Cytolytic Activity of Listeriolysin O Play a Critical Role in Listeria monocytogenes Pathogenicity Listeria monocytogenes is a facultative intracellular pathogen that secretes the cytolysin listeriolysin O (LLO), which enables the bacteria to cross the phagosomal membrane. L. monocytogenes regulates LLO activity in the phagosome and minimizes its activity in the host cytosol. Mutants that fail to compartmentalize LLO activity are cytotoxic and have attenuated virulence. Here, we showed that residues N478 and V479 of LLO are required for LLO hemolytic activity and bacterial virulence. A single N478A mutation (LLON478A) significantly increased the hemolytic activity of LLO at a neutral pH, while no difference was observed at the optimum acidic pH, compared with wild-type LLO. Conversely, the mutant LLOV479A exhibited lower hemolytic activity at the acidic pH, but not at the neutral pH. The double mutant LLON478AV479A showed a greater decrease in hemolytic activity at both the acidic and neutral pHs. Interestingly, strains producing LLON478A or LLOV479A lysed erythrocytes similarly to the wild-type strain. Surprisingly, bacteria-secreting LLON478AV479A had barely detectable hemolytic activity, but exhibited host cell cytotoxicity, escaped from the phagosome, grew intracellularly, and spread cell-to-cell with the same efficiency as the wild-type strain, but were highly attenuated in virulence in mice. These data demonstrate that these two residues are required for LLO hemolytic activity and pathogenicity in mice, but not for escape from the phagosome and cell-to-cell spreading. The finding that the nearly non-hemolytic LLON478AV479A mutant grew intracellularly indicates that mutagenesis of a virulence determinant is a novel approach for the development of live vaccine strains. inTrODUcTiOn The Gram-positive, facultative intracellular bacterium Listeria monocytogenes is the causative agent of listeriosis, a severe foodborne infection with a high mortality rate (1). This pathogen is ubiquitous, and it has been isolated from humans and animals, as well as from raw and ready-to-eat foods (2,3), and it is capable of invading a wide variety of eukaryotic cells, including endothelial cells and macrophages (4,5). Each step of a successful infection established by L. monocytogenes is highly dependent upon the production of virulence-associated factors (1,6,7). Among these virulence factors, listeriolysin O (LLO, encoded by the hly gene) plays a central role in the escape of bacteria from the phagosomal compartment, and it is also involved in cell-to cell spreading (8), thus making LLO an essential determinant of L. monocytogenes pathogenesis. LLO belongs to the family of cholesterol-dependent cytolysins (CDCs) that are secreted by many pathogenic Gram-positive bacteria (9), and CDCs are the largest family among the bacterial pore-forming toxins (10,11). Other members include more than 20 pore-forming toxins produced by different bacterial species, like anthrolysin O, streptolysin O, perfringolysin O (PFO), and pneumolysin. All CDCs are secreted as soluble monomers by their cognate bacteria and are characterized by their ability to bind to the cholesterol of host membranes and form large pores (8). The most heterogeneous and better studied region of CDCs is the amino (N)-terminal sequence, which harbors distinct functions for some of the family members (11). L. 
monocytogenes LLO is composed of 529 residues and possesses at its N-terminus a 25-residue-long typical signal peptide. To maintain its intracellular niche, L. monocytogenes must restrict the pore-forming activity of LLO to the phagosomal compartment and prevent perforation of the plasma membrane. Correct compartmentalization of LLO activity requires a 26-amino-acid sequence located in the extreme N-terminus of the protein, which resembles a eukaryotic PEST-like sequence that is rich in the amino acids proline, glutamate, serine, and threonine (12,13). Mutants lacking the PEST-like region exhibit higher intracellular LLO levels and increased cytotoxicity, and, therefore, result in a higher permeability of the host plasma membrane, which consequently decreases the virulence of the bacteria because they are no longer able to evade host extracellular defenses (4,13,14). Recently, Koster and colleagues determined the crystal structure of natively produced L. monocytogenes LLO, which showed that the N-terminal PEST-like sequence forms a lefthanded polyproline type II helix that is involved in intra-and intermolecular interactions (11). Among the pore-forming toxin family members, LLO is the only cytolysin that is made by an intracellular pathogen, and it has a pronounced acidic pH optimum (pH 5.5) (15), which is attributed to the sophisticated regulation of its activity via a pH sensor consisting of the three acidic residues E247, D208, and D320 (the "acidic triad") (16). However, PFO does not have a pronounced acidic pH optimum, but rather is active at both acidic and neutral pHs. Moreover, the high cytotoxicity of secreted PFO that strongly permeabilizes the host cell is caused in part by the lack of a PEST-like sequence, which targets LLO for phosphorylation and degradation in the cytosol (12). As a result, L. monocytogenes expressing PFO in place of LLO is capable of escaping from a host vacuole in vitro, but is unable to grow intracellularly and is non-virulent (17). Thus, LLO is unique, not in its ability to mediate vacuolar escape, but in its lack of host-cell toxicity. Interestingly, the level of LLO hemolytic activity does not correlate with the efficiency of Listeria escape from the vacuole within host cells. A previous study employed an intracellular genetic selection to isolate mutants in PFO that supported the intracellular growth of L. monocytogenes, and it identified several PFO mutants containing a single amino acid change that had low or undetectable hemolytic activity. Nevertheless, these non-hemolytic mutants were still capable of escaping from the phagocytic vacuole of J774 macrophages, albeit less efficiently than the wild-type strain (18). In comparison to the thoroughly studied N-terminus of CDCs, little is known about the carboxyl (C)-terminus of this protein family. So far as we know, all CDCs contain a highly conserved undecapeptide (ECTGLAWEWWR) at their C-terminus, which was originally thought to be critical for cholesterol-mediated membrane recognition, as mutations in it abolished pore formation (19). However, the mechanistic contribution of this domain is unclear. Recently, it was demonstrated that the undecapeptide was not responsible for cholesterol binding. Instead, a threonineleucine pair in the C-terminal part of the protein was important (20). In fact, the conserved undecapeptide was shown to be a key structural element that allows the correct conformation of the cholesterol-binding motif (21). 
In the present study, we used site-directed mutagenesis to identify a double-residue mutant (LLON478AV479A) close to the undecapeptide region of LLO that rendered LLO almost inactive, yet still mediated vacuolar escape, intracellular growth, and cell-to-cell spreading with an efficiency close to that of the wild-type strain, even though it was completely non-virulent in mice. These results demonstrate that these two residues are active sites that are required for LLO hemolytic activity and, therefore, are essential for Listeria infections of mice. Here, we addressed the role of this region in the biological activities of LLO and concluded that the virulence, but not the intracellular fate, of L. monocytogenes directly correlated with LLO hemolytic activity. Furthermore, the attenuated virulence of the LLON478AV479A mutant suggests that it has great potential as a live vaccine vehicle. RESULTS Residues N478 and V479 within LLO Are Required for Hemolytic Activity We previously generated an L. monocytogenes mutant strain for intracellular antigen presentation containing a 12-amino-acid in-frame deletion (472-GNARNINVYAKE-483) at the C-terminus of LLO that could form an LPXTG motif (Figure 1A), a characteristic C-terminal sorting signal known to direct covalent anchoring to the peptidoglycan of Gram-positive bacteria. Surprisingly, its ability to lyse erythrocytes was completely impaired, suggesting that the deleted 12 residues might be required for LLO cytolytic activity, which prompted us to further investigate which single amino acid among these 12 residues plays a decisive role in modulating the LLO activity. [Figure 1 legend, in part: Comparison of the hemolytic activity of the LLO mutants relative to wild-type LLO; (E,F) hemolytic activity of the identified LLO mutants, LLON478A, LLOV479A, and LLON478AV479A, at various concentrations (0-4 ng/µL) at pH 5.5 (E) and 7.4 (F); erythrocytes incubated with 1% Triton X-100 or phosphate-buffered saline (PBS) served to determine the maximum (100%) and minimum (0%) hemolytic activity, respectively; data in C, D, E, and F are expressed as means ± SDs of three independent experiments.] Using site-directed mutagenesis, we generated a series of different combinations of double and triple mutant LLO proteins, which were then expressed as C-terminally histidine-tagged recombinant proteins in Escherichia coli and purified to homogeneity by nickel-affinity chromatography to analyze their hemolytic activity (Figure 1B). As expected, we found that among all the LLO mutants, only the mutant proteins LLON478AV479AY480A and LLON478AV479A completely lost their hemolytic activity, while the others lysed erythrocytes as efficiently as wild-type LLO (Figure 1C), suggesting that residues N478 and V479 are critical for LLO activity. Based on this observation, we generated two single-amino-acid mutants (LLON478A and LLOV479A) to determine which residue was essential for controlling the cytolytic activity of LLO. Strikingly, the mutation of either N478A or V479A had little effect on the hemolytic activity of LLO at a concentration of 1 ng/µL, which was sufficient for wild-type LLO to completely perforate erythrocyte membranes and led to full hemolytic activity (Figure 1D). Previous studies established that pore formation by LLO is pH sensitive and concentration dependent at the host temperature (37°C), with LLO being more active at an acidic pH (16). 
Therefore, the N478A, V479A, and N478AV479A mutants at various concentrations were further tested for hemolysis at pH 5.5 and 7.4. As shown, the LLON478A mutant had slightly higher LLO hemolytic activity at a neutral pH, while no difference in activity was observed at the acidic pH optimum of this cytolysin, compared with wild-type LLO (Figures 1E,F). Conversely, the mutant LLOV479A exhibited lower hemolytic activity at the acidic pH, but not at the neutral pH. In addition, the LLON478AV479A mutant showed a 100% decrease in hemolytic activity compared with wild-type LLO at both the acidic and neutral pHs at a low concentration (less than or equal to 1 ng/µL) (Figures 1E,F). However, to our surprise, the native hemolytic activity of LLON478AV479A was fully restored by increasing its concentration to 2 and 4 ng/µL at pH 5.5 and 7.4, respectively (Figures 1E,F). LLON478AV479A can be considered a non-hemolytic LLO mutant mainly for two reasons. One is the fact that wild-type LLO exhibits very efficient hemolytic activity in vitro and results in complete erythrocyte lysis at pH 5.5 at a very low concentration of approximately 0.1 ng/µL (~0.25 ng/µL in our study) (11). Another reason is that a similar mutation (LLOE262K) rendered LLO virtually inactive at low concentrations, but relatively active at a higher concentration of 1 ng/µL (11). Moreover, LLO and its active mutants have a pronounced acidic pH optimum and remain relatively inactive at neutral pH. Collectively, these results showed clearly that residues N478 and V479 are required for LLO hemolytic activity. L. monocytogenes Expressing LLON478AV479A Lacks Hemolytic Activity but Is Cytotoxic to Host Cell Membranes Having established that the LLON478AV479A mutation significantly impaired the activity of purified LLO, we further complemented the Δhly mutant strain with wild-type LLO, LLON478A, LLOV479A, or LLON478AV479A under the natural hly promoter using the Listeria integrative plasmid, pIMK2. As shown by western blotting (Figure 2A), the four resulting mutant strains, CΔhly, CΔhlyN478A, CΔhlyV479A, and CΔhlyN478AV479A, were capable of expressing and secreting LLO or its mutant forms in comparable amounts to wild-type LLO in the culture supernatant, suggesting that these amino acid substitutions did not affect LLO synthesis and secretion. Moreover, we found that these mutations also had no effect on bacterial growth in vitro (Figure 2A). Furthermore, the hemolytic activities recorded in the supernatants of the mutants CΔhly, CΔhlyN478A, and CΔhlyV479A were comparable to that of the wild-type EGD-e strain, whereas the CΔhlyN478AV479A mutant did not exhibit any detectable hemolytic activity, similar to the Δhly mutant (Figure 2B). Consistent with the hemolytic data obtained from the recombinant LLO variants, we conclude that L. monocytogenes expressing LLON478AV479A is indeed non-hemolytic. [Figure 3 legend, in part: The L. monocytogenes wild-type EGD-e and Δhly strains, and the complemented strains CΔhly, CΔhlyN478A, CΔhlyV479A, and CΔhlyN478AV479A, were inoculated intraperitoneally into ICR mice at 5 × 10^6 CFU; animals were euthanized 24 and 48 h after infection, the liver and spleen were recovered and homogenized, and the homogenates were serially diluted and plated on brain-heart infusion (BHI) agar; the numbers of bacteria colonizing the liver and spleen are expressed as means ± SDs of the log10 CFU per organ for each group.] 
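The percent hemolysis values reported above are normalized to the Triton X-100 (100%) and PBS (0%) controls. The following is a minimal sketch of that normalization, assuming hypothetical absorbance readings at 550 nm; the function, variable names, and values are illustrative and are not taken from the study.

def percent_hemolysis(a550_sample, a550_pbs, a550_triton):
    # Normalize an A550 reading to the PBS (0%) and Triton X-100 (100%) controls.
    return 100.0 * (a550_sample - a550_pbs) / (a550_triton - a550_pbs)

# Hypothetical readings for a dilution series of one LLO variant (ng/uL -> A550).
readings = {0.25: 0.12, 0.5: 0.35, 1.0: 0.78, 2.0: 1.10, 4.0: 1.12}
a550_pbs, a550_triton = 0.08, 1.15
for conc, a550 in sorted(readings.items()):
    print(f"{conc} ng/uL: {percent_hemolysis(a550, a550_pbs, a550_triton):.1f}% hemolysis")

Plotting such normalized values against concentration for each pH gives curves of the kind summarized in Figures 1E,F.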
To monitor directly the cytotoxicity of the different LLO constructs, we detected the release of a host cytosolic enzyme, lactate dehydrogenase (LDH), into the tissue culture medium from infected J774 macrophages. At early timepoints during the infection (2 and 4 h) with any of these strains, very little detectable LDH was released either in the presence or absence of gentamicin (Figures 2C,D). After 6 h of infection, the amount of released LDH from all the infected cells, except for the avirulent Δhly strain, increased significantly, especially in the absence of gentamicin ( Figure 2C). However, to our surprise, the amount of LDH released from macrophages infected with the non-hemolytic CΔhlyN478AV479A strain after 6 h of infection was comparable to those of the hemolytic strains, including the wild-type EGD-e, CΔhly, CΔhlyN478A, and CΔhlyV479A strains (Figure 2C), indicating that L. monocytogenes expressing LLON478AV479A lacks hemolytic activity but exhibits normal cytotoxicity to host cell membranes. As expected, when J774 cells were incubated in the constant presence of 50 µg/mL gentamicin, much less LDH was released during a 6-h infection by any strain compared with those in the absence of gentamicin, which was presumably due to the fact that permeabilization of the cells allowed the influx of gentamicin, which then killed the intracellular bacteria ( Figure 2D) and prevented further permeabilization and LDH release. The n478V479 Mutation of llO reduces Virulence in Mice The virulence of the mutant strains was evaluated in a murine listeriosis model. ICR mice were inoculated intraperitoneally with ~10 6 bacteria, and their survival was monitored for up to 7 days after infection. To our surprise, all the mice infected with the bacteria synthesizing LLON478AV479A survived, similar to those infected with the avirulent Δhly strain (Figure 3A). In contrast, infection with the same number of the complemented strain synthesizing wild-type LLO led to 60% mortality (P < 0.001), which was comparable to that of the parental strain that exhibited 70% mortality ( Figure 3A). Interestingly, the other two complemented strains expressing LLON478A and LLOV479A resulted in 40 and 60% mortality, respectively, albeit with a relatively low lethal efficiency ( Figure 3A). Moreover, the number of colony-forming units (CFU) recovered from the spleens and livers of infected mice after 24 and 48 h of infection was significantly lower (~2-3 orders of magnitude) for the CΔhlyN478AV479A mutant compared with the strain expressing wild-type LLO (Figure 3B), indicating that the CΔhlyN478AV479A mutant was severely attenuated for virulence, and that mice infected with this mutant exhibited significantly lower bacterial burdens compared with the mice infected with the wild-type strain. As expected, no detectable bacteria were recovered from the organs of mice infected with the avirulent Δhly strain, and the virulence of this mutant was fully restored by complementing it with wild-type LLO, LLON478A, or LLOV479A. Taken together, these results establish a critical role for these two residues of LLO in the virulence of L. monocytogenes. The llO n478V479 Mutation Does not affect the efficiency of intracellular growth in Macrophages We investigated the ability of wild-type L. monocytogenes and the LLO mutants to grow intracellularly in murine-derived macrophages J774 and RAW264.7 cells. 
In this assay, adding the antibiotic gentamicin to the culture medium kills extracellular bacteria, but has no measurable effect on the growth of intracellular bacteria. During the infection process, intracellular proliferation of the LLO-deleted avirulent Δhly strain was almost completely impaired compared with the wild-type strain ( Figure 4A). Such compromised bacterial cell proliferation of the Δhly strain was finely restored in the complemented strains CΔhly, CΔhlyN478A, or CΔhlyV479A, which secrete the same amount of active LLO as the wild-type EGD-e strain ( Figure 4A). Unexpectedly, the Δhly bacteria complemented with an inactive LLO (CΔhlyN478AV479A) also grew well within J774 macrophages, with an efficiency that was nearly identical to that of the wild-type, CΔhly, CΔhlyN478A, and CΔhlyV479A strains ( Figure 4A). To further confirm these data, we conducted this experiment in another murine-derived macrophage cell line, RAW264.7, and the results were completely consistent with those obtained in J774 macrophages ( Figure 4B). Therefore, the fact that the mutant strain CΔhlyN478AV479A, which had barely detectable hemolytic activity, grew intracellularly with the same efficiency as strains with wild-type levels of activity suggests that the LLO N478V479 mutation does not affect the efficiency of intracellular growth in macrophages, and, more interestingly, that the level of hemolytic activity does not correlate with the efficiency of proliferation within macrophages. L. monocytogenes expressing llO n487V479 is not Defective in cell-to-cell spreading Listeriolysin O plays an essential role in the escape of L. monocytogenes from both the primary phagosome and the secondary double-membrane-bound vesicle formed during cell-to-cell spreading (22). The data above did not directly address whether the LLON487V479 mutation affects bacterial cell-to-cell spreading. Therefore, we examined the capability of these mutants to spread from cell to cell by measuring the diameter of plaques formed in L929 fibroblast monolayers after 3 days of infection in the presence of a low concentration of gentamicin. As indicated in (Figure 5A), no visible plaques were found from the cells infected with the avirulent Δhly strain. Moreover, the compromised cell-to-cell spreading of the Δhly strain was restored in the hemolytic strains CΔhly, CΔhlyN478A, and CΔhlyV479A, and also in the non-hemolytic strain CΔhlyN478AV479A ( Figure 5A). Interestingly, these complemented strains, which exhibited the same spreading efficiency, showed a slight defect in terms of their plaque diameters, compared with the wild-type EGD-e strain, indicating that their compromised spreading ability was not fully complemented. Thus, the results suggest that L. monocytogenes expressing LLON487V479 is not defective in cell-to-cell spreading. This was further confirmed in an actin-tail formation assay where the LLO mutant strains were able to associate with F-actin and formed long actin tails with an efficiency that was comparable to that of the wild-type EGD-e strain and the complemented strain CΔhly in human epithelial Caco-2 cells (Figure 5B), while deletion of hly completely compromised the capability to spread from cell to cell, as expected ( Figure 5B). Therefore, we conclude that LLON487V479 is fully capable of mediating cell-to-cell spreading and escape from the double-membraned vesicle. DiscUssiOn The pore-forming protein LLO is a primary determinant of L. 
monocytogenes pathogenesis that is important for bacterial vacuolar escape into the host cytosol. Tight restriction of its activity in the internalization vacuole appears to be important for infection. Undoubtedly, uncontrolled expression of LLO could lead to perforation of organelles and the host plasma membrane from the inside of the cell, causing cell death and destruction of the L. monocytogenes intracellular niche, thereby exposing the bacteria to the host immune system (8,14). Various mechanisms responsible for tightly restricting the activity of LLO within the host cytoplasm have been investigated, such as its ubiquitylation and proteasomal degradation (22). The best studied strategy, however, is its sensitivity to pH and temperature. Replacement of LLO by pH-insensitive CDCs such as PFO from Clostridium perfringens or anthrolysin O from Bacillus anthracis allows phagosomal escape of L. monocytogenes, but leads to decreased infection (17,23). In fact, the cytolytic activity of LLO does not correlate strictly with the capability of Listeria to escape from the vacuole. In this study, for the first time, we identified a novel mutant, LLON478AV479A, which has barely detectable hemolytic activity, but which escaped, grew intracellularly, and spread cell-to-cell with the same efficiency as a strain secreting wildtype LLO. However, like bacteria lacking LLO, L. monocytogenes strains synthesizing LLON478AV479A were completely attenuated in virulence in mice. Thus, the results strongly suggest that these two residues at the C-terminus of LLO are key sites that are required for the hemolytic activity of LLO and essential for bacterial pathogenicity in mice, but they are not necessary for L. monocytogenes intracellular survival and cell-to-cell spreading. The stability of LLO in the host cell cytosol is impacted by proteolytic degradation mechanisms that affect the ability of L. monocytogenes to cause infection. While sufficient levels of LLO are required to infect host cells and promote cell-to-cell spreading, abnormally high levels of LLO are linked to cellular toxicity and clearance of extracellular bacteria by innate immune mechanisms (24). Therefore, we firmly believe that the hemolytic activity and cytotoxicity of LLO must be limited to the lowest level to establish a successful infection within host cells. To achieve this, L. monocytogenes has evolved multiple mechanisms to finely modulate the cytolytic activity of LLO. L. monocytogenes expressing PFO was significantly impaired in its ability to grow intracellularly because the secreted PFO permeabilizes the host cell and allow gentamicin into the cell to kill intracellular bacteria. However, bacteria expressing mutated PFOs (T490I, G461D, and R468K), which have very low or undetectable hemolytic activity, were still capable of escaping from the phagocytic vacuole of macrophages, while their intracellular growth was slightly slower than that of a wild-type strain (17,18). In contrast, previous studies have shown that deletion of the N-terminal PEST-like sequence does not affect LLO hemolytic activity, but it does cause a 1,000-fold decrease in intracellular growth and attenuate bacterial virulence, which is mainly ascribed to the fact that infection of host cells with LLOΔPEST-producing bacteria resulted in 90% of the maximal LDH release, compared with only 2% for wild-type bacteria. 
More importantly, bacteria synthesizing the PFO protein fused with the PEST motif can replicate intracellularly and are much less toxic to their host macrophages than bacteria producing the PFO protein without the PEST sequence, indicating that this motif restricts LLO activity to the host cell vacuole, thereby preserving the intracellular niche of L. monocytogenes (12). However, one puzzling observation showed that an LLOL461T mutant, which displays high activity at neutral pH, still requires acidification of the vacuole to promote Listeria escape, suggesting the involvement of other factors in conjunction with LLO (25). In the present study, we demonstrated that the hemolytic activity of L. monocytogenes expressing a novel mutated LLO (N478AV479A) was completely impaired, but the bacteria were still able to grow intracellularly with a comparable efficiency to that of bacteria expressing wild-type LLO. Together with previously published data, these findings firmly support the view that L. monocytogenes has evolved multiple sophisticated mechanisms to minimize harm to host cells by regulating the activity of LLO. To establish a successful infection and achieve maximal virulence, this pathogen must maintain an equilibrium between producing LLO that is sufficiently cytolytic to escape from the vacuole, yet not overly toxic to infected host cells (25). LLO also has functions that are not linked to its pore-forming activity. Importantly, LLO activity is not unrestricted. Otherwise, host cells would not be able to survive infection by hundreds of bacteria, which is observed routinely. Instead, LLO activity is highly regulated by both L. monocytogenes and host cellular processes. The LLON478AV479A mutant was capable of mediating vacuolar escape, growing intracellularly, and spreading cell-to-cell with an efficiency close to that of the wild-type strain. Moreover, the cytotoxic level of this mutant was very similar to that of the wild-type strain, indicating that this mutant form of LLO was toxic to the host cell at a normal level and, thus, was able to grow intracellularly both in epithelial cells and murine macrophages. Based on these results, we hypothesized that the virulence of the LLON478AV479A mutant would be comparable to that of its parent strain expressing wild-type LLO. Surprisingly, this mutant strain was completely non-virulent in mice, as was same the strain lacking LLO. A possible explanation for this result is that while the absence of the pore-forming ability of LLO may be an important protective mechanism, LLO may also have other properties that govern its function within a vacuole during host infection. Importantly, L. monocytogenes is able to infect both phagocytic and nonphagocytic cells, which results in potent innate and adaptive immune responses in an infected host that are required for the clearance of the organism (26). This ability to efficiently induce diverse, complex immune responses using multiple, simultaneous, and integrated mechanisms of action underlies the development of this bacterium as an antigen delivery vector to induce protective cellular immunity against cancer or viral infection (27). L. monocytogenes infection has been long known to also induce type I interferons (IFNs), IFN-α and IFN-β, which are usually associated with antiviral immune responses and essential for the immune system to clear many viral pathogens. In contrast to IFN-γ, type I IFNs are beneficial to Listeria infection (28)(29)(30). Induction of type I IFNs by L. 
monocytogenes requires escape of intracellular bacteria into the cytosol (31). Therefore, the highly attenuated L. monocytogenes mutants we identified were still capable of escaping into the cytosol, growing intracellularly, and inducing type I IFNs, potentially making these bacteria a promising tool for protecting against viral infection. However, L. monocytogenes has been shown to harness its ability to deliver foreign antigens efficiently to both the major histocompatibility class I and II presentation machinery and induce robust T-cell responses to Listeria-delivered antigens, thus making it a powerful vaccine vector for tumor immunotherapy. In a therapeutic setting, detoxified LLO with a completely impaired cytolytic activity acts as a potent adjuvant, enabling it to serve as a powerful antigen fusion partner to create antigen-adjuvant proteins (27). More recently, several studies have shown that tumor antigens genetically fused to detoxified LLO exhibit enhanced ubiquitin-proteasome-mediated processing and presentation by antigen-presenting cells for the activation of antigen-specific cytotoxic T lymphocytes, and stimulated the necessary proinflammatory responses for effective antitumor adaptive immune responses (32,33). In the present study, we identified a novel, nontoxic LLO (N478AV479A) that still enables the bacteria to efficiently mediate vacuolar escape and survive intracellularly, while exhibiting attenuated virulence, which could provide a new adjuvant fusion partner with a cognate antigen for tumor immunotherapy. In conclusion, we have shown that residues N478 and V479 are required for the cytolytic activity of LLO and essential for L. monocytogenes pathogenicity in mice, but not for intracellular infection, which will provide new insights that increase our understanding of the current and future development of Listeria-based antigen delivery vectors to induce protective cellular immunity against tumors or infections. Bacterial strains, Plasmids, and culture conditions Listeria monocytogenes EGD-e was used as the wild-type strain. E. coli DH5α was employed for cloning experiments and as the host strain for plasmids pET30a(+) (Merck, Darmstadt, Germany), pIMK2 and pKSV7. E. coli Rosetta (DE3) was used for prokaryotic protein expression. Listeria strains were cultured in brain-heart infusion (BHI) medium (Oxoid, Hampshire, England). E. coli strains were grown at 37°C in Luria-Bertani broth (LB) (Oxoid). Stock solutions of ampicillin (50 mg/mL), erythromycin (50 mg/mL), kanamycin (50 mg/mL), or chloramphenicol (50 mg/mL) were added to media where appropriate. All chemicals were obtained from Sangon Biotech (Shanghai, China), Merck or Sigma-Aldrich (St. Louis, MO, USA) and were of the highest purity available. All primers used in this study are listed in Table S1 in Supplementary Material. L. monocytogenes gene in-Frame Deletion The temperature-sensitive pKSV7 shuttle vector was used for creating mutations from L. monocytogenes strain EGD-e background. Genomic DNA was extracted as described previously (34,35). A homologous recombination strategy with SOE-PCR procedure was used for in-frame deletion to construct hly deletion mutant (36). Specifically, the DNA fragments containing homologous arms upstream and downstream of hly were obtained by PCR amplification of EGD-e DNA templates using the SOE primer pairs hly-a/hly-b and hly-c/hly-d (Table S1 in Supplementary Material). 
The obtained fragment was then cloned into the vector pKSV7 and electroporated into the competent EGD-e cells. Transformants were grown at a non-permissive temperature (41°C) in BHI medium containing chloramphenicol to promote chromosomal integration and the recombinants were passaged successively in BHI without antibiotics at a permissive temperature (30°C) to enable plasmid excision and curing (37). The recombinants were identified as chloramphenicol-sensitive colonies, and the mutagenesis was further confirmed by PCR and DNA sequencing. complementation of gene Deletion Mutant To complement the L. monocytogenes Δhly strain, we constructed a complemented strain by using the integrative plasmid pIMK2. The complete open reading frame (ORF) of hly along with its native promoter region was amplified using the primer pairs CΔhly-e/CΔhly-f (Table S1 in Supplementary Material) and cloned into pIMK2 following restriction to cut off the Phelp region with enzymes. The resulting plasmid was then electroporated into L. monocytogenes Δhly strain. Regenerated cells were plated on BHI agar containing kanamycin (50 µg/mL). The complemented strain was designated as CΔhly. Overexpression and Purification of his-Tagged llO Proteins from E. coli Recombinant proteins used in this study were expressed as fusion proteins to the N-terminal His-tag using pET30a(+) as the expression vector (38). E. coli Rosetta (DE3) was used as the expression host. The full-length ORF of the interest gene from EGD-e genome was amplified with the primer pair (Table S1 in Supplementary Material) and then inserted into the pET30a(+) vector, and finally transformed into Rosetta competent cells. E. coli cells harboring the recombinant plasmids were grown in 500 mL LB medium supplemented with 50 µg/mL kanamycin at 37°C until the cultures reached 0.8-1.0 at OD600 nm. Isopropyl β-D-1-thiogalactopyranoside (IPTG) was then added to a final concentration of 0.4 mM to induce expression of the recombinant proteins for additional 4 h under the optimized conditions. The His-tagged fusion proteins were purified using the nickel-chelated affinity column chromatography. Preparation of Polyclonal antibodies against the recombinant Proteins Purified recombinant protein was used to raise polyclonal antibodies in New Zealand white rabbits according to a standard protocol (39). Briefly, rabbits were initially immunized via subcutaneous injection of 500 µg protein with an equal volume of Freund's complete adjuvant (Sigma). After 2 weeks, rabbits were boosted by subcutaneous injection of 250 µg protein each in incomplete Freund's adjuvant (Sigma) three times at 10-day intervals. Rabbits were bled ~10 days after the last injection. site-Directed Mutagenesis Single site-directed mutants (N478A and V479A) and the double site-directed mutant (N478AV479A) of LLO were generated using the original vector template, pET30a-LLO or pIMK2-LLO, and the QuikChange Site-Directed Mutagenesis kit (Agilent Technologies, Palo Alto, CA, USA) with the primer pairs described in Table S1 llO-Mediated hemolytic activity Measurement Measurement of LLO-associated hemolytic activity was done as described previously (40). L. monocytogenes wild-type and mutant strains were grown for 16 h with shaking in BHI broth at 37°C. All cultures were adjusted to OD600 nm of 1.0 before supernatants were collected. Hemolytic activity was measured based on lysis of sheep red blood cells (SRBCs) of secreted LLO from culture supernatants. 
Specifically, culture supernatant (50 µL) was diluted in hemolysis buffer (10 mM PBS, pH 5.5 or 7.4, 150 mM NaCl, 1 mM DTT) in final volumes of 50 µL and equilibrated to 37°C for 10 min. Next, 50 µL PBS-washed intact SRBCs (5%) were added to each sample and incubated at 37°C for 30 min. Samples were centrifuged and supernatants analyzed for hemoglobin absorption at 550 nm. For hemolysis determination of recombinant proteins, purified LLO or its mutant protein (LLON478A, LLOV479A, and LLON478AV479A) expressed in E. coli was serially diluted in hemolysis buffer, then mixed with an equal volume of 5% SRBC and the hemolytic activity determined as described above. The values corresponding to the reciprocal of the dilution of culture supernatant required to lyse 50% of HRBCs were used to compare the hemolytic activities in the different supernatants. Erythrocytes incubated with 1% Triton X-100 or PBS served to determine the maximum (100%) and minimum (0%) hemolytic activity, respectively. cell Fractionation and Protein localization of llO Western blotting was employed to analyze the changes in expression of LLO. Bacterial overnight cultures of L. monocytogenes were diluted into 200 mL fresh BHI broth, and bacteria were grown to the stationary phase. For secreted proteins isolation: the fractionation procedure was described by Lenz and Portnoy (41), with minor modifications. Briefly, the bacteria cells were pelleted by centrifugation at 13,000 g for 20 min at 4°C, and the resulting culture supernatant collected and then filtered through a 0.22 µm polyethersulfone membrane filter (Thermo Fisher Scientific, Lafayette, LA, USA). Trichloroacetic acid (TCA) was added to the supernatant to reach a final concentration of 10% TCA. Proteins were TCA-precipitated on ice overnight and washed with ice-cold acetone. The washed precipitates of supernatant proteins were re-suspended in SDS-PAGE sample buffer (5% SDS, 10% glycerol, and 50 mM Tris-HCl, pH 6.8). Samples were boiled for 6 min and stored at −20°C before electrophoresis. For total cell proteins isolation: the previous method was applied (42). Specifically, the bacterial pellets were re-suspended in 1 mL of extraction solution (2% Triton X-100, 1% SDS, 100 mM NaCl, 10 mM Tris-HCl, 1 mM EDTA, pH 8.0). One gram of glass beads (G8772, Sigma-Aldrich) was added and samples lysed by using the homogenizer Precelly 24 (Bertin, Provence, France) at 6,000 rpm for 30 s with intermittent cooling for 30 s (3 cycles in total) and then centrifuged at 12,000 rpm for 15 min. Supernatant was retained as the whole cell extract. The method for total cell proteins isolation was used as described above. Protein samples were separated through a 12% SDS-PAGE and immunoblotted with α-LLO or α-GAPDH antisera. GAPDH was used as an internal control. Virulence in the Mouse Model The L. monocytogenes wild-type strain EGD-e, mutant strain Δhly, and complemented strains CΔhly, CΔhlyN478A, CΔhlyV479A, and CΔhlyN478AV479A, were tested for recovery in liver and spleen sections of ICR mice (female, 18-22 g, purchased from Zhejiang Academy of Medical Sciences, Hangzhou, China) as previously described (39). The mice (8 per group) were injected intraperitoneally with ~10 6 CFU of each strain. At 24 and 48 h postinfection, mice were sacrificed, and livers and spleens removed and individually homogenized in 10 mM PBS (pH 7.4). Surviving Listeria cells were enumerated by plating serial dilutions of homogenates on BHI agar plates. 
Results were expressed as means ± SD of recovery bacterial number (Log10 CFU) per organ for each group. For animal survival experiments, mice injected intraperitoneally with 1 × 10 7 CFU listeria were monitored for up to 7 days after infection. Survival curves were calculated by using the Kaplan-Meier method and differences in survival were determined by using the Log-rank test. Proliferation in raW264.7 Macrophages Bacteria survival or proliferation murine macrophages RAW264.7 was conducted as previously described (43 and incubated for an additional hour in DMEM containing gentamicin at 50 µg/mL for 30 min to kill extracellular bacteria. Infected cells were lysed by adding 1 mL of ice-cold sterile distilled water. The lysates were 10-fold diluted for enumeration of viable bacteria on BHI agar plates that were considered as the 0 h numbers invading into the cells. For intracellular proliferation, cells were subjected to further incubation for 6, 12, or 18 h in 5% CO2 at 37°C. Viable bacteria were enumerated by serial dilution and colony counting on BHI agar plates. intracellular growth in J774 Macrophages Intracellular growth was performed as described previously for monolayers of J774 macrophages (44). Specifically, overnight cultures of L. monocytogenes strains were washed three times and re-suspended in PBS, and then J774 cells infected with bacteria at an MOI of 0.05. Following 30 min incubation, cells were washed, and extracellular bacteria were killed by adding complete medium containing 50 µg/mL of gentamicin for an additional 30 min incubation. At 2, 6, or 12 h postinfection, cells were washed with PBS and finally lysed in ice-cold sterile distilled water. The number of viable intracellular L. monocytogenes cells was calculated by serial dilution and colony counting on BHI agar plates as mentioned above. Plaque assay The plaque assay was carried out by conventional methods (45,46). Briefly, mouse L929 fibroblast cell monolayers were maintained in high-glucose DMEM medium (Thermo Fisher Scientific) plus FBS (Hyclone) and 2 mM l-glutamine. Cells were plated at 1 × 10 6 cells per well in a six-well dish and infected at an MOI of 1:50 with L. monocytogenes at 37°C with 5% CO2 for 1 h. Extracellular bacteria were killed with 100 µg/ mL gentamicin, and the cells washed three times with 10 mM PBS (pH 7.4) and then overlaid with 3 mL of medium plus 0.7% agarose and 10 µg/mL gentamicin. Following a 72-h incubation at 37°C, cells were fixed with paraformaldehyde (4% in PBS for 20 min) and stained with crystal violet. The diameter of plaques was measured by Adobe Photoshop software for each strain. The plaque size of wild-type strain EGD-e was set as 100% and data are shown as means ± SDs. Phagosomal escape assay in caco-2 cells Phagosomal escape assay was conducted according to the previous work (39). Specifically, human intestinal epithelial Caco-2 cells were infected at MOI of 10:1 at 37°C with 5% CO2 for 1 h. Extracellular bacteria were then killed with 50 µg/mL gentamicin for 1 h and incubated for an additional 6 h. Cells were washed gently with PBS (10 mM, pH 7.4), fixed with 4% paraformaldehyde and then permeabilized with 0.5% Triton X-100. The bacterial cells were stained with polyclonal antibodies to L. monocytogenes for 1 h at 37°C, washed twice with PBS, and probed with Alexa Fluor 488-conjugated donkey anti-rabbit antibody (Santa Cruz) for 1 h at 37°C. F-actin was then stained with phalloidin-Alexa Fluor 568 (Thermo Fisher Scientific). 
DAPI (4′,6-diamidino-2-phenylindole) (Thermo Fisher Scientific) was used to stain the nuclei. Actin tails recruited by the bacteria were visualized under a ZEISS LSM510 confocal microscope (Zeiss Germany, Oberkochen, Germany). cytotoxicity Detection Cytotoxicity was detected based on LDH release from J774 macrophages following bacterial infection by using the CytoTox 96 non-radioactive cytotoxicity assay kit according to the manufacturer's instructions (Promega, WI, USA), as previously described by Decatur and Portnoy (12). Overnight cultured L. monocytogenes was deposited onto J774 cells at a MOI of 10 at 37°C with 5% CO2 for 30 min, after which the culture medium with or without 50 µg/mL gentamicin was added. To determine maximum LDH release, 100 µL of lysis buffer was added to triplicate infected wells 45 min prior to LDH measurement. At the indicated infection times (2, 4, and 6 h), cells were centrifuged at 250 × g for 5 min, and the supernatant was removed and used for the LDH assay.
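The LDH readout described above is typically converted to a percent cytotoxicity relative to uninfected (spontaneous release) and fully lysed (maximum release) control wells. The sketch below shows one common way to perform that calculation; the variable names and absorbance values are assumptions for illustration only and do not come from the study or the kit documentation.

def percent_cytotoxicity(a_infected, a_spontaneous, a_maximum):
    # Spontaneous release: uninfected cells; maximum release: cells treated with lysis buffer.
    return 100.0 * (a_infected - a_spontaneous) / (a_maximum - a_spontaneous)

# Hypothetical triplicate readings at 6 h post-infection.
infected = [0.62, 0.58, 0.65]
spontaneous, maximum = 0.15, 1.20
values = [percent_cytotoxicity(a, spontaneous, maximum) for a in infected]
print(f"cytotoxicity: {sum(values) / len(values):.1f}% (n={len(values)})")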
2017-11-01T17:05:04.155Z
2017-11-01T00:00:00.000
{ "year": 2017, "sha1": "8ffb589e309af8d5e37e1e285dbc2a824f508912", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2017.01439/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8ffb589e309af8d5e37e1e285dbc2a824f508912", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
240008912
pes2o/s2orc
v3-fos-license
Genome-Wide Transcriptomic and Proteomic Exploration of Molecular Regulations in Quinoa Responses to Ethylene and Salt Stress Quinoa (Chenopodium quinoa Willd.), originated from the Andean region of South America, shows more significant salt tolerance than other crops. To reveal how the plant hormone ethylene is involved in the quinoa responses to salt stress, 4-week-old quinoa seedlings of ‘NL-6′ treated with water, sodium chloride (NaCl), and NaCl with ethylene precursor 1-aminocyclopropane-1-carboxylic acid (ACC) were collected and analyzed by transcriptional sequencing and tandem mass tag-based (TMT) quantitative proteomics. A total of 9672 proteins and 60,602 genes was identified. Among them, the genes encoding glutathione S-transferase (GST), peroxidase (POD), phosphate transporter (PT), glucan endonuclease (GLU), beta-galactosidase (BGAL), cellulose synthase (CES), trichome birefringence-like protein (TBL), glycine-rich cell wall structural protein (GRP), glucosyltransferase (GT), GDSL esterase/lipase (GELP), cytochrome P450 (CYP), and jasmonate-induced protein (JIP) were significantly differentially expressed. Further analysis suggested that the genes may mediate through osmotic adjustment, cell wall organization, reactive oxygen species (ROS) scavenging, and plant hormone signaling to take a part in the regulation of quinoa responses to ethylene and salt stress. Our results provide a strong foundation for exploration of the molecular mechanisms of quinoa responses to ethylene and salt stress. Introduction Quinoa (Chenopodium quinoa Willd.), a dicotyledonous plant in the Chenopodiaceae family, originated from the Andean region of South America and has been cultivated for about 7000 years [1]. Quinoa is an allotetraploid plant (2n = 4X = 36), deriving from the genome fusion of two related parent species in the same genus [2]. Quinoa has five major ecotypes depending on its origin centers, including Highlands originating from Peru and Bolivia; Inter-Andean valleys originating from Bolivia, Colombia, Ecuador, and Peru; Salares originating from Bolivia, Chile and Argentina; Yungas originating from Bolivia; and Lowlands originating from Chile [3]. Quinoa has been considered as a pseudo-cereal because of its grain characteristics [4]. Consumption of seeds is the most common use of quinoa. In recent years, quinoa seeds have been reported to have an exceptional balance between oil, protein, and carbohydrate [4,5]. The absence of gluten in its starch allows the development of specific foods for celiac patients [1]. Quinoa seeds are also good sources of vitamins, oil with high linoleate and linolenate content, natural antioxidants, dietary fiber, and minerals [6]. As a result, consumption of quinoa in human diet leads to lower weight gain, improved lipid profile, decreased blood glucose, and increased antioxidant intake [7,8]. In addition to nutritional value in its seeds, quinoa also has resistance to multiple abiotic stresses, including drought, cold and salinity, which allows quinoa to be cultivated in Transcriptome Sequencing and Data Analysis In this research, three independent biological replicates were used, and at least three whole quinoa seedlings were mixed in each replicate. Total RNA was extracted and purified using poly-T oligo-attached magnetic beads. cDNA was synthesized, and adaptors with hairpin loop structures were ligated to prepare for hybridization. The samples were then clustered on a cBot cluster generation system using TruSeq PE Cluster Kit v3-cBot-HS (Illumia). 
After cluster generation, the library preparations were sequenced on an Illumina Novaseq platform by Novogene Bioinformatics Technology Co. Ltd. (Beijing), and 150 bp paired-end reads were generated. The raw data of FASTQ format were uploaded to the NCBI Sequence Read Archive (SRA), and the SRA accession number is PRJNA726352. The reference genome was downloaded from the website https://www.ncbi.nlm.nih. gov/genome/?term=quinoa (accessed on 30 June 2022), and the paired-end clean reads were aligned to the reference genome using Hisat2 v2.0.5. Fragments per kilobase of transcript sequence per million (FPKM) of each gene were calculated to estimate gene expression levels based on the length of the gene and reads count mapped to the gene. The genes with corrected p-value < 0.05 and absolute fold change ≥2 were considered as significant DEGs. The DEGs were then analyzed by GO enrichment analysis and KEGG enrichment analysis to predict their functions. Protein Extraction, TMT Labeling, and Proteomics Analysis In this research, three independent biological replicates were used for protein analysis, and at least three whole quinoa seedlings were mixed in each replicate. The proteomics analyses were performed by Novogene Bioinformatics Technology Co. Ltd. (Beijing, China). In detail, total proteins were extracted by the cold acetone method, labeled by TMT tags, and fractionated using a C18 column on a Rigol L3000 HPLC system. Shotgun proteomics analyses were then performed using an EASY-nLC TM 1200 UHPLC system (Thermo Fisher, Waltham, MA, USA) coupled with a Q Exactive TM HF-X mass spectrometer (Thermo Fisher) operating in the data-dependent acquisition (DDA) mode. The resulting spectra from each run were searched separately against the 733788-X101SC20124467-Z02-Chenopodium quinoa Willd.-NCBI.fasta database by Proteome Discoverer 2.4 (PD 2.4, Thermo). Peptide spectrum matches (PSMs), with credibility of more than 99%, were identified, and the identified PSMs and proteins were retained and performed with FDR no more than 1.0%. Proteins with fold changes in a comparison >1.2 or <0.83 and unadjusted significance level p < 0.05 were considered as DEPs. The DEPs were then analyzed by GO and KEGG enrichment analyses. The mass spectrometry proteomics data were deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org (accessed on 24 May 2021)) via the iProX partner repository [34] with the dataset identifier PXD026210. Correlation Analysis between Proteomic and Transcriptomic Results The DEGs and the DEPs were separately counted, and the Venn diagrams were plotted according to the counted results. Correlation analysis was performed for the differential multiples of DEGs or DEPs identified in both transcriptomic analysis and proteomic analysis by R (version 3.5.1). Quantitative Real Time PCR (qRT-PCR) Analysis In this research, 12 DEGs were selected randomly for qRT-PCR verification, and the coding sequence (CDS) of these selected 12 DEGs are listed in the Supplementary Material Table S1. CqACTIN was used as the endogenous control. The primers were designed using Primer Premier 5.0 (Premier) and are listed in Supplementary Material Table S2. Total RNA was extracted by the CTAB method and subjected to DNase treatment (Takara, Japan). The first-strand cDNA was synthesized using M-MLV reverse transcriptase (Takara, Japan) with oligo d(T)18 primer. 
The qRT-PCR program contains a preliminary step of 2 min at 50 • C, 10 min at 95 • C, followed by 40 cycles of 95 • C for 60 s, 56 • C for 20 s, and 72 • C for 15 s. Three independent biological replicates and three technical replicates were used. The primer efficiency was tested by generating standard curves, and the data were analyzed by the comparative ∆∆CT method. Physiological Indexes Detection In this research, 4-week-old quinoa seedlings of 'NL-6 treated with water, 300 mM NaCl, or 300 mM NaCl with 100 µM ACC for 2-3 d were used for examination of physiological indexes. The nitrogen content and relative level of total chlorophyll were measured by PJ-4N plant nutrition analyzer, and the relative permeability of cell membrane, damage rate of leaves, malondialdehyde (MDA) content, soluble sugar level, and SOD activity were analyzed as previously described [35]. Three independent biological replicates and three technical replicates were used in the experiments. Statistical Analyses Statistical analyses were performed by SAS, and the statistical significance of the difference was evaluated by ANOVA. Means followed by the same letter were not significantly different at the α = 0.05 level. Gene Identification and DEGs Analysis in Transcriptome To investigate ethylene-regulated salt responses in quinoa, the 4-week-old H 2 Or, SALTr, and ACCr samples were used for transcriptomic analysis. The principal component analysis (PCA) showed the differences among different treatments and confirmed the reliability of the sequencing results (Supplementary Material Figure S1A). A total of 60,602 genes were identified, and the genes with corrected p-value < 0.05 and absolute fold change ≥2 were considered as significant DEGs. The DEGs between SALTr and H 2 Or were recognized as the components in salt responses of quinoa. The DEGs between SALTr and ACCr were recognized as functioning in ethylene-regulated salt responses of quinoa, and the DEGs between ACCr and H 2 Or were thought to be involved in ethylene responses or salt responses of quinoa. The DEGs in these three comparisons are presented in the Supplementary Material Excel S1. The heat maps with hierarchical clustering, which show relative expression of the DEGs in these comparisons, are presented in Supplementary Material Figures S2-S4. Protein Identification and DEPs Analysis The H 2 Op, SALTp, and ACCp samples were analyzed by proteomics. The PCA analysis showed the differences among different treatments and confirmed the reliability of the proteomic analysis results (Supplementary Material Figure S1B). A total of 9672 proteins were identified, and the proteins with fold change >1.2 or <0.83 and unadjusted significance level p < 0.05 were considered as DEPs. Similar to the analysis of DEGs, the DEPs between SALTp and H 2 Op were recognized as the components of quinoa salt responses, and the DEPs between SALTp and ACCp were recognized as functioning in ethylene-regulated salt responses of quinoa, and the DEPs between ACCp and H 2 Op were thought to be involved in ethylene responses or salt responses of quinoa. The DEPs in these three comparisons are presented in Supplementary Material Excel S3. The heat maps with hierarchical clustering, which show relative expression of the DEPs in these comparisons, are presented in Supplementary Materials Figures S6-S8. 
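The comparative ∆∆CT calculation used for the qRT-PCR data above can be summarized in a few lines. This is a minimal sketch assuming 100% primer efficiency (the standard 2^-∆∆CT form); the Ct values are placeholders rather than measurements from this study.

def relative_expression(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    # Delta Ct within each sample, then delta-delta Ct between treated and control.
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values for one DEG, with CqACTIN as the endogenous control.
fold = relative_expression(ct_target_treated=22.1, ct_ref_treated=18.0,
                           ct_target_control=24.6, ct_ref_control=18.2)
print(f"relative expression (treated vs. control): {fold:.2f}-fold")

In practice, one such fold change is computed for each biological replicate and time point before averaging, which is consistent with the three biological and three technical replicates described above.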
DEPs Analysis in Ethylene-Regulated Salt Responses In order to study the DEPs in ethylene-regulated salt responses in quinoa, multiple comparisons of DEPs among SALTp-vs-H2Op, SALTp-vs-ACCp, and ACCp-vs-H2Op were conducted, and nine DEPs overlapped in the comparisons and may play roles in ethylene-regulated salt responses of quinoa. Correlation between the Proteomic and Transcriptomic Results Correlation analysis between the transcriptomic data and the proteomic results identified 184 genes/proteins, which were differentially expressed when treated with salt in quinoa. The correlation analysis between the transcriptome and proteome also detected 117, 113, and 69 proteins differentially expressed in the comparisons between SALT and H2O samples, between ACC and H2O samples, and between ACC and SALT samples, respectively, but no expression difference was detected in their transcript levels, suggesting a possible presence of post-transcriptional modification in the proteins (Figure 3, Supplementary Material Excel S5). In contrast, it was found that 3934, 2791, and 804 genes were detected as differentially expressed at the transcript level but not at the protein level in the comparisons between SALT and H2O samples, between ACC and H2O samples, and between ACC and SALT samples, respectively (Figure 3, Supplementary Material Excel S5), suggesting that stress-regulated molecules are more likely altered at the transcript level when challenged. Verification of RNA-seq Results by qRT-PCR In order to verify the results obtained from the quinoa transcriptomic and proteomic analysis in ethylene-regulated salt responses, 12 DEGs were randomly selected, and their relative expression levels were examined in the quinoa seedlings treated with water, NaCl, and NaCl with ACC for 0, 3, 6, 9, 12, 24, and 36 h. The expressions of the reference genes under the different treatments are shown in Supplementary Material Excel S6. The results showed that the expressions of CqNRT2.1 and CqACO1 were increased to a peak after 6 h of salt treatment and then began to decrease, while the expressions of CqCSI, CqPER12, CqFK, and CqPDP were increased to a peak after 12 h of salt treatment and then began to decrease (Figure 4). The expression of CqABCB kept decreasing in SALT and ACC samples (Figure 4). Together, the expressions of these 12 DEGs were obviously affected by salt and ethylene, suggesting that they may play important roles in quinoa responses to ethylene and salt stress. Physiological Alterations by Ethylene and Salt Stress In order to examine the physiological changes in the H2O-, SALT-, and ACC-treated samples (Figure 5A), the nitrogen content, SPAD value, relative permeability of cell membrane, damage rate of leaves, MDA content, soluble sugar level, and SOD activity were detected in these samples. The results indicated that salt treatment rendered higher relative permeability of cell membranes, damage rate of leaves, MDA, and soluble sugar levels, while ethylene treatment reduced them (Figure 5B-E). The SOD activity was activated by salt treatment, which was enhanced by ethylene (Figure 5F). However, the relative content of total chlorophyll denoted by the SPAD value, and the N content were reduced due to salt treatment (Figure 5G). 
The effects of salt on the SPAD value and N content were alleviated by ethylene treatment, although their contents in the ACC sample were still lower than in the untreated sample (Figure 5G). Taken together, it was concluded that ethylene may regulate salt responses in different ways in quinoa. [Figure 3 legend, in part: panels A, C, and E correspond to the comparisons between SALT and H2O samples (A), between ACC and H2O samples (C), and between ACC and SALT samples (E), respectively; all_tran represents all the genes obtained from the transcriptome, diff_tran represents the DEGs identified by the transcriptome, all_prot represents all the proteins identified by the proteome, and diff_prot represents the DEPs identified by the proteome. (B,D,F) Scatter plots of expression correlation between SALT and H2O samples (B), between ACC and H2O samples (D), and between ACC and SALT samples (F), respectively; the abscissa is the differential multiple of proteins, the ordinate is the differential multiple of corresponding genes, and the correlation coefficient and p value of the transcriptome and proteome are also shown; the blue points represent non-differential proteins/genes, and the green points represent DEPs/DEGs.] Discussion Quinoa, an ancient crop native to South America, has high nutritional value and health-promoting phytochemicals in seeds and has received increasing world-wide attention in the past decade [8,9,36]. Quinoa is resistant to multiple abiotic stresses including drought, cold, and salinity [9,10]. Salt stress is a major abiotic stress and affects ~6.5% of the total land of the world [9]. The effects of salt stress on plants are mainly divided into two components, the nonspecific osmotic stress that causes water deficit, and the specific ion effects that provoke the accumulation of toxic ions. 
Quinoa plants show significant salt tolerance, but research on quinoa responses to salt stress is still limited. High-throughput transcriptomic analysis provides a way to excavate the molecular mechanisms of quinoa salt responses at the genomic scale [10,21,30,31]. Considering the importance of differentially expressed proteins in most biological processes, examination of changes at the protein level is more valuable and practical. Unfortunately, little is known about the molecular regulation of quinoa at the proteomic scale. In this research, 4-week-old quinoa seedlings treated with water, NaCl, and NaCl with ACC were analyzed by transcriptional sequencing and proteomics. The identified DEGs and DEPs were analyzed by GO and KEGG, and their correlation analysis was conducted. The study provides a strong foundation for further research on the molecular regulation of quinoa responses to ethylene and salt stress. Plant Hormones Play Regulatory Roles in Quinoa Responses to Ethylene and Salt Stress Plant hormones play important roles in various stress responses. For instance, ABA is thought to be essential for plant responses to abiotic stresses in many plant species including wheat, rice, and Magnolia wufengensis [19,25,37]. For example, it was reported that the rice Osnced5 mutant reduced the ABA level and decreased salt tolerance, while OsNCED5 overexpression increased the ABA level and enhanced salt tolerance, indicating the importance of ABA to the salt tolerance of rice [37]. In quinoa, the gene encoding 9-cis-epoxycarotenoid dioxygenase (NCED) in ABA biosynthesis was strongly induced after salt treatment [30,38]. Several PP2Cs in ABA signaling were detected as highly upregulated in quinoa under salt stress by transcriptional sequencing [21]. In the present study, only one abscisic acid receptor, PYL4 (LOC110710755), was detected as functioning in ethylene-regulated salt responses, one abscisic acid receptor PYL4 (LOC110714604) as playing roles in non-ethylene-regulated salt responses, and one abscisic acid receptor PYL4 (LOC110715607) as playing roles in ethylene responses but not in salt tolerance of quinoa ( Table 2, Supplementary Materials Figure S5, Excels S2-S4, S7 and S8), suggesting that crosstalk between ABA and ethylene may exist in the quinoa stress responses. Other hormones were also reported to be involved in quinoa stress responses. TIFY 10A, a JA response repressor, was strongly induced in responses to salt, while the GA 3-beta-dioxygenase (GA3OX4) in gibberellin biosynthesis was strongly inhibited under salt treatment [21]. In this study, three jasmonate-induced proteins (JIPs; LOC110715081, LOC110711071, and LOC110733576) were detected. In addition, it was reported that one ethylene receptor, ETR1, and one ethylene responsive factor (ERF) were strongly induced by salt treatment in quinoa [21,30]. In the present study, three ERFs (LOC110729845, LOC110730331, and LOC110719638) were detected in ethylene-regulated salt responses of quinoa (Table 2, Supplementary Materials Figure S5, Excels S2-S4). These findings broaden our understanding of the phytohormone-mediated regulations in the quinoa stress responses. In the present study, we also found other novel genes/proteins responding to salt and ethylene in quinoa. 
For instance, one auxin response factor (ARF; LOC110719716), one auxin binding protein (ABP; LOC110715799), and one cytokinin riboside 5'-monophosphate phosphoribohydrolase (LOG; LOC110738584) were detected ( Table 2, Supplementary Materials Figure S5, Excels S2-S4). ARF and ABP function in the auxin signaling pathway, and LOG functions in the release of a ribose 5'-monophosphate from a cytokinin nucleotide to form a biologically active cytokinin [39]. Our results indicated that auxin and cytokinin may play roles in ethylene-regulated salt responses of quinoa, and these genes/proteins may be important for the crosstalk of plant hormones. The auxin efflux carrier (LOC110691454 and LOC110736434), auxin transporter (LOC110706251), ARF (LOC110736906, LOC110715765, and LOC110714183), and ABP (LOC110691560) were detected in this study, and they may be involved in non-ethylene-regulated salt responses (Supplementary Material Excel S7). In contrast, the detected auxin response factor (LOC110714183), auxin efflux carrier (LOC110691454 and LOC110736434), auxin transporter (LOC110706251), and ARF (LOC110736906 and LOC110715765) may be involved in ethylene responses but not in salt tolerance of quinoa (Supplementary Material Excel S8). Taken together, these results suggest that the plant hormone auxin may play diverse roles in quinoa. ROS Scavenging Enzymes Function in Quinoa Responses to Ethylene and Salt Stress Salt stress causes ROS accumulation and oxidative stress aggravation [11]. ROS damage nucleic acids, oxidize proteins, and cause lipid peroxidation, while the antioxidant enzymes including GST, SOD, POD, and CAT neutralize the salt-induced ROS accumulation to protect plants from destructive oxidative reactions [12,17,18]. In detail, SOD dismutates O 2 − into H 2 O 2 , which is decomposed into water and oxygen by CAT in the peroxisomes. POD mainly catalyzes substrate oxidation with H 2 O 2 as an electron acceptor in vacuoles and cell walls in plants [40][41][42]. In plants, GSTs are multifunctional enzymes existing in different classes (Phi, Tau, Zeta, Theta, and others) and play important roles in cellular detoxification of xenobiotics, protection against oxidative stress, as well as diverse ligand-binding activities [43]. In this research, 9 GSTs (LOC110724460, LOC110696392, LOC110724461, LOC110728060, LOC110711174, LOC110711174, LOC110739278, LOC110713696, and LOC110727188) and 16 PODs (LOC110682117, LOC110682546, LOC110685850, LOC110692926, LOC110699378, LOC110724764, LOC110735668, LOC110694635, LOC110735670, LOC110681844, LOC110687369, LOC110690635, LOC110727528, LOC110699380, LOC110684661, and LOC110704239) were detected in the quinoa responses to ethylene and salt stress ( Table 2, Supplementary Materials Figure S5, Excels S2-S4). The ROS scavenging enzyme POD5 had been reported to function in salt responses of quinoa by RNA-seq [21]. In this study, two POD5 including LOC110692926 and LOC110727528 were detected in ethylene-regulated salt responses of quinoa. On the other hand, 16 PODs including LOC110683143, LOC110729735, and LOC110699379 may play roles in non-ethylene-regulated salt responses (Supplementary Material Excel S7), and 23 PODs including LOC110685846, LOC110704240, LOC110707569, LOC110711884, and LOC110704238 are probably involved in ethylene responses but not in salt responses in quinoa (Supplementary Material Excel S8). 
The PODs were also detected as core salt-responsive genes in both salt-tolerant quinoa and salt-sensitive quinoa by a previous RNA-seq study [21]. In this study, the SOD activity was also measured and was activated by salt treatment in quinoa ( Figure 5F). In addition, it was shown that ethylene enhances the SOD activity in salt responses of quinoa ( Figure 5F), providing evidence that ethylene may mediate ROS to regulate salt tolerance of quinoa. Osmotic Adjustment Is Important for Quinoa Responses to Ethylene and Salt Stress High concentrations of NaCl in salt stress generate K + and H + fluxes in quinoa roots to the apoplast, so leaf osmoregulation, K + retention, Na + exclusion, and ion homeostasis confer quinoa salt tolerance [44]. In addition to these inorganic ions, the accumulation of organic substances including protein, sugars, proline, and total phenolics also contributes to improved quinoa salt tolerance [15,16]. The high affinity K + transporters (HKT1.2) play a key role in Na + load into bladder cells in quinoa, and the Na + in bladder cells is then collected in the bladder hairs and washed off by rain [45]. Quinoa plants were reported to accumulate more Na + than K + under salinity stress, because the K + /Na + ratio was found to decrease with increasing salt concentration [44]. The cell anion channel (SLAH), nitrate transporter (NRT), and chloride channel protein (C1C) were also activated by salt stress, indicating their possible functions in salt responses of quinoa [10]. Cell Wall Structural Proteins Respond to Ethylene and Salt Stress in Quinoa The levels of the principal structural components of the plant cell wall, such as lignins, pectins, celluloses, and hemicelluloses, are affected by salt stress, which induces alterations in cell wall elasticity [48,49]. Previously, it was reported that transcriptional changes of the genes involved in cell wall organization could be detected by RNA-seq after salt treatment of quinoa seedlings [30]. The genes involved in suberin and cutin biosynthesis, photosynthesis, and chloroplast were also reported to be significantly changed due to salt treatment in the bladder cells of quinoa [10]. TBLs encode the cell wall polysaccharide specific O-acetyltransferases and are probably involved in maintaining esterification of pectins [50]. In Arabidopsis, a functional study of cellulose synthesis in salt tolerance had been previously reported [51]. In this research, 2 TBLs (LOC110715157 and LOC110685228) were detected as differentially expressed ( Table 2, Supplementary Materials Figure S5, Excels S2-S4), and 5 CESs (LOC110715976, LOC110717430, LOC110689768, LOC110689717, and LOC110721870) in cellulose synthesis were detected. In addition, two BGLUs (LOC110739769 and LOC110724275), two BGALs (LOC110682558 and LOC110685863), and four glycine-rich cell wall structural proteins (GRPs) (LOC110732550, LOC110730178, LOC110730179, and LOC110732549), which may be involved in cell wall structure and elasticity in quinoa, were detected (Table 2, Supplementary Material Figure S5, Excels S2-S4). All these findings strongly support the importance of cell wall structure and elasticity in the quinoa stress responses. Secondary Metabolism-Associated Proteins Respond to Ethylene and Salt Stress in Quinoa Betalain is a tyrosine-derived, red-violet, and yellow pigment in quinoa with antioxidant activity, which plays important roles in salt responses [52]. 
For example, CqCYP76AD1-1 was reported to be involved in the betalain biosynthesis process in quinoa [53,54]. In this research, one CqCYP76AD1 (LOC110731693) was detected in the ethylene-regulated salt responses, although its molecular mechanism in the responses is unclear. The methyltransferases (MTs), GTs, and GPATs are transferases that transfer methyl, glucosyl, and acyl groups from one compound to another, respectively. The CHSs condense a phenylpropanoid CoA ester with three acetate units from malonyl-CoA molecules and cyclize the resulting intermediate to produce a chalcone, which is the precursor of diverse flavonoids [55]. The GELPs have high potential to be used in the hydrolysis and synthesis of important ester compounds [56]. It was reported that ectopic expression of Arabidopsis glycosyltransferase UGT85A5 enhances salt tolerance in tobacco, but knock down of the corresponding genes decreases salt tolerance at seedling and reproductive stages of rice [57,58]. Early Response Genes and Late Response Genes in Quinoa It was reported that many genes are divided into two categories, namely early response genes and late response genes, depending on their different activation patterns in response to stimuli. The early response genes, which are also called primary response genes, are induced without de novo protein synthesis, while the late response genes, which are also called secondary response genes, require de novo protein synthesis and are induced more slowly because the required de novo synthesis depends on signaling molecules or cytokines [59,60]. In this research, the correlation between proteome and transcriptome was analyzed, and the results are shown in Figure 3, Table 2, and Supplementary Materials Figure S5, Excels S2, S4 and S5. For example, the genes/proteins (GST (LOC110739278) and GLU1 (LOC110717177)) differentially expressed at both the transcript and protein levels belong to early response genes, and their proteins had already been synthesized and could be detected at early times. The genes/proteins that were differentially expressed at the transcript level but not at the protein level belonged to late response genes. Their protein levels did not accumulate within 24 h of treatment. It was suggested that their protein levels might change at later time points. In this research, the genes/proteins including JIPs (LOC110715081, LOC110711071, and LOC110733576), GSTs (LOC110724460, LOC110696392, LOC110724461, LOC110728060, and LOC110711174), and PODs (LOC110682546, LOC110685850, LOC110692926, LOC110699378, LOC110724764, LOC110735668, LOC110694635, LOC110735670, LOC110681844, LOC110687369, LOC110690635, LOC110727528, LOC110699380, and LOC110684661) were differentially expressed at the transcript level but not at the protein level, suggesting that these genes may belong to the late response genes. Conclusions The proposed molecular mechanism of ethylene-regulated salt responses in quinoa is complex. Under salt stress, ROS scavenging enzymes including GSTs and PODs; transporters and solutes in osmotic adjustment including HKT, PT, Na + /metabolite cotransporter, high-affinity Na + transporters, cation/H + antiporter, Na + /Ca 2+ exchanger, aquaporin, bidirectional sugar transporters, polyol transporter, and sucrose synthases; cell wall structural proteins including GLCs, β-GALs, CESs, TBLs, and GRPs; and secondary metabolism-associated proteins including GTs, GPATs, CHSs, GELPs, CYPs, and MTs are activated in responses to ethylene and salt stress in quinoa. 
Plant hormones, including AUX, ABA, JA, and CK, also play important roles in the responses. Considering the large number of transporters in osmotic adjustment identified in the ethylene-regulated salt responses in quinoa, it is concluded that osmotic adjustment is probably one of the main regulations for quinoa when challenged by salt stress. Supplementary Materials: The following are available online at https://www.mdpi.com/article/ 10.3390/plants10112281/s1. Figure S1: The PCA analysis in transcriptomic (A) and proteomic analysis (B), Figure S2: The heat map with hierarchical clustering of DEGs in comparisons between SALTr and H 2 Or, Figure S3: Supplementary Material S6: The heat map with hierarchical clustering of DEGs in comparisons between SALTr and ACCr, Figure S4: The heat map with hierarchical clustering of DEGs in comparisons between ACCr and H 2 Or, Figure S5: The heat map of candidate proteins/genes in ethylene and salt responses of quinoa, Figure S6: Supplementary Material S11: The heat map with hierarchical clustering of DEPs in comparisons between SALTp and H 2 Op., Figure S7: The heat map with hierarchical clustering of DEPs in comparisons between SALTp and ACCp, Figure S8: The heat map with hierarchical clustering of DEPs in comparisons between ACCp and H 2 Op, Table S1: The sequence templates of randomly selected DEGs in qRT-PCR confirmation, Table S2: Oligonucleotide primers used in qRT-PCR confirmation, Excel S1: The summary of DEGs in single comparisons, Excel S2: The DEGs annotation in the ethylene and salt responses of quinoa, Excel S3: The summary of DEPs in single comparisons, Excel S4: The DEPs annotation in ethylene and salt responses of quinoa, Excel S5: The genes/proteins annotation in correlation analysis, Excel S6: The expression of the reference gene CqACTIN under the different treatments in this research, Excel S7: The genes/proteins playing roles in non-ethylene-regulated salt responses, Excel S8: The genes/proteins playing roles in ethylene responses but not related with salt tolerance in quinoa.
2021-10-28T15:17:34.268Z
2021-10-25T00:00:00.000
{ "year": 2021, "sha1": "5fd98d684a7794245f781b6b0687ca5cb699f47d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2223-7747/10/11/2281/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "35883c47aa7d2f7f13d99b48e31a37b68d775c44", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
18809851
pes2o/s2orc
v3-fos-license
Multiple nodules on the left cheek represented pseudolymphomatous folliculitis Key Clinical Message Pseudolymphomatous folliculitis (PLF) is a rare lesion. Sometimes, the clinical appearance is characterized by multiple large, firm violaceous nodules. In cases with multiple lesions, such a biopsy should be performed on one lesion, and once PLF is determined, monitoring of the remaining tumors is considered to be the best treatment. Introduction Pseudolymphomatous folliculitis (PLF) is a rare lesion of which few reported cases exist. The clinical appearance is characterized by solitary or multiple large, firm violaceous nodules that occur mainly on the face, scalp, and upper trunk [1]. Typically, rapid growth is seen over a period of several weeks or months [1], making distinguishing PLF from malignant lymphoma clinically difficult. We treated a case of subcutaneous PLF on the left cheek and herein present the characteristic pathological findings. Case Report A 51-year-old male visited our hospital with multiple rapidly enlarging masses in the left cheek that had been developing for 1 month. He had a history of cerebral infarction, and clopidogrel sulfate had been administered to prevent another infarction. He also had diabetes, for which he was under medical treatment. Physical examination revealed three firm, red, dome-shaped subcutaneous nodules on the left cheek, two of which were located between the ear and the jaw, with the remaining one near the nose. The three nodules measured 7 × 10, 6 × 5, and 5 × 5 mm, respectively. None of the masses was adherent to the connective tissue (Fig. 1A). Based on our suspicion of inflamed epidermal cysts, surgical excision of the two lesions between the ear and the jaw was performed under local anesthesia 1 month after the initial examination. Taking the influence of inflammation into consideration, those two tumors were removed along with the fat in the immediate area surrounding the tumors. We did not remove all three tumors at the same time because of the risk of hemorrhage due to the anticoagulant medication. After confirming that postsurgical bleeding was controlled, the remaining tumor, located near the nose, was removed 12 days after the first operation, under local anesthesia (Fig. 1B). Pathological findings of the tumors excised at the first operation showed a nodular dense lymphoid cell infiltration from the dermis to the subcutaneous tissue. The hair follicles had been enlarged and destroyed by infiltration of the dense, diffuse, and small well-differentiated lymphocytes ( Fig. 2A-C). Furthermore, immunohistochemical examination revealed that a large number of lymphocytic cells were positive for CD3, S-100, and CD1a (Fig. 2D, F and G), while few were positive for CD20 and few were positive for PD-1 (Fig. 2E and H) [2]. Based on these findings, we made a diagnosis of PLF. However, in a pathological examination of the excised whole tumor at the second operation, mild lymphocyte infiltration around the hair follicles was found, which was not considered to be a PLF-specific finding (Fig. 3). Thus, from a pathological point of view, the lesion was considered to be nonspecific inflammation. Taking all findings into account, the patient was diagnosed as having mild or regressed PLF. Six months after the second operation, healing progressed and no recurrence was noted (Fig. 4). 
Discussion PLF is characterized by a dense lymphoid infiltration accompanied by hyperplastic hair follicles, and was first described as a distinct variant of pseudolymphoma in 1986 [3]. Thereafter, only approximately 10 reports on PLF, including 76 cases, have been published; thus, it is considered to be a rare entity. According to those past reports, typical PLF is presented as a solitary, erythematous or violaceous, dome-shaped or flat elevated nodule on the face, especially on the nose, cheek, eyelid, or forehead [4,5]. In many cases, the eruption is isolated [5], whereas our patient showed a relatively rare pattern of eruption, as the PLF manifested as multiple lesions. PLF occurs in a wide age range and both sexes [5]; however, its tendency is yet to be elucidated because of the low number of reports. To the best of our knowledge, no cases of spontaneous regression without any treatment (operation, biopsy, medication, etc.) have been reported. A variety of differential diagnoses, including epidermal cyst, chronic folliculitis, granuloma, basal cell carcinoma, sarcoidosis, and insect bite reaction must be considered [1,5], with malignant lymphoma being one of the most important. Thus, careful histological and immunohistochemical examinations are needed for accurate diagnosis of PLF [1]. Arai et al. [4] proposed histological criteria for diagnosis, including nodular dense lymphocytic infiltration from the dermis to subcutis, befitting the term pseudolymphoma, in which lymphocytes surround and infiltrate the pilosebaceous unit, and deform the walls. Kazakov et al. [5] presented 42 cases of PLF in which hair follicles were enlarged and often distorted, with a dense diffuse lymphoid infiltration composed mainly of small well-differentiated lymphocytes, occupying the whole dermis, though sparing of the epidermis was also observed. Immunohistologically, PLF can be divided into four groups: predominance of B cells, predominance of B cells with fairly numerous T cells, predominance of T cells with fairly numerous B cells, and predominance of T cells [4]. Recently, Goyal et al. [2] reported that PD-1+ T cells and CD1a are useful for differentiation of PLF. In our case, distorted hair follicles and diffuse lymphocyte infiltration with no dysplasia were found ( Fig. 2A-C), as well as sheets of CD3+ T lymphocytes and scattered CD20+ B lymphocytes (Fig. 2D and E). Furthermore, a large number of lymphocytic cells were positive for S-100 and CD1a (Fig. 2F and G), whereas few lymphocytic cells were positive for PD-1 (Fig. 2H). Based on these findings, we made a diagnosis of PLF. In contrast to lymphomas, lymphocytes in PLF show no bias with regard to their κ-/λ-chain-positive B-cell ratio or CD4+/8+ T-cell ratio. In addition, the results of polymerase chain reaction assays for clonal T-cell receptor and immunoglobulin heavy chain gene rearrangements typically reveal negative findings in PLF cases [6]. Since we already had a diagnosis, we did not perform additional immunohistological examination. On the other hand, in the specimen obtained in the second operation, we found only mild lymphocyte infiltration around the hair follicles and common findings of inflammation; therefore, a diagnosis of PLF could not be made from that evidence alone. The first choice of treatment for PLF is excisional biopsy, with the use of corticosteroid injections recommended if the excision is not complete [1]. 
In cases with multiple lesions, such a biopsy should be performed on one lesion, and once PLF is determined, monitoring of any remaining tumors is considered to be the best treatment. There are a few reports of PLF nodules regressing spontaneously within 6 months after excisional biopsy [7,8]. In cases of PLF diagnosed by partial biopsy, it is recommended to follow-up for half a year, and wait for spontaneous improvement. Conclusion Careful examination of patients with PLF is required because distinguishing PLF from malignant lymphoma is difficult. In such cases, an excisional biopsy is the most useful method for making an accurate diagnosis. In consideration of the difficulty of diagnosis, an excisional biopsy is valuable; however, once PLF is diagnosed in one of the multiple tumors by excisional biopsy, such a biopsy is thought to be unnecessary for the remaining tumors. Conflicts of Interest There are no conflicts of interest to report.
2018-04-03T02:53:29.802Z
2016-05-04T00:00:00.000
{ "year": 2016, "sha1": "0f745b0441223eb878b8e185d768fdaafd562303", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1002/ccr3.571", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0f745b0441223eb878b8e185d768fdaafd562303", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2354268
pes2o/s2orc
v3-fos-license
The predictive value of methylene blue dye as a single technique in breast cancer sentinel node biopsy: a study from Dharmais Cancer Hospital Background Axillary lymph node dissection (ALND) has been the standard treatment of breast cancer axillary staging in Indonesia. The limited facilities of radioisotope tracer and isosulfan or patent blue dye (PBD) have been the major obstacles to perform sentinel node biopsy (SNB) in our country. We studied the application of 1% methylene blue dye (MBD) alone for SNB to overcome the problem. Methods This prospective study enrolled 108 patients with suspicious malignant lesions or breast cancer stages I–III. SNB was performed using 2–5 cc of 1% MBD and proceeded with ALND. The histopathology results of sentinel nodes (SNs) were compared with axillary lymph nodes (ALNs) for diagnostic value assessments. Results There were 96 patients with invasive carcinoma from July 2012 to September 2014 who were included in the final analysis. The median age was 50 (25–69) years, and the median pathological tumor size was 3 cm (1–10). Identification rate of SNs was 91.7%, and the median number of the identified SNs was 2 (1–8). Sentinel node metastasis was found in 53.4% cases and 89.4% of them were macrometastases. The negative predictive value (NPV) of SNs to predict axillary metastasis was 90% (95% CI, 81–99%). There were no anaphylactic reactions, but we found 2 cases with skin necrosis. Conclusions The application of 1% MBD as a single technique in breast cancer SNB has favorable identification rates and predictive values. It can be used for axillary staging, but nevertheless the technique should be applied with attention to the tumor size and grade to avoid false negative results. Background Breast cancer is the most common malignancy, accounting for 31.2% of all cancers and 26.5% as the cause of cancer death among women in our hospital [1]. In Asia-Pacific region, 12% of breast cancer incidence rates and 17% of its death occur in Indonesia [2]. The information of axillary lymph node (ALN) metastasis is one of the most important prognostic factors in breast cancer treatments. It is conventionally determined by axillary lymph node dissection (ALND) [3,4]. This procedure gives morbidities such as lymphedema, loss of sensory, limited mobility, and seroma formation which will decrease the quality of life [4][5][6]. Nowadays, breast cancer treatments have moved towards conservation therapies, and sentinel node biopsy (SNB) has been introduced as a part of the minimal invasive breast surgery [3,7]. Unfortunately, ALND is still the standard procedure for axillary staging in Indonesia. The limitation to provide sophisticated technologies for SNB has been our mainstay issue. The work by Morton et al. [8] in cutaneous melanoma was the turning point of the acceptance of the sentinel node (SN) concept. It was soon adopted to breast cancer patients by using isosulfan blue dye or radioisotope tracer alone to find SNs. Initially, the reported identification rates of SNs ranged between 65 and 98% and false negative rates between 0 and 5% [9][10][11][12]. In developed countries, the optimal technology for SNB uses isosulfan or patent blue dye (PBD), preoperative lymphoscintigraphy, and radioisotope tracer, which are used as a single or combination technique [9][10][11][12][13]. As an alternative to these devices, several studies have been conducted to validate 1% methylene blue dye (MBD) for SNB. 
Simmons [14] was the first surgeon who reported the successful application of 1% MBD in breast cancer SNB. Other studies also supported its use because of the favorable results in identification and false negative rates, fewer allergic complications, and lower cost [15][16][17][18][19][20][21][22]. Limited access to PBD and radioisotope tracer is the main obstacle to performing SNB in Indonesia. Given the geographic distribution of our population, the availability and cost of providing nuclear medicine facilities or gamma probes in every hospital have added to the difficulty of administering SNB. Recently, we have started to use 1% MBD alone, and the initial results from 24 patients were favorable, with an identification rate of 95.8% [23]. As we have moved towards better breast cancer care, it is important for us to conduct a study to overcome the limitations to performing SNB. The primary objective of the study is to evaluate the identification rates and negative predictive value (NPV) of SNs to predict axillary metastasis by using 1% MBD alone. Participants In this study, 108 consecutive patients with a diagnosis of breast cancer or a suspicious malignant lesion were enrolled prospectively at Dharmais Cancer Hospital, Bogor City General Hospital, and Mochtar Riady Comprehensive Cancer Center (MRCCC) Siloam Hospital between July 2012 and September 2014. There were five surgeons participating in the research. SJH, RK, and WG had more than 5 years of experience, while BB and BA had more than 3 years of experience in breast cancer surgery including ALND. All surgeons had performed fewer than 10 cases using the MBD-alone technique prior to the study. BB, RK, and SJH were also the surgeons who were working and undertaking SNB in the other participating hospitals besides Dharmais Cancer Hospital. We included patients with any tumor size (T) without palpable ALNs (cN0) who had undergone core needle or fine needle aspiration (FNA) biopsy. Patients whose final pathological results were not invasive breast cancer and those who were pregnant were excluded from the study. The Institutional Review Board at Dharmais Cancer Hospital approved the study, and all patients were provided informed consent. Sentinel node biopsy and axillary lymph node dissection SNB was performed using 1% MBD. It was injected in a subareolar or peritumoral area at a dose of 2 to 5 cc. We did a peritumoral injection in all cases with previous excisional biopsy at the upper outer quadrant of the breast or according to the surgeon's preferences. A breast massage was done for 5 min after the injection. In a standard breast conserving or oncoplastic breast conserving surgery (BCS), a separate incision in the lower axillary hairline was made to find SNs before lumpectomy or quadrantectomy. When the patients underwent mastectomy, SNB was undertaken through the same mastectomy incision before removing the breast. Sentinel nodes were defined as blue nodes or lymph nodes with a lymphatic blue channel. All procedures proceeded to ALND levels I-II. Axillary lymph node dissection level III was done when there were suspicious lymph node metastases at level II. When frozen section was available, it was used to assess SN metastasis intraoperatively. Histopathological results of all ALNs were collected after the surgery. 
The sentinel nodes were sectioned no thicker than 2 mm and parallel to the long axis. An intraoperative analysis was categorized to positive or negative for metastases. The rest of SNs were formalin fixed and paraffin sectioned with hematoxyline-eosin staining. The tumors were histologically classified according to the World Health Organization (WHO) Histological Classification of Breast Tumors, and grading was defined according to Elston and Ellis modification [24]. All specimens were reviewed in Dharmais Cancer Hospital by two pathologists (RIP and LS). Only 2 patients who underwent surgery at MRCCC hospital were not reviewed due to the patients' preference and thus analyzed by using the original histopathology report from the local pathologist. Molecular subtypes for invasive cancer were classified as luminal A (ER+ and/or PR+, HER2−, and histological grade either 1 or 2), luminal B (ER+ and/or PR+, HER2+; or ER+ and/or PR+, HER2− and grade 3), triple negative (ER−, PR−, HER2−), and HER2+ (ER−, PR−, HER2+) [25]. The nodal involvement was classified according to the 6th edition of the American Joint Committee on Cancer (AJCC) manual. Macrometastasis (MAC) is defined as tumor deposits larger than 2 mm, micrometastasis (MIC) if tumor deposits between 0.2 and 2.0 mm, and isolated tumor cells (ITCs) if there are cell clusters or a single cell no larger than 0.2 mm. Serial sections and immunohistochemistry (IHC) for cytokeratin were performed when there was some doubt to define ITCs. The rest of ALNs were also examined in a similar manner. The histopathology of SNs was compared to the final examination of ALNs for the presence of metastases [24,26]. Statistical analysis Descriptive data were presented in the table of frequency. Sensitivity (Se), specificity (Sp), positive predictive value (PPV), and negative predictive value (NPV) were calculated using CATmaker. Diagnostic values were reported with 95% confidence of interval (CI). We used SPSS version 16.0 to manage the data. Patient characteristics We prospectively enrolled 108 patients from July 2012 to September 2014. Twelve patients with FNA biopsy result of suspicious breast cancer were excluded because the frozen section and final pathological results were not invasive carcinoma. There were 87 (90.6%) patients from Dharmais Cancer Hospital and 9 (9.4%) from the other hospitals. Of the 96 patients who were included in the final analysis, the median age was 50 years (range 25-69 years). There were 9 (9.4%) patients in stage I, 64 (66.7%) in stage II, and 23 (23.9%) in stage III. The median pathological tumor size was 3 (1-10) cm. Invasive carcinoma of no special type (NST) was the most common result which accounted for 71 (74%) patients and invasive lobular carcinoma (ILC) in 11 (11.5%) patients. We classified breast cancer molecular profile based on IHC examination. Thirty-eight (39.6%) patients were classified as luminal A breast cancer, 24 (25%) as luminal B, 10 (10.4%) as HER2+ type, and 24 (25%) as triple negative (TNBC). Mastectomy was the most common surgical procedure which was done in 60 (62.5%), meanwhile BCS in 36 (37.5%) patients. Table 1 summarizes the characteristic of patients. Sentinel node biopsy and pathological examination We could identify SNs in 88 patients. Therefore, the SNs identification rate was 91.7%. Peritumoral injections were done in 29 (30.2%) and subareolar in 67 (69.8%) cases. The median number of SNs that could be identified was 2 (1-8) and the median of ALNs was 11 . 
In this group where SNs were identified, the number of patients whose SNs were free of metastases was 41. Four of these patients were found to have metastases in non-sentinel nodes (NSNs), so the total number of patients without lymph node metastases was 37 (42%). There were 47 (53.4%) cases with SN metastases, and 42 (89.4%) of them had MAC. Metastases were confined to 1-2 SNs in 43 (91.5%) cases, whereas in 4 (8.5%) cases metastases were identified in more than 2 SNs. We discovered 25 (53.2%) cases with additional metastatic deposits in NSNs. Therefore, in 22 (46.8%) patients, the metastases only occurred in SNs. Table 2 describes the cases with positive SNs. The SNs detected metastases in 47 of 51 cases, resulting in a Se of 92% (95% CI, 85-100%), and there were 4 NSN metastases in the SN negative group, which resulted in a NPV of 90% (95% CI, 81-99%). All 4 cases in which the SNs failed to predict ALN metastases had a median pathological tumor size of 4 cm; 2 patients were in stage IIB and the others in stage IIIA. Three (75%) patients had grade 3 invasive carcinoma. Figure 1 and Table 3 show the recruitment of patients and results of diagnostic value. Table 4 describes the false negative patients. Unidentified sentinel nodes The SNs could not be found in 8 patients. The median age of the patients was 54 years (range 36-67 years), with a median tumor size of 2.8 (1.5-5.0) cm. There were 2 (25%) grade 1, 3 (37.5%) grade 2, and 3 (37.5%) grade 3 invasive carcinomas. Two (25%) patients had lymph node metastases, and the rest were negative. Table 5 describes the characteristics of the unidentified SN group. Complications Two patients experienced skin necrosis around the injection site after 5 cc of peritumoral injection. Both were mastectomy cases, one of whom had a breast reconstruction. These patients successfully underwent conservative wound treatment. We found no systemic anaphylactic reactions among all patients. 
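As a cross-check, the sensitivity and NPV reported above follow directly from the 2 x 2 table of SN status versus final axillary status. The short C++ sketch below recomputes them together with approximate Wald 95% confidence intervals; the counts (47 true positives, 4 false negatives, 37 true negatives) are taken from the results above, while the Wald interval is an assumption of this sketch — the original analysis used CATmaker, whose exact CI method is not stated.

```cpp
// Recompute Se and NPV with Wald 95% CIs from the 2x2 counts reported above
// (SN status vs. final axillary status). The Wald interval is an assumption;
// CATmaker's exact method is not stated in the text.
#include <cmath>
#include <cstdio>

struct Prop { double p, lo, hi; };

// Wald interval for a binomial proportion k/n.
Prop wald(int k, int n) {
    double p = static_cast<double>(k) / n;
    double half = 1.96 * std::sqrt(p * (1.0 - p) / n);
    return {p, p - half, p + half};
}

int main() {
    // From the results: 47 SN-positive cases were node-positive (TP),
    // 4 SN-negative cases had non-sentinel-node metastases (FN),
    // 37 SN-negative cases were node-negative on full ALND (TN).
    const int tp = 47, fn = 4, tn = 37;

    Prop se  = wald(tp, tp + fn);   // sensitivity: 47/51, about 92%
    Prop npv = wald(tn, tn + fn);   // NPV: 37/41, about 90%

    std::printf("Sensitivity %.0f%% (95%% CI %.0f-%.0f%%)\n",
                100 * se.p, 100 * se.lo, 100 * se.hi);
    std::printf("NPV         %.0f%% (95%% CI %.0f-%.0f%%)\n",
                100 * npv.p, 100 * npv.lo, 100 * npv.hi);
    return 0;
}
```

Running this reproduces the figures quoted above (Se ~92%, 95% CI 85-100%; NPV ~90%, 95% CI 81-99%), which suggests the reported intervals are simple binomial intervals on the observed proportions.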
The median SN number in our study, 2 nodes, was in line with studies suggesting that 2 to 3 SNs should be retrieved to minimize the false negative rate [33][34][35][36][37]. In the identified SN group, 42% of the cases were lymph node negative for metastases. This means that many cases did not need to receive ALND, and many patients could have been spared the risk of lymphedema and other morbidities. We believe that if our surgeons can apply this SNB technique instead of routine ALND, we will improve quality of life after surgery and reduce both the cost of breast cancer treatment in Indonesia and its associated surgical morbidities. The next important findings from our study were that 53% of metastatic foci were found in SNs and that nearly half (47%) of them were confined to SNs only. The early publications of SNB in breast cancer also reported that approximately 50% of patients with SN metastases did not have positive NSNs [38,39]. In this regard, a nomogram to predict NSN metastases [40][41][42][43] would be a valuable tool for us. The Z0011, IBCSG 23-01, and AMAROS studies have given new perspectives on omitting ALND after positive SNs [44][45][46]. According to these studies, patients with a small-sized tumor, plans for BCS, and whole breast radiation are the suitable indications. These selection criteria did not match the majority of our patient characteristics because, as shown in this study, we had a bigger median tumor size, 24% of cases in stage III, 89% MAC in SNs, and mastectomy was more common than BCS. As 91% of our patients had metastases in only 1-2 SNs, the POSNOC trial is expected to give us better evidence for omitting ALND after positive SNs [47], particularly in mastectomy, which represents the majority of our cases. The reported NPV in this study was 90%, and a randomized study from Canavese et al. had nearly the same result (91.1%) [48]. We realized that our NPV was lower than in other studies (92.3 and 96.1%) [49,50]. This might have been due to the 4 false negative cases, which had a bigger median tumor size (4 cm) and higher tumor grade (75% grade 3). Thus, tumors larger than 3 cm and high grade tumors may carry higher risks of high-volume nodal metastases, with blockage of the lymphatic drainage to the true SNs and diversion to false SNs [48,51]. However, when we analyzed the data separately by excluding stage III patients (data not shown), the NPV was 95% (95% CI, 80-100%). Therefore, surgeons must be cautious when performing SNB with MBD alone in patients with grade 3 tumors larger than 3 cm. Under these circumstances, looking for additional non-blue suspicious lymph nodes is suggested to minimize false negative results. In this study, SNs could not be found in 8 patients. Several factors are related to failure to find SNs. Age, body mass index (BMI), tumor size, location, grade, type of previous biopsy, SNB technique, and surgeon's experience have been reported in the literature as factors that influence SN identification [52][53][54]. The median age of the unidentified SN group was 54 years, and this older age could have been one of the factors that accounted for the unsuccessful identification. The increased fatty tissue in the breast among older patients may decrease lymphatic flow and lead to failures to identify SNs [52,53]. 
The surgeon's experience is another important factor for localizing SNs, especially if blue dye alone is used as the method of choice. Several reports have shown that SN identification is reduced with less experienced surgeons and with the blue dye alone technique [53,54]. Our failure to find SNs might have been explained by these factors as well, because in this study the application of MBD alone was a relatively new technique for us and we did not have much experience with this technique prior to the study. Higher tumor grade has been known as a negative factor for SN identification in univariate analysis [53]. In our results, grades 2 and 3 tumors constituted about 75% of the cases. Although tumor grade has not been proven as an independent factor for the failure [53], we think it could have contributed to the negative finding in our study. We observed two cases of skin necrosis around the injection site. Local skin irritation or necrosis after MBD injection was reported by other authors [22,55]. The toxic effects are due to the formation of aldehydes and a reduction in oxidation products which initiate inflammatory reactions [56]. Although we were not really sure whether the skin necrosis was caused by MBD or was skin flap necrosis after mastectomy, we decided to lower the injection dose to 2 cc, and we did not observe skin necrosis thereafter. We did not find anaphylactic reactions in our cases. The incidence of allergic reactions following PBD was between 0.06 and 2.7% [56]. Although anaphylactic reactions following MBD injection are very rare, several related serious effects have been reported after intrauterine injection [57][58][59][60], and pulmonary edema has also been reported after breast cancer SNB in two series [61,62]. Although MBD can be used safely for lymphatic mapping because allergic reactions are very rare, we suggest that the operating team be aware of and prepared for the potential anaphylactic reactions of MBD that could happen. This study had several limitations. First, we only included clinically node negative patients, but we did not perform ALN biopsy if the axillary ultrasound found suspicious lymph nodes. Ultrasound-guided axillary lymph node biopsy will select patients with true negative lymph nodes before surgery. Second, blue nodes or non-blue nodes with lymphatic blue channels were the only criteria for SNs. We did not try to find the non-blue suspicious nodes as SNs. These could have reduced our NPV results, especially in cases with high grade and bigger tumor size that could have diverted MBD into false SNs. Conclusions This study has shown that SNB in breast cancer can be performed with 1% MBD alone. It can be done in clinical settings with limited access to the standard combination technique or when PBD is not available. The important factors that should be considered are the following: first, in cases with high grade or bigger tumors, surgeons must not be satisfied when they find only the blue nodes. Non-blue suspicious lymph nodes must be searched for in order to reduce false negative results. Second, a better understanding of the SN anatomic location in the axilla is the key point to increase the identification rate when applying the MBD-alone technique.
2018-03-23T21:15:53.086Z
2017-02-07T00:00:00.000
{ "year": 2017, "sha1": "02774a04389f6133e3ff71b18fe793f0efe0216b", "oa_license": "CCBY", "oa_url": "https://wjso.biomedcentral.com/track/pdf/10.1186/s12957-017-1113-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "02774a04389f6133e3ff71b18fe793f0efe0216b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
212552523
pes2o/s2orc
v3-fos-license
Persistent sodium current blockers can suppress seizures caused by loss of low‐threshold D‐type potassium currents: Predictions from an in silico study of Kv1 channel disorders Abstract Objective Ion channels belonging to subfamily A of voltage‐gated potassium channels (Kv1) are highly expressed on axons, where they play a key role in determining resting membrane potential, in shaping action potentials, and in modulating action potential frequency during repetitive neuronal firing. We aimed to study the genesis of seizures caused by mutations affecting Kv1 channels and searched for potential therapeutic targets. Methods We used a novel in silico model, the laminar cortex model (LCM), to examine changes in neuronal excitability and network dynamics associated with loss‐of‐function mutations in Kv1 channels. The LCM simulates the activities of a network of tens of thousands of interconnected neurons and incorporates the kinetics of 11 types of ion channel and three classes of neurotransmitter receptor. Changes in two types of potassium currents conducted by Kv1 channels were examined: slowly inactivating D‐type currents and rapidly inactivating A‐type currents. Effects on neuronal firing rate, action potential shape, and neuronal oscillation state were evaluated. A systematic parameter scan was performed to identify parameter changes that can reverse the effects of the changes. Results Reduced axonal D‐type currents led to lower firing threshold and widened action potentials, both lowering the seizure threshold. Two potential therapeutic targets for treating seizures caused by loss‐of‐function changes in Kv1 channels were identified: persistent sodium channels and NMDA receptors. Blocking persistent sodium channels restored the firing threshold and reduced action potential width. NMDA receptor antagonists reduced excitatory postsynaptic currents from excessive glutamate release related to widened action potentials. Significance Riluzole reduces persistent sodium currents and excitatory postsynaptic currents from NMDA receptor activation. Our results suggest that this FDA‐approved drug can be repurposed to treat epilepsies caused by mutations affecting axonal Kv1 channels. | INTRODUCTION Voltage-gated potassium (K v ) channels are highly expressed in the brain. 1 They limit neuronal excitability by contributing to membrane repolarization and hyperpolarization. K v 1 channels are an important subgroup of the K v family. They are primarily expressed on axons, 2 where they are responsible for determining resting membrane potentials, shaping action potentials, and modulating action potential (AP) frequency during repetitive neuronal firing. 3 Mutations in genes encoding K v 1 channels and related proteins cause several different epilepsy phenotypes. These genes include LGI1, KCNA1, and KCNA2, which encode the leucine-rich gliomainactivated 1 protein, K v 1.1 and K v 1.2 channels, respectively. LGI1 mutations are associated with autosomal dominant temporal lobe epilepsy, KCNA1 mutations can cause episodic ataxia 1, usually associated with seizures, and both KCNA1 and KCNA2 mutations have been associated with epileptic encephalopathy. Associated epilepsy phenotypes can be refractory to existing antiepileptic medications, often with devastating sequelae. LGI1 encodes a protein that regulates the expression and function of K v 1 channels and AMPA receptors. [4][5][6][7][8][9][10] In LGI1 knockout models, the expression of K v 1.1 and K v 1.2 channels is reduced by more than 50%. 
5 Depletion of leucine-rich glioma-inactivated 1 protein also increases the release of glutamate 10,11 and significantly reduces the expression of AMPA receptors. 6,8,9 These changes have mixed effects on the excitability of neurons, and the mechanisms by which LGI1 mutations cause epilepsy remain elusive. K v 1.1 and K v 1.2 potassium channels activate rapidly at relatively low voltage (<−40 mV). 11 Most of these channels inactivate slowly and contribute to long-lasting D-type currents. However, when they co-assemble with K v 1.4 or auxiliary K v β1 subunits, they display rapid inactivation, contributing to transient A-type currents. Hence, loss of K v 1.1 or K v 1.2 channels reduces both D-type and A-type currents. In the present paper, we decided to study seizure genesis in epilepsies associated with loss-of-function mutations in K v 1 channels using computer simulations based on the laminar cortex model (LCM). 12,13 The LCM is a computational framework designed to simulate the activities of a thalamocortical network comprising tens of thousands of interconnected neurons. The model incorporates a realistic synaptic connection map, thalamocortical architecture, and 11 neuron types, with distinct action potential firing behaviors, into an integrated simulation framework. Neuron behaviors incorporate the kinetics of 11 types of ion channel as well as short-term synaptic plasticity. These features allow us to model the effects of changes in ion channel properties associated with gene defects realistically. We use the LCM to examine the effects of KCNA1, KCNA2, and LGI1 mutations on neuronal excitability and network dynamics. To search for potential therapeutic targets, we performed a systematic parameter scan to identify those that can be tuned to reverse the effects of the gene mutations.

Keywords: genetic epilepsy, KCNA, LGI1, voltage-gated potassium channel

Key Points
• Reductions in axonal D-type currents and not in axonal A-type currents led to a lower firing threshold and wider action potentials.
• A 25%-50% reduction in persistent sodium currents can compensate for the neuronal changes caused by a 50% reduction in K v 1 channels.
• Seizure threshold decreases by >60% when D-type currents are halved; widening of action potentials lowered seizure threshold by ~30%.
• Riluzole can reduce persistent sodium currents by ~25%, which can restore the lowered seizure thresholds caused by K v 1 channel loss.

PROCEDURES In this section, we briefly introduce the architecture of the LCM and outline the parameters used to describe ion channel kinetics. | Ion channel kinetics In the LCM, a neuron consists of several connected segments, which are modeled as a small cylindrical compartment with a set of ion channels (see Figure 1). The membrane potential of a segment is driven by ion channel currents and postsynaptic currents, stated as

c m,i dV m,i /dt = −Σ IC g IC (V m,i − E A ) − Σ sy g sy (V m,i − E sy ) + Σ j g ij (V m,j − V m,i ), (1)

where V m,i and V m,j are the membrane potentials of the segments i and j, respectively; c m,i = C m A i is the total membrane capacitance for a segment with surface area A i and specific membrane capacitance C m , and is set to 0.9 μF/cm 2 ; 14 g IC is the total conductance of ion channel IC; E A is the reversal potential of the corresponding ion; g sy and E sy are the total conductance and reversal potential of synapse sy, respectively; and g ij is the intracellular conductance between segment i and j. The last summation in the right hand of Equation (1) is performed over all segments that are connected to segment i. 
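To make the per-segment current balance of Equation (1) concrete, the short C++ sketch below evaluates its right-hand side for one segment: ionic, synaptic, and inter-segment currents all drive the compartment capacitance. The channel list, conductance values, reversal potentials, and capacitance used here are illustrative placeholders, not the LCM's calibrated parameters, and the activation/inactivation factors are assumed to be folded into the conductances.

```cpp
// Minimal sketch of the per-segment current balance in Equation (1):
// c_m dV/dt = -sum_IC g_IC (V - E_IC) - sum_sy g_sy (V - E_sy)
//             + sum_j g_ij (V_j - V).
// All numbers below are illustrative placeholders, not calibrated LCM values.
#include <vector>
#include <cstdio>

struct Channel  { double g;    double E;       }; // conductance (uS), reversal (mV)
struct Coupling { double g_ij; double V_other; }; // inter-segment conductance, neighbour Vm

double dVdt(double Vm, double c_m,
            const std::vector<Channel>& channels,
            const std::vector<Channel>& synapses,
            const std::vector<Coupling>& neighbours) {
    double I = 0.0;                                 // net current into the segment (nA)
    for (const auto& ch : channels)   I -= ch.g * (Vm - ch.E);
    for (const auto& sy : synapses)   I -= sy.g * (Vm - sy.E);
    for (const auto& nb : neighbours) I += nb.g_ij * (nb.V_other - Vm);
    return I / c_m;                                 // mV per ms if units are consistent
}

int main() {
    std::vector<Channel> channels = {
        {0.01, -70.0},   // leak
        {0.50,  50.0},   // transient Na (activation/inactivation folded into g)
        {0.20, -90.0},   // D-type K
    };
    std::vector<Channel>  synapses   = { {0.02, 0.0} };     // AMPA-like input
    std::vector<Coupling> neighbours = { {0.05, -60.0} };   // one connected segment

    double Vm = -65.0, c_m = 0.1;                   // mV, nF
    std::printf("dV/dt = %.3f mV/ms\n", dVdt(Vm, c_m, channels, synapses, neighbours));
    return 0;
}
```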
The LCM simulates one type of leaking currents and eleven types of voltage-gated ion channel for three ion species (sodium, potassium, and calcium). 15 The kinetics of the ion channels are modeled using the Hodgkin-Huxley method. The currents passing through an ion channel are determined by the conductance

g = ḡ m n h G, (2)

where g is the temporally varying conductance, ḡ is its peak conductance; 0 ≤ m, h ≤ 1 are the activation and inactivation probabilities, respectively, with h = 1 for non-inactivating ion channels; n = 1, 2, 3, or 4 is the power of the activation probability; and 0 ≤ G ≤ 1 is the gating probability, dependent on mechanisms other than membrane potential, and is set to 1 for ion channels that are only activated and inactivated by membrane potentials. The activation and inactivation probabilities are voltage-dependent, and follow the Hodgkin-Huxley first-order differential equations

dm/dt = (m ∞ − m)/τ m , dh/dt = (h ∞ − h)/τ h , (3)

where m ∞ and h ∞ are steady-state values, and τ m and τ h are time constants. Ion channels incorporated in the LCM and the notation for their conductance are listed below: 1. Leaking conductance (g L , passive); 2. Sodium currents: (a) fast activating and inactivating transient conductance (g NaT ); (b) slowly activating and non-inactivating persistent conductance (g NaP ); 3. Potassium currents: (a) low-voltage and fast-activating D-type conductance (g KD ); (b) fast activating and inactivating A-type transient conductance (g KA ); (c) Kv2 conductance by Kv2-like channels (g Kv2 ); (d) Kv3.1 conductance by Kv3.1-like channels (g Kv3_1 ); (e) muscarinic sensitive M-type conductance (g KM ); (f) intracellular calcium-dependent conductance (g SK ); 4. Calcium currents: (a) low-threshold transient inactivating conductance (g CaLVA ); (b) high-threshold non-inactivating conductance (g CaHVA ); 5. Non-selective anomalous rectifier (AR) conductance by the hyperpolarization-activated cyclic nucleotide-gated channels (g AR ). The voltage dependence of the activation and inactivation probabilities is adopted from the work of Hay et al 16 and listed in the Supplementary Information ( Figure S1 and Data S1). An iteration equation of membrane potentials is drawn from Equation (1):

V m,i (t n + Δt n ) = V ss + [V m,i (t n ) − V ss ] exp(−Δt n /τ M ), (4)

where t n is the time at iteration step n, ∆t n is the size of the iteration step, and V ss and τ M are the steady value and time constant of the membrane potential, with values given by

τ M = c m,i / (Σ IC g IC + Σ sy g sy + Σ j g ij ), V ss = (Σ IC g IC E A + Σ sy g sy E sy + Σ j g ij V m,j ) / (Σ IC g IC + Σ sy g sy + Σ j g ij ). (5)

In the LCM, Equations (4) and (5) are used to update membrane potentials of segments repetitively in discrete time steps. The LCM adopts variable time steps, and the size of an iteration step depends on the second-order derivative of membrane potential, Δt = √(0.01/2V″ m ). The time step is capped within 0.02 and 0.1 ms. For calcium-dependent potassium currents (g K(Ca) and g K(AHP) ), the intracellular calcium concentration is modeled using

d[Ca 2+ ] i /dt = γ i Ca − [Ca 2+ ] i /τ Cai , (6)

where i Ca is the calcium current, γ is a coefficient characterizing the concentration change caused by the currents, set to 0.002 μmol/L·cm 2 /(ms·μA), 17 and τ Cai is the ion concentration recovery time constant, set to 80 ms. 17 | Synaptic transmission and short-term plasticity Short-term synaptic plasticity of the PSC is incorporated into the model. Three types of neurotransmitter receptors are simulated: AMPA, NMDA, and GABAA. 
Postsynaptic currents triggered by a spike are given by: where N sy is the number of synapses, E sy is the reversal potential, set to 0 mV for AMPA and NMDA receptors and to −80 mV for GABAA receptors, R(t) is the time course of postsynaptic currents, modeled using a bi-exponential function (see Du et al 12 ), and g sy is the temporally varied conductance of the receptors, determined using: where ḡ sy is the peak conductance of a synapse, n sy is the occupancy of the neurotransmitter pool in the presynaptic terminals and has a value between 0 and 1, p sy is the portion of the neurotransmitter released upon the arrival of presynaptic spikes, and f(V m,post ) is a factor describing the PSC dependency on the membrane potential of the postsynaptic neurons (V m,post ; see below). Occupancy of the neurotransmitter pool n sy is reduced by presynaptic spikes and recovers with time and is described by: where τ sy is the time constant characterizing the neurotransmitter pool recovery speed, δ(t − t j ) is the delta function, which is 1 when t = t j and 0 otherwise, and t j is the arrival time of presynaptic spikes. The conductance of AMPA and GABAA receptors is assumed to be dependent on postsynaptic membrane potential V m,post , (ie, f(V m,post ) in Equation (7)), and the conductance of NMDA receptors has a sigmoidal relationship with V m,post , that is

f(V m,post ) = 1/{1 + exp[−(V m,post − V NMDA )/k NMDA ]}, (10)

with V NMDA = −20.53 mV and k NMDA = 16.13 mV. 18 | Local field potential computations Local field potentials at the center of the stimulated cortical area are computed using

LFP = (1/4πσ) Σ i I i /r i , (11)

where σ is the conductivity of the cerebrospinal fluid, which is set to 1.56 S/m, 19 I i are the total currents generated by a segment including leaking currents, ionic channel currents, and postsynaptic currents, r i is the distance between the local field potentials (LFP) measurement location and the segment, and the summation runs over all the segments of all neurons in the model. The LFPs computed using Equation (11) are dominated by the activities of a small number of neurons around the electrode. To measure the overall network activity, we manually set r i to 100 µm when it is smaller than 100 µm. | Computer simulation The simulation program was written using the C++ language and compiled with the Intel C++ compiler (http://software.intel.com/intel-compilers/, version 19.03). The program was compiled and executed on the Tinaroo computing facilities provided by the Research Computing Center at the University of Queensland. OpenMP (http://www.openmp.org), a shared-memory parallel programming library, was used to parallelize the code to speed up program execution. The authors wish to provide the program for the purpose of validating the results reported here. 
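As a worked illustration of the synaptic scheme described in this section, the C++ sketch below depresses the presynaptic neurotransmitter pool at each spike, lets it recover exponentially, and applies the sigmoidal voltage factor quoted for NMDA receptors. The functional forms are the standard ones implied by the description and should be read as assumptions of this sketch; the recovery constant, release fraction, peak conductance, and postsynaptic potential are placeholders, while V_NMDA and k_NMDA use the values quoted above.

```cpp
// Sketch of the short-term synaptic depression and NMDA voltage factor
// described above. tau_sy, p_sy, g_peak, and the postsynaptic potential are
// placeholders; V_NMDA and k_NMDA use the values quoted in the text.
#include <cmath>
#include <cstddef>
#include <vector>
#include <cstdio>

// Sigmoidal dependence of NMDA conductance on postsynaptic potential.
double nmda_factor(double Vm_post) {
    const double V_NMDA = -20.53;   // mV (quoted above)
    const double k_NMDA = 16.13;    // mV (quoted above)
    return 1.0 / (1.0 + std::exp(-(Vm_post - V_NMDA) / k_NMDA));
}

int main() {
    const double tau_sy = 400.0;    // neurotransmitter pool recovery (ms), placeholder
    const double p_sy   = 0.3;      // fraction released per spike, placeholder
    const double g_peak = 1.0;      // peak synaptic conductance (nS), placeholder
    const double dt     = 1.0;      // ms

    std::vector<double> spike_times = {10.0, 20.0, 30.0, 200.0};
    double n_sy = 1.0;              // pool occupancy, 0..1
    std::size_t next_spike = 0;

    for (double t = 0.0; t <= 250.0; t += dt) {
        n_sy += dt * (1.0 - n_sy) / tau_sy;                        // recovery toward 1
        if (next_spike < spike_times.size() && t >= spike_times[next_spike]) {
            // Conductance at this spike: peak * occupancy * release fraction * voltage factor.
            double g_sy = g_peak * n_sy * p_sy * nmda_factor(-40.0);
            std::printf("spike at %6.1f ms: occupancy %.3f, g_sy %.4f nS\n", t, n_sy, g_sy);
            n_sy -= p_sy * n_sy;                                   // depletion by the spike
            ++next_spike;
        }
    }
    return 0;
}
```

Closely spaced spikes produce progressively smaller conductances (depression), and the pool recovers before the late spike at 200 ms, which is the qualitative behavior the short-term plasticity rule is meant to capture.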
The ion channel activation and inactivation information for currents other than A-type currents were adopted from the publications of the Blue Brain Project. 16,20,21 We distinguish D-type currents conducted by K v 1 from those conducted by K v 2 and K v 3 channels, because D-type K v 1 channels activate at much lower voltage (~−43 mV) than K v 2 or K v 3 channels (~−20 and 18 mV, respectively) and inactivate at lower voltage (~−67 mV) than K v 2 channels (~−58 mV). Two types of A-type currents were modeled: axonal A-type currents conducted by rapidly inactivating K v 1 channels (I KA1 ), and somatodendritic A-type currents conducted by K v 4 channels (I KA2 ). Channel activation and inactivation data of the two A-type currents were drawn from the work of Roeper et al 22 and Mendonca et al 23 I KA1 activates at a higher voltage than I KA2 , and unlike I KA2 , I KA1 has two inactivation processes with time constants around 8 and 40 ms 22 The activation and inactivation thresholds and time constants of the ion channels are shown in Figure 2A, and additional details are provided in Supplementary Information ( Figure S1 and Data S1). A weighted segment model was used to reduce the computational complexity of iteratively updating membrane potentials. A segment in the LCM incorporates most of the features of the compartment model implemented in the NEURON platform, 24 including passive electrical properties, ion channel kinetics, and inter-segment conductance. Two additional features of each segment are a dendrite field and a weight factor. The dendrite field is a cylindrical space in which synapses are distributed around the segment. We assumed that the dendrites are purely passive, allowing their effects on postsynaptic currents to be modeled by an exponential decay function. 25 A segment may connect to several segments with identical biophysical properties. To avoid simulating multiple identical segments, we introduced a weight factor to the segment (see Figure 1A). This controls the conductances between segments so that a segment with a weight of n is equivalent to n identical segments when connecting to another segment (see Figure 1A and Supplementary Information Figure S1 and Data S1). The simplified neuron shapes used in the LCM are shown in Figure 1D. Cortical neurons display several firing patterns, such as regular-spiking (RS), fast-spiking (FS), and intrinsic bursting (IB). To mimic these firing patterns, we configured each neuron class with a range of ion channel conductances and stimulated it with a 500-ms current injection into the soma. We tested a series of current amplitudes from 0 to 1000 pA to determine the relationship between neuronal firing rate and current amplitude (ie, F-I curve). We measured five quantities for each test: (a) the slope of the F-I curve, determined using a linear regression; (b) the firing rheobase, the minimum current required to elicit a spike; (c) inter-spike intervals (ISI), (d) AP height and (e) AP width at −20 mV. Typical firing behaviors and F-I relations of neurons are shown in Figure 2B,C. Firing behaviors aligned with experimental observations. 26-28 | Impact of K v 1 loss on neuronal excitability We first examined how the loss of K v 1.1 and K v 1.2 channels affected neuronal excitability. We considered two changes, decreases in D-type conductance (g KD ) and decreases in axonal A-type conductance (g KA1 ). 
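As a concrete illustration of the F-I characterization described above, the following sketch computes the F-I slope by linear regression, the rheobase as the smallest injected current eliciting at least one spike, and inter-spike intervals from simulated voltage traces. The threshold-crossing spike detector and all variable names are our assumptions, not part of the published model; only the 500-ms stimulus duration follows the text.

import numpy as np

def spike_times(v_mv, dt_ms, thresh_mv=0.0):
    """Return times (ms) of upward threshold crossings of a membrane-potential trace."""
    above = v_mv >= thresh_mv
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return crossings * dt_ms

def fi_metrics(amplitudes_pa, traces_mv, dt_ms, stim_ms=500.0):
    """Return F-I slope (Hz/pA), rheobase (pA) and ISIs for each current step."""
    rates, isis = [], {}
    for amp, v in zip(amplitudes_pa, traces_mv):
        t_sp = spike_times(np.asarray(v), dt_ms)
        rates.append(len(t_sp) / (stim_ms / 1000.0))   # spikes per second
        isis[amp] = np.diff(t_sp)                      # inter-spike intervals, ms
    rates = np.asarray(rates, dtype=float)
    fired = rates > 0
    rheobase = float(np.min(np.asarray(amplitudes_pa)[fired])) if fired.any() else np.nan
    slope = np.polyfit(amplitudes_pa, rates, 1)[0] if len(rates) > 1 else np.nan
    return slope, rheobase, isis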
Effects on neuronal excitability were tested in the absence of synaptic inputs in five neuron types: pyramidal neurons in layer II/III (P2/3), IV (P4), V (P5), and VI (P6), and spiny stellate neurons in layer IV (SS4). The results are shown in Figure 3. Decreases in axonal A-type conductance, g KA1 , did not significantly affect the excitability of any of the tested neurons. They only slightly changed the firing rates and action potential widths when I KD was extremely low (<70% of its normal value). However, reducing g KD dramatically increased the excitability of all neurons, and g KD modulated the F-I slopes of the neurons. In all neuron types except P5, F-I slopes increased by approximately 10%-20% when D-type conductance was halved, and further reductions decreased the F-I slopes in pyramidal neurons, but not in SS4 neurons. Reducing I KD also lowered firing rheobases. These decreased by 13% in the P2/3, P4, and P6 neurons and by 33% in the SS4 neuron when g KD was halved. The P5 neuron, which is configured to have a low firing rheobase (<10 pA), fired spontaneously with reduced g KD . The spontaneous firing rate (f 0 ) increased as the conductance decreased. Reducing g KD also widened APs. A 50% reduction in g KD increased AP width by about 20% in all neurons. When g KD was gradually decreased, the firing behaviors of the pyramidal neurons did not change in a continuous fashion. Small to medium reductions in g KD (< 60%) displayed a dominant effect of lowered firing rheobase and increased AP width, whereas increases in firing rate were relatively small. Large g KD reductions displayed more significant effects on firing rates than on firing rheobases or AP width.
FIGURE 3 Characteristics of neuronal firing with reduced D-type (K D ) and axonal A-type (K A1 ) potassium currents. Shown are the slopes of F-I curves (A), firing rheobases (B), action potential widths (C), and firing rates with 0.5 nA of currents injected into the soma (D) with reduced peak conductance for D-type (g KD ) and axonal A-type (g KA1 ) potassium currents, and the F-I relationship (E) with reduced g KD produced in the five excitatory cortical neuron types: pyramidal neurons in layer II/III (P2/3, the first column), IV (P4, the second column), V (P5, the fourth column), and VI (P6, the fifth column), and spiny stellate neurons in layer IV (SS4, the third column). The conductance is shown as percentages of the corresponding "normal" values. The inserted figure in (B) displays the spontaneous firing rate for the P5 neuron, and the inserted figures in (E) are magnified views of the corresponding indicated regions.
Suppression of seizures caused by the loss of K v 1.1 and K v 1.2 channels requires the effects of reduced g KD to be counteracted. To search for parameters with the potential to reverse the effects of reduced g KD , we systematically varied the conductance of ion channels while decreasing g KD . Reducing persistent sodium conductance (g NaP ) compensated for the effects of reduced g KD . Figure 4 shows neuron firing characteristics when both g KD and g NaP are reduced. F-I slopes decreased for g NaP reductions up to 50%, with further reductions increasing the slopes (refer to Figure 4B). Reduced g NaP also dramatically increased firing rheobases and significantly reduced the width of APs in all neurons. More precisely, AP widths increased with small I NaP reductions and decreased for large I NaP reductions.
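The kind of compensation read-out summarized in Figure 4 can be illustrated with a small interpolation sketch. The response table below is entirely made up for illustration (the paper's values are in Figure 4); only the qualitative trends, rheobase falling as g KD falls and rising as g NaP falls, follow the text, and the function name is ours.

import numpy as np

g_kd_scale  = np.array([1.00, 0.75, 0.50, 0.25])   # fraction of "normal" g_KD
g_nap_scale = np.array([1.00, 0.75, 0.50, 0.25])   # fraction of "normal" g_NaP
rheobase_pa = np.array([[120, 150, 185, 230],      # toy values only: rows = g_KD,
                        [100, 130, 165, 210],      # columns = g_NaP, entries = pA
                        [ 80, 110, 150, 195],
                        [ 55,  90, 130, 175]])

def compensating_gnap(target_gkd, baseline_rheobase):
    """g_NaP scaling that restores the baseline rheobase at a given g_KD scaling.
    Assumes rheobase rises monotonically as g_NaP is reduced."""
    # interpolate the row for the requested g_KD (scales are descending, so flip them)
    row = np.array([np.interp(target_gkd, g_kd_scale[::-1], rheobase_pa[::-1, j])
                    for j in range(len(g_nap_scale))])
    # invert rheobase(g_NaP) to find where it crosses the baseline value
    return float(np.interp(baseline_rheobase, row, g_nap_scale))

# e.g. halved g_KD: what g_NaP scaling restores the untouched-network rheobase?
print(compensating_gnap(0.5, rheobase_pa[0, 0]))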
We therefore estimated the reduction in g NaP required to restore the neuronal changes caused by reductions in g KD . For example, to compensate for a 50% reduction in g KD , less than a 25% reduction in g NaP was sufficient to restore the F-I slopes to baseline values, and around a 25% reduction was required to restore the rheobases. A further 25% reduction was necessary to restore AP widths in most neuron types except SS4. The required reductions in g NaP varied significantly across neuron types. They were much higher in SS4 neurons, which have the highest density of D-type channels, than in other neuron types.
FIGURE 4 Characteristics of neuronal firing with reduced D-type potassium currents (K D ) and persistent sodium currents (Na P ). Shown are the firing rheobases (A), the F-I slopes (B), and action potential width (C) obtained with reduced peak conductance for D-type (g KD ) potassium currents and persistent sodium currents (g NaP ) in the three excitatory cortical neuron types: pyramidal neurons in layer II/III (P2/3), spiny stellate neurons in layer IV (SS4), pyramidal neurons in layer V (P5). The conductance is shown as percentages of the corresponding "normal" values.
| Impacts of K v 1 loss on network dynamics
We incorporated the neuronal changes related to reduced D-type current into the LCM to examine effects on neuronal network dynamics. We first reduced the peak D-type conductance in the axon initial segment of all neurons by 25%, 50%, and 75%. The LCM automatically incorporates the changes in F-I slope, firing rheobase, and firing rate. To simulate effects of widened APs on neurotransmitter release, we also increased the value of neurotransmitter release probability (p sy ) in excitatory synapses following the arrival of an AP from 0.6 to 0.8 and 1. The LCM was then used to simulate 20 000 neurons in a 0.5 × 0.5 mm 2 cortical area, reflecting a neuron density similar to that of the cerebral cortex. 29 Local field potentials in the center of the region were generated using the LCM and used to quantify neuronal oscillations. We defined seizure-like activity as LFPs with a mean power spectrum density (PSD) in the 2-20 Hz frequency band exceeding 2.0 µV/Hz; an example is shown in Figure 5B. We measured seizure threshold by systematically varying the conductance of inhibitory synapses. Based on estimates from previous experiments, we set the "normal" conductance to 0.5 nS for AMPA receptors, 30 to 0.4 nS for NMDA receptors, 30 and to 0.8 nS for GABAA receptors. 31 The seizure threshold was defined as the amount of reduction in GABAA receptor conductance (g GABAA ) required to induce seizure-like activity. For a neuronal network with "normal" ion channel and receptor function, seizure-like activity was observed when g GABAA was reduced to 0.38 nS, that is, the seizure threshold was 0.42 nS. Reductions in g KD significantly lowered the seizure threshold to 0.36 nS for a 25% reduction, to 0.14 nS for a 50% reduction, and to almost zero for a 75% reduction. Increases in p sy also lowered the seizure threshold, but only modestly. The seizure threshold decreased from 0.42 nS to 0.36 nS when p sy was increased from 0.6 to 0.8, and to 0.28 nS when p sy was set to 1. We additionally examined two mechanisms that are potentially capable of compensating for the effects of reduced D-type currents: blocking NMDA receptors (g NMDA ) or persistent sodium conductance (g NaP ). We tested the effects of reducing g NMDA and g NaP in a neuronal network with g KD halved.
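A minimal sketch of the seizure-like-activity criterion and threshold scan just described is given below, assuming the simulated LFP trace is available as a NumPy array. Welch's method for the PSD, the run_model simulation wrapper, and the unit handling are our assumptions; only the 2-20 Hz band, the 2.0 threshold, and the 0.8 nS "normal" GABAA conductance come from the text.

import numpy as np
from scipy.signal import welch

def is_seizure_like(lfp, fs_hz, threshold=2.0):
    """True if the mean PSD in the 2-20 Hz band exceeds the threshold."""
    f, pxx = welch(lfp, fs=fs_hz, nperseg=min(len(lfp), 4 * int(fs_hz)))
    band = (f >= 2.0) & (f <= 20.0)
    return float(np.mean(pxx[band])) > threshold

def seizure_threshold(g_gabaa_values, run_model, g_normal=0.8):
    """Scan descending GABA-A conductances; return the reduction from the 'normal'
    value at which seizure-like LFPs first appear."""
    for g in sorted(g_gabaa_values, reverse=True):
        lfp, fs = run_model(g_gabaa=g)      # user-supplied wrapper around the simulation
        if is_seizure_like(lfp, fs):
            return g_normal - g
    return np.nan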
We found that the seizure threshold increased by only 0.14 nS when g NMDA was decreased from 0.4 to 0.1 nS, but blocking persistent sodium currents significantly increased the seizure threshold. A 25% decrease in g NaP was enough to restore the seizure threshold to the "normal" value (~0.4 nS), and further g NaP reductions continuously increased the seizure threshold.
FIGURE 5 Local field potentials (LFPs) and seizure threshold under changes associated with K v 1 and LGI1 mutations and reductions in g NaP . Shown are examples of "normal" and seizure-like LFPs and their frequency power spectrum density (A and B, scale horizontal bar: 500 ms, vertical bars: 0.1 µV), the GABA thresholds for reduced D-type conductance (g KD , C), increased excitatory neurotransmitter release ratio (p E , D), reduced conductance of NMDA receptors (g NMDA , E), and persistent sodium conductance (g NaP , F). The conductance is shown as percentages of the corresponding "normal" values. The + sign indicates a seizure threshold higher than the maximum tested value (0.8 nS), and the "−" sign indicates a seizure threshold close to 0.
| DISCUSSION
We studied seizure generation in epilepsies associated with mutations affecting K v 1 channels using the LCM simulation platform. K v 1.1 and K v 1.2 are the most abundant potassium channels in the axon. They activate rapidly at a relatively low voltage compared to other potassium channels, making them an important determinant of firing thresholds in neurons. Our simulations suggest that decreases in D-type currents lead to lower firing rheobase, higher firing rate, and wider APs. Decreases in D-type currents are therefore the most important contributing factor to seizure generation in epilepsies associated with loss of function in K v 1.1 and K v 1.2 channels. Previous studies on synaptic function in LGI1 knockout mice have yielded conflicting results, with both enhanced excitatory transmission and reduced AMPA receptor function being reported. 6,[8][9][10][11] Our simulations suggest that enhanced excitatory transmission is likely to be the consequence of a higher level of neurotransmitter release caused by widened APs. Though the loss of the leucine-rich glioma-inactivated 1 protein is associated with reduced expression of AMPA receptor GluA1 subunits, resulting in smaller postsynaptic currents, 4 this effect is outweighed by increases in postsynaptic currents via NMDA receptors. Because inactivation of NMDA receptors is much slower than that of AMPA receptors, the net effect of LGI1 loss-of-function mutations is likely to be to lengthen PSCs. While the effects of manipulations in D-type currents have been studied intensively in many neuron types using the antagonist α-dendrotoxin, K v 1-related A-type currents have been less extensively studied. Previous experiments have found that K v 1.1 and K v 1.4 channels may conduct A-type currents when co-assembling with auxiliary subunits K v β1 or K v β2. 5,32 Although our simulations suggest that changes in A-type currents do not significantly affect neuronal excitability, further investigation is required to understand their role in neurons. Based on our findings, we can identify two potential therapeutic targets for treating seizures caused by loss of K v 1.1 and K v 1.2 channels: persistent sodium channels and NMDA receptors.
Blocking persistent sodium channels can reverse the effects of diminished D-type currents and restore the firing rheobase and AP width, and NMDA receptor antagonism reduces the changes in excitatory postsynaptic currents due to increased glutamate release. In our model, both these effects can suppress seizures caused by loss of K v 1.1 and K v 1.2 channel functions. We propose that riluzole could be repurposed to treat epilepsy caused by LGI1, KCNA1, or KCNA2 loss-of-function mutations. By blocking persistent sodium currents 33,34 but not transient sodium currents, riluzole does not have the same side effect profile as other antiepileptic drugs acting on the sodium channel, such as carbamazepine. Besides, riluzole may increase D-type currents by dramatically slowing down the inactivation of K v 1.4 channels. 35,36 At around 100 µmol/L concentrations, the inactivation time constant of K v 1.4 channels is prolonged to more than 680 ms and converts the A-type currents to D-type currents. Furthermore, riluzole also blocks NMDA receptors noncompetitively 37 and increases Ca 2+ -activated potassium currents. 38 The four effects are all desirable for treating epilepsy associated with loss of K v 1.1 and K v 1.2 channels. Our simulation suggests that a 25% reduction in persistent sodium currents is sufficient to restore the seizure threshold of neuronal network with halved D-type currents. A previous experiment found 10 µmol/L of riluzole almost completely blocked the persistent sodium currents, and a 25% reduction requires about 1 µmol/L of riluzole, 35 which corresponds to the plasma concentrations of riluzole achieved at the suggested therapeutic dose (2 × 50 mg/d). 39 Sodium currents in neurons are known to contain many components with distinctive biophysical features. Our simulations modeled both transient and persistent sodium currents. Rapid-activating, rapid-inactivating transient sodium currents account for >90% of the total sodium currents in neurons. They are responsible for the depolarization phase of action potentials and are thus a determinant of firing thresholds and action potential shapes. Rapid-activating, slow-inactivating persistent sodium currents comprise up to 10% of the total sodium currents. They typically activate at subthreshold membrane potentials and enhance repetitive action potential firing and synaptic transmission. 40 Another type of sodium current is the "resurgent current" which is found in ~20, predominantly inhibitory, neuron types. Resurgent currents appear when neurons are repolarized after prolonged depolarization. They are reported to promote spontaneous firing and high-frequency firing of inhibitory neurons and may be a promising antiepileptic therapeutic target. 41,42 A computer model that includes the unique activation features of resurgent currents is still to be developed. Seizures caused by loss of D-type currents may also be treated using activators of voltage-gated potassium channels. A group of compounds shown to enhance potassium currents through K v 1.1 channels by delaying inactivation of the channels may be a potential treatment for K v 1 channel-related epilepsy. 43 Further work investigating specific pharmacological activators or inhibitors of D-type currents that can be potentially tailored for this application is imperative. Activators of M-type currents could also be used to treat seizures. M-type currents, which are conducted by K v 7.2 and K v 7.3 channels, share many characters with D-type currents. 
For example, both activate at a relatively low voltage (around −45 mV); M-type currents are non-inactivating, while D-type currents inactivate only slowly (with a time constant of 1 second); and both are abundant in the axon initial segment. Agents that enhance M-type currents include flupirtine and its analogue retigabine. [44][45][46][47][48] Both negatively shift the activation threshold of K v 7.2 and K v 7.3 channels, 49,50 leading to significant increases in M-type currents during resting and depolarized states. 50
Radiation Hydrodynamical Simulations of the Birth of Intermediate-Mass Black Holes in the First Galaxies
★ E-mail: latifne@gemail.com
The leading contenders for the seeds of z > 6 quasars are direct-collapse black holes (DCBHs) forming in atomically-cooled halos at z ∼ 20. However, the Lyman-Werner (LW) UV backgrounds required to form DCBHs of 10 5 M ⊙ are extreme, about 10 4 J 21 , and may have been rare in the early universe. Here we investigate the formation of intermediate-mass black holes (IMBHs) under moderate LW backgrounds of 100 and 500 J 21 that were much more common at early times. These backgrounds allow halos to grow to a few 10 6 - 10 7 M ⊙ and virial temperatures of nearly 10 4 K before collapsing but do not completely sterilize them of H 2 . Gas collapse then proceeds via Lyα and rapid H 2 cooling at rates that are 10 - 50 times those in normal Pop III star-forming haloes but less than those in purely atomically-cooled haloes. Pop III stars accreting at such rates become blue and hot, and we find that their ionizing UV radiation limits their final masses to 1800 - 2800 M ⊙ , at which they later collapse to IMBHs. Moderate LW backgrounds thus produced IMBHs in far greater numbers than DCBHs in the early universe. DCBHs form when primordial halos grow to masses of 10 7 M ⊙ and virial temperatures of ∼ 10 4 K without having previously formed a star, because they are immersed in strong Lyman-Werner (LW) UV backgrounds that destroy their H 2 (e.g., Latif et al. 2014b; Agarwal & Khochfar 2015; Agarwal et al. 2019) or in supersonic baryon streaming flows that prevent star formation even if H 2 is present (Tseliakhovich & Hirata 2010; Stacy et al. 2011; Greif et al. 2011; Schauer et al. 2017). Temperatures of 10 4 K activate Lyα cooling that triggers rapid baryon collapse at up to 1 M ⊙ yr −1 (Wise et al. 2008; Regan & Haehnelt 2009). Stellar evolution calculations have shown that such flows can create stars with masses of 10 5 M ⊙ that collapse to DCBHs via the general relativistic instability (Hosokawa et al. 2013; Umeda et al. 2016; Woods et al. 2017; Haemmerlé et al. 2018a,b). DCBHs are the leading candidates for the seeds of the first quasars because they form in dense environments in halos that can retain their fuel supply even when it is heated by the BH (Whalen et al. 2004; Alvarez et al. 2009; Whalen & Fryer 2012; Johnson et al. 2013; Smith et al. 2018; Zhu et al. 2020). The accretion rates of 0.1 - 1 M ⊙ yr −1 required to build up 10 5 M ⊙ stars can only be sustained by atomic cooling, not H 2 , and the LW backgrounds required for the complete extinction of H 2 in halos are extreme, as much as a few 10 4 J 21 , where J 21 = 10 −21 erg s −1 cm −2 Hz −1 sr −1 (Sugimura et al. 2014). But there is a growing body of work that suggests that massive stars can form even in the presence of minute amounts of H 2 shielded deep in the cores of halos exposed to more modest LW fluxes. Safranek-Shrader et al. (2012) examined the collapse of a 3 × 10 7 M ⊙ halo in a LW background of 100 J 21 and found that cooling at its center was still governed by H 2 and produced a sink particle with a mass of ∼ 1100 M ⊙ . Latif et al. (2014c) also found that 100 - 10,000 M ⊙ primordial stars could form in halos immersed in moderate LW fluxes.
Until recently, simulations of baryon collapse in atomically-cooled halos ignored radiative feedback from stars at their centers because stellar evolution models predicted that they were likely cool and red, and therefore not strong sources of ionising UV flux capable of halting accretion onto the star (although pressure due to outflowing Lyα cooling radiation could affect flows deep in the core of the halo; Smith et al. 2017). The first models to incorporate stellar feedback found it had little effect on the growth of the star on AU scales over times of a few years (Ardaneh et al. 2018). Chon et al. (2018) and Regan & Downes (2018b) followed the collapse of halos in LW backgrounds of 100 and 1000 J 21 for a few hundred kyr and also found that radiation from stars did not suppress accretion. However, the presence of even small mass fractions of H 2 in halos in LW backgrounds of these magnitudes can reduce infall rates by 1 - 2 orders of magnitude, down to 0.005 - 0.03 M ⊙ yr −1 . Stars growing at the low end of this range have been found to become blue and hot in stellar evolution models, with ionising UV fluxes that could at least partially quench accretion (Haemmerlé et al. 2018a). How such feedback governs the final masses of stars in these backgrounds is not yet known. The original simulations of atomically-cooled halos that proceeded from cosmological initial conditions did not exhibit fragmentation or the formation of dense clumps that could later become multiple massive stars (although see Bromm & Loeb 2003). Later studies at higher resolution revealed that atomically-cooled gas fragmented on AU scales but could only follow the evolution of the clumps for a few tens of years and could not determine if they later became stars or were subsumed by the disc at later times (Becerra et al. 2015, 2018). Regan & Downes (2018a,b) and Chon et al. (2018) studied fragmentation at somewhat lower resolutions out to a few hundred kyr and found that some clumps persisted for long times but could not determine their final fates. Also, Regan & Downes (2018a) simulated a single rare halo of ∼ 10 7 M ⊙ at z = 24.7, while here we explore typical halos forming in the high-z universe. Suazo et al. (2019) followed the collapse of atomically cooled halos at intermediate resolutions in moderate LW backgrounds for ∼ 600 kyr, longer than previous studies but still well short of the collapse of the stars. Inflow rates lasting for the times required to actually form DCBHs have only recently been confirmed to occur in numerical simulations (Regan et al. 2020; Patrick et al. 2020). Although binary and multiple DCBH systems formed in all three studies, which has important implications for the detection of DCBH mergers by the Laser Interferometer Space Antenna (LISA) in coming decades, they did not include radiation from the stars and may not ultimately be self-consistent. Here, we investigate the prospect of massive BH seed formation in typical primordial halos ranging in mass from 1.5 × 10 6 M ⊙ to 2.3 × 10 7 M ⊙ in LW backgrounds of 100 J 21 and 500 J 21 , which were much more common in the primordial universe than those required for complete photodissociation of H 2 . Our simulations are evolved for up to ∼ 900 kyr, approximately three times longer than in comparable studies, and include radiative feedback from the stars coupled to hydrodynamics and primordial gas chemistry. We describe our simulations in Section 2, present our results in Section 3, and conclude in Section 4.
NUMERICAL METHOD Our simulations were performed with the Enzo adaptive mesh refinement (AMR) cosmology code (Bryan et al. 2014). We initialize our runs with cosmological initial conditions generated by MUSIC (Hahn & Abel 2011) at =150 with cosmological parameters taken from Planck 2016 data: Ω M = 0.308, Ω Λ = 0.691, Ω b = 0.0223 and ℎ = 0.677 (Planck Collaboration et al. 2016). Our simulation volume is 1 cMpc ℎ −1 on a side with a top grid resolution of 256 3 and two additional nested grids each with a resolution of 256 3 that span 20% of the top grid. We place the halo of interest at the center of the box and allow up to 16 additional levels of refinement during the runs to achieve resolutions of up to ∼ 300 AU. We split dark matter (DM) particles in this region into 13 child particles, which produces an effective DM resolution of 5 M ⊙ /h. Beginning at = 150, we refine on Jeans length, baryonic overdensity and particle mass resolution, as in and . The Jeans length is resolved by at least 32 cells in our models. We introduce sink particles that represent Pop III stars in cells where the maximum refinement level is reached, typically at densities of ≥ 10 −16 g/cm 3 . Our criteria for sink particle formation is based on SmartStars (Regan & Downes 2018b). A sink particle is allowed to form in a grid cell when meeting the following criteria: I) the grid cell is at the maximum refinement level; (II) the cell density is higher than the Jeans density; (III) the flow is convergent; (IV) the cooling time is shorter than the free-fall time; (V) the cell is at a local minimum of the gravitational potential. It can accrete gas from a radius of 4 cells and merges with the most massive particle within the accretion radius. Our recipe for accretion is based on the mass influx at the accretion sphere and we use the averaged accretion rate over 1 kyr as the actual accretion rate. Stars are treated as red and cool if their accretion rates exceed 0.04 M ⊙ yr −1 and blue and hot if they accrete below this rate. We assign a 10 5 K blackbody spectrum to blue stars M ⊙ (Schaerer 2002) and a 5500 K blackbody spectrum to red stars (Hosokawa et al. 2013), assuming in both cases that their luminosities scale with their masses. Consequently as stars grow in mass they become more luminous and produce strong feedback. Photons from stars are propagated throughout the simulation volume with the MORAY raytracing radiation transport package, which is self-consistently coupled to hydrodynamics and non-equilibrium primordial gas chemistry in Enzo (Wise & Abel 2011). Each star is a point source of both ionising and dissociating radiation whose spectrum is partitioned into five energy bins: 2.0 eV and 12.8 eV, which can destroy H − , H 2 and H + 2 , and three ionising energies, 14.0 eV, 25.0 eV, 200.0 eV. We use SEDOP to populate the last three bins which allows us to compute the optimum number of energy bins required to model radiation above the hydrogen ionisation limit (Mirocha et al. 2012). The energy fractions of 0.3261, 0.1073, 0.3686, 0.1965, 0.0 are used in bins 1-5, respectively which are determined from table 4 of Schaerer (2002). We employ a non-equilibrium primordial chemistry solver (Abel et al. 1997) to evolve the nine primordial species in our runs: H, H + , H − , He, He + , He ++ , H 2 , H + 2 , e − . Our simulations include H 2 cooling, collisional excitation and ionisation cooling by H and He, bremsstrahlung cooling and recombinational cooling. 
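The sink-particle logic described above can be summarized in a short sketch. This is not Enzo or SmartStars code; it only re-expresses the five creation criteria, the 1 kyr averaging of the accretion rate, and the 0.04 M ⊙ yr −1 red/blue switch as standalone functions, with all argument names and units being our own choices.

import numpy as np

def can_form_sink(at_max_refinement, density, jeans_density,
                  div_v, t_cool, t_ff, is_potential_minimum):
    """Boolean version of the five sink-creation checks listed in the text."""
    return (at_max_refinement                 # (I)   cell is at the maximum AMR level
            and density > jeans_density       # (II)  density exceeds the Jeans density
            and div_v < 0.0                   # (III) flow is convergent
            and t_cool < t_ff                 # (IV)  cooling time shorter than free-fall
            and is_potential_minimum)         # (V)   local minimum of the potential

def smoothed_accretion_rate(times_yr, influx_msun_yr, t_now_yr, window_yr=1.0e3):
    """Mass influx through the accretion sphere averaged over the last 1 kyr."""
    times_yr = np.asarray(times_yr)
    recent = (times_yr > t_now_yr - window_yr) & (times_yr <= t_now_yr)
    return float(np.mean(np.asarray(influx_msun_yr)[recent])) if recent.any() else 0.0

def stellar_spectrum_kind(accretion_rate_msun_yr):
    """Red/cool (5500 K) above 0.04 Msun/yr, blue/hot (1e5 K) below, following the text."""
    return "red_5500K" if accretion_rate_msun_yr > 0.04 else "blue_1e5K"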
Uniform LW backgrounds of 100 or 500 J 21 are turned on at = 30 to suppress star formation in minihalos and self-shielding of H 2 is approximated by fits from Wolcott-Green et al. (2011). Such flux is higher than the expected background at this redshift but could be provided by star forming galaxies in the vicinity of the halo. In such scenarios the LW flux becomes almost uniform in the target halo, also see Agarwal et al. (2017). We ignore radiation pressure due to photoion- isations, which could facilitate H II region breakout by imparting momentum to the surrounding gas. Whalen & Norman (2008) examined the impact of radiation pressure at lower densities and found that they amount to at most 20% of the ionised gas pressure. In fact, we performed a test simulation by including radiation pressure and and found the stellar masses to be similar to the case without radiation pressure. We therefore conclude that radiation pressure will not strongly affect our results. Halo properties We have simulated six halos, labelled H1, H2, H3, H4, H5 and H6, whose masses and collapse redshifts are listed in Table 1. Our halos form in different environments with DM overdensities of 10-100 times the cosmic mean and also have a variety of merger histories. We compute the DM overdensity by calculating the mean DM density in a radius that is 10 times the virial radius of the halo and compare it to the cosmic mean in a volume of 1 (Mpc/h) 3 . This allows us to estimate the environment of the halo. Initial Collapse Baryon collapse is suppressed by LW radiation in all six halos until they reach masses of at least 10 6 M ⊙ , and in most cases 10 7 M ⊙ ( files in Figure 1, once collapse begins it is mediated by both Ly and H 2 cooling. At the onset of star formation (SF) atomic cooling dominates down to radii of a few pc, where the gas is at temperatures of ∼ 8000 K due to the dissociation by external UV radiation shown in the Fig. 3, but H 2 cooling, which produces temperatures of 300 -1000 K, dominates at radii less than 1 pc. This can also be seen in the H 2 mass fractions, which can exceed 10 −3 below 1 pc because they are self-shielded from the external background. The H 2 -cooled core of the halo is at higher temperatures than those in normal Pop III star-forming halos, which are typically 200 -300 K, because they are subject to higher mass loading from atomic cooling in the surrounding gas. Overall, H5 has the higher H 2 mass fraction and lower temperature than all halos because of the larger cooling rate. In comparison with H5, H6 has a lower H 2 fraction and higher temperature. The two-phase temperature structure in our halos is due to the LW background, which allows them to grow to larger masses and higher virial temperatures before collapsing but does not completely sterilize them of H 2 . Because their cores are at higher temperatures than those of normal Pop III star-forming halos, H 2 cooling and formation rates, which peak at 1000 -2000 K, are much higher there (O'Shea & Norman 2007). This leads to central accretion rates that are much higher than those in normal minihalos but less than those in isothermal atomically-cooled halos in much higher LW backgrounds. Disc Evolution / Star Formation Discs in our simulations remain stable most of the times with Toomre Q > 1 but occasionally become unstable and lead to star formation in Jeans unstable cells at the minima of local gravitational potential. 
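The Toomre criterion quoted above (discs stable when Q > 1) can be evaluated with a few lines as a rough cross-check. The sketch below approximates the epicyclic frequency by the Keplerian angular velocity of the enclosed mass and assumes a mean molecular weight, so the inputs in the example call are illustrative only, not values taken from the simulations.

import numpy as np

G = 6.674e-8          # cgs
K_B = 1.381e-16       # erg/K
M_H = 1.673e-24       # g
MSUN = 1.989e33       # g
PC = 3.086e18         # cm

def toomre_q(T_K, sigma_g_cm2, m_enc_msun, r_pc, mu=1.22):
    """Q = c_s * kappa / (pi * G * Sigma), with kappa ~ Keplerian Omega."""
    c_s = np.sqrt(K_B * T_K / (mu * M_H))                     # isothermal sound speed
    kappa = np.sqrt(G * m_enc_msun * MSUN / (r_pc * PC)**3)   # ~Keplerian angular velocity
    return c_s * kappa / (np.pi * G * sigma_g_cm2)

print(toomre_q(T_K=1000.0, sigma_g_cm2=1.0, m_enc_msun=3000.0, r_pc=0.2))  # illustrative inputs only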
Overall, we find that multiple stars form in almost all simulations but they get merged on short timescales of a few kyr in agreement with previous studies exploring such conditions (Safranek-Shrader et al. 2012;Latif et al. 2014c). As shown in Figure 2, collapse leads to the formation of thick, rotationally flattened discs with initial radii of ∼ 0.1 pc and masses of a few thousand solar masses. They grow to ∼ 0.2 -0.3 pc in radius by the end of the runs at nearly 1 Myr. These discs are a factor of five smaller in radius than those in isothermal atomically-cooled flows at similar times because of lower infall rates due to less efficient H 2 cooling. At later times the discs develop irregular features created by ionising UV feedback, as we discuss in greater detail below. A single star particle with a mass of a few solar masses forms first at the center of each disc when it reaches densities of ∼ 10 −16 g cm −3 . Although new star particles can form within the accretion radius of this star at times and merge with it a few hundred or thousand years later, it mostly grows by accretion of dense gas. Secondary stars begin to appear by ∼ 250 kyr in H2, H5 and H6, at ∼ 500 kyr in H1 and at ∼ 700 kyr in H4. At the end of the run H1, H4 and H5 host four stars, H2 has three and H6 has two. H3 forms a few low-mass stars but they are subsumed into the central star and only it remains at the end of the run at 900 kyr. Most of the additional stars have masses of a few tens to hundreds of solar masses, as shown in Table 1. Radiative Feedback Because accretion onto the main star begins at rates of a few 10 −3 M ⊙ yr −1 in all six halos (on par with normal Pop III star-forming halos; Latif et al. 2013c;Hosokawa et al. 2016), it is initially treated as a hot, blue source of ionising photons. The recombination times in the vicinity of the stars are about 100 yr therefore the UV flux remains trapped deep in the disc. However, the energy deposition mainly by UV ionizing radiation from the central star creates dumbbell shaped centric outflows which clear the gas from some of the discs, creating the annular structures and cavities in the disks shown in the Fig. 2. Stellar feedback lowers the density in the surroundings (∼10 −20 g cm −3 ) where highly anisotropic H II regions can break out along the lines of sight with lower column densities. The temperatures and ionisation fractions in the HII regions around the central stars are shown in figures 4 and 5, they have temperatures of 2 × 10 4 K and HII fractions very close to 1.0. In H2, H5 & H6 HII regions break out around the central stars at 330 kyrs, 260 kyrs and 400 kyrs, respectively. The I-front blowouts evacuate gas from the vicinity of the main stars in these three halos and terminate their growth. In H3 the HII region forms around the primary star at 120 kyrs and gets detached from the star. In H1 & H4 compact HII regions around the central star get trapped (size of 0.05 pc) and consequently the mass accretion rate onto stars plummets momentarily but HII regions expand around secondary stars at 600 kyrs and 800 kyrs, respectively. In these halos overall energy deposition due to the stellar feedback only declines mass accretion onto the primary stars by an order of magnitude. For the central stars, sizes of the Stromgren radii (R S ) vary from 0.6 pc -3.0 pc due to the anisotropic gas distribution around them and the corresponding Bondi radii (R B ) are 1.7 -2.0 pc. 
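The Strömgren-versus-Bondi comparison used above to decide whether an H II region can break out can be illustrated as follows. The ionising photon rate, ambient density, temperature, and case-B recombination coefficient in the sketch are generic inputs chosen by us, not quantities extracted from the runs.

import numpy as np

G = 6.674e-8; K_B = 1.381e-16; M_H = 1.673e-24; MSUN = 1.989e33; PC = 3.086e18
ALPHA_B = 2.59e-13        # case-B recombination coefficient near 1e4 K, cm^3/s

def stromgren_radius_pc(q_ion_s, n_h_cm3):
    """R_S = (3 Q_ion / (4 pi n_H^2 alpha_B))^(1/3), returned in pc."""
    return (3.0 * q_ion_s / (4.0 * np.pi * n_h_cm3**2 * ALPHA_B))**(1.0 / 3.0) / PC

def bondi_radius_pc(m_star_msun, T_K, mu=1.22):
    """R_B = G M_* / c_s^2 for gas of temperature T_K, returned in pc."""
    c_s2 = K_B * T_K / (mu * M_H)
    return G * m_star_msun * MSUN / c_s2 / PC

def hii_breaks_out(q_ion_s, n_h_cm3, m_star_msun, T_K):
    """Breakout condition along a sight line: Stromgren radius exceeds Bondi radius."""
    return stromgren_radius_pc(q_ion_s, n_h_cm3) > bondi_radius_pc(m_star_msun, T_K)

print(hii_breaks_out(q_ion_s=1e50, n_h_cm3=1e4, m_star_msun=2000.0, T_K=8000.0))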
Along the lines of sight with lower column densities R S becomes larger than R B and HII regions break out, see also Inayoshi et al. (2016). Conversely, at earlier times R S < R B (0.006-0.05 pc vs. 0.09-1.07 pc) for stars with masses of 40 -500 solar so the H II region cannot grow. Its later expansion is therefore mainly due to the rise in luminosity of the star as it grows by accretion. To better quantify accretion in the midst of radiative outflows from the discs we plot spherically averaged mass inflow (4 R 2 v − rad ) and outflow (4 R 2 v + rad ) rates through radial shells in each halo in Figure 6. Inflow rates average ∼ 0.01 M ⊙ yr −1 at 1 pc and fall to 10 −4 M ⊙ yr −1 in the vicinity of the main star. Outflows dominate inflows in H2, H5 and H6 due to I-front breakout from the disc. In H3 and H4, outflow rates exceed inflow rates at pc scales at later times. In general, outflow rates are either greater than or comparable to inflow rates within ∼ 1 pc of the star in most of the halos. They rise over time, can be quite intermittent, and vary from halo to halo. Inflow rates fall steeply in the 0.2 pc region around stars due to stellar feedback, therefore accretion onto stars is expected to be low at later times. In the midst of inflows, we estimated the outflow rates shown in Fig 6 based on 4 R 2 v + rad but the local expansion of the gas due to turbulence, shocks and even disk dynamics may contribute to outflows, although their contribution is an order of magnitude lower compared to the HII regions. We show spherically averaged profiles of density, enclosed mass, temperature and H 2 mass fraction at the end of our simulations in Figure 7. In halos H1, H3 & H4, the mean densities in the center of the halo have decreased by a factor of a few and the profile has become flattened in the central 0.5 pc because of the stellar feedback. In H2, H5 and H6 densities have fallen by about six orders of magnitude where shocked ionised flows in the H II region of the star have driven gas out of the core of the halo. Average temperatures in this region have likewise risen above 10 4 K because of photoionisations. H 2 mass fractions have fallen by a few orders in magnitude because of photodissociation and collisional dissociation by free electrons. The latter can be important at early stages of H II region formation. The anisotropic H II region of the 1168 M ⊙ star is at an average temperature of 7000 K and has densities and H 2 mass fractions that are a few orders of magnitude lower than the surrounding central 0.2 pc of the halo. Average temperatures in the rest of the halos are a few hundred Kelvin and H 2 abundances are a few 10 −3 due to episodic recombination of ionised gas. The small bumps in the H 2 profiles at ∼ 0.2 pc and 0.4 pc in H5 and H6 are due to rapid H 2 formation in the outer layers of the I-front. The inclusion of multiple ionising photon energies in the spectrum of the star leads to the broadening of the front, and its outer layers can fall to ionisation fractions of 10% and gas temperatures of 1000 -2000 K, which are prime conditions for H 2 formation in the gas phase via the H − channel (Ricotti et al. 2001;Whalen & Norman 2008). The larger bumps in H 2 abundance at ∼ 1 pc in H5 and H6 arise because the dense shell of gas swept up by the D-type I-front behind them partly shields them from LW radiation from the star. Approximately 1000 M ⊙ of gas resides within the central 1 pc in all six halos. 
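A minimal sketch of the spherically averaged inflow/outflow diagnostic described above (mass flux 4πR 2 ρv rad through radial shells, split by the sign of the radial velocity) is given below; the binning scheme and array layout are our assumptions about how the simulation output would be post-processed.

import numpy as np

def shell_flow_rates(r_cm, rho_g_cm3, v_rad_cm_s, vol_cm3, r_edges_cm):
    """Return (inflow, outflow) rates in g/s for each radial shell."""
    n_shells = len(r_edges_cm) - 1
    inflow = np.zeros(n_shells)
    outflow = np.zeros(n_shells)
    which = np.digitize(r_cm, r_edges_cm) - 1     # shell index per cell; out-of-range excluded
    for i in range(n_shells):
        sel = which == i
        if not sel.any():
            continue
        r_mid = 0.5 * (r_edges_cm[i] + r_edges_cm[i + 1])
        # volume-weighted shell averages of rho * v_rad, split by flow direction
        w = vol_cm3[sel] / vol_cm3[sel].sum()
        v = v_rad_cm_s[sel]
        flux_in = np.sum(w * rho_g_cm3[sel] * np.where(v < 0.0, -v, 0.0))
        flux_out = np.sum(w * rho_g_cm3[sel] * np.where(v > 0.0, v, 0.0))
        inflow[i] = 4.0 * np.pi * r_mid**2 * flux_in
        outflow[i] = 4.0 * np.pi * r_mid**2 * flux_out
    return inflow, outflow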
Accretion Rates / Final Stellar Masses Masses and accretion rates for the main star and next most massive star in each halo are shown in Figure 8. Accretion rates begin at a few 10 −3 M ⊙ yr −1 , rise to a few 10 −2 M ⊙ yr −1 , and then fall by an order of magnitude as the stars grow in mass and luminosity. High densities, ram pressures, and short recombination times in the vicinity of the star confine its H II region close to its surface and allow accretion to proceed for a few hundred kyr in all six halos. In H1, an H II region never develops and the star continues to accrete at a few 10 −3 M ⊙ yr −1 until it reaches a mass of 2775 M ⊙ at the end of the run at 719 kyr. In H2 radiation from the central star terminates its growth 330 kyr after formation at a mass of 1770 M ⊙ . But another star forms at 100 kyr and grows through mergers with other star particles and accretion until it reaches a mass of 8670 M ⊙ at the end of the simulation at 354 kyr. The two stars produce the highly anisotropic H II region visible at 354 kyr in Figure 3. The star in H3 initially grows faster than those in the other halos, with peak accretion rates of 0.03 M ⊙ yr −1 over the first 20 kyr. The large drop in rate at about 100 kyr is due to strong radiativelydriven outflows. The mass of the star at the end of the run is 2743 M ⊙ , having accreted at an average rate of 0.003 M ⊙ yr −1 . Average accretion rates for the main star in H4 are similar, ∼ 0.003 M ⊙ yr −1 , and it reaches a mass of 2638 M ⊙ 890 kyr after formation. The sharp dips in rate at 400 kyr and 850 kyr correspond to strong outflows that drive dense gas away from the star, the latter of which is visible in the temperature image at 880 kyr in Figure 3. In H5 and H6, the initial growth of the stars is comparable to those in the other halos but at 270 kyr and 400 kyr the H II region breaks out of the disc and accretion onto the stars plummets. The final mass of the central star in H5 and H6 is 2079 M ⊙ and 1955 M ⊙ , respectively. Overall, accretion onto the stars is intermittent and falls by a few orders in magnitude during outflows. The most massive stars are 1700 -2800 M ⊙ and, given the flattening in all the mass profiles, are unlikely to grow beyond a few thousand solar masses. This is true of the stars in H1 and H4, which were evolved for twice as long as the others (719 kyr and 880 kyrs, respectively) but grew in mass by less than a factor of two after their profiles flattened out. The second most massive stars have typical masses of about 100 M ⊙ , are born in the last 100 -200 kyr and have accretion rates of ∼ 0.001 M ⊙ yr −1 . The masses of all stars in our simulations are listed in Table 1. The maximum resolution in our simulation is 300 AU so we cannot resolve protostellar discs around individual stars. However, even if fragmentation occurs on smaller scales the clump migration timescale is shorter than the Kelvin-Helmholtz timescale at higher densities. Clumps are therefore expected to migrate inwards and merge with the central object . We resimulated H1 with a maximum spatial resolution of 75 AU (four times that of the others) and found that the most massive star grew to 600 M ⊙ in 57 kyr. As shown in Figure 8, its accretion rate and growth track are quite similar to those in the original run so our results likely hold at even higher resolution. 
Comparison with previous studies Previous studies investigating fragmentation in massive primordial halos under a moderate LW flux either ignored radiative feedback from stars (Safranek-Shrader et al. 2012;Latif et al. 2014c;Regan & Downes 2018b) or found it not to be important (Regan & Downes 2018b, hereafter RD18). RD18 only considered a single peculiar halo at several resolutions and LW backgrounds (1, 100 and 1000 J 21 ) for 250 kyr (and 500 kyr in two cases), a small fraction of the lifetimes of the stars in their models. Hence their results are not applicable to typical halos forming at high redshift. The halo studied by RD18 has a mass of ∼ 10 7 M ⊙ and collapses at z= 24.7 while the halos in our study collapse at 13 < z < 18 except H3. To understand this difference, we plot a halo mass function at different redshifts in Fig. 9 using the analytical mass function of Warren et al. (2006). The halo mass function shows that halos like the halo in RD18 are very rare, 0.1 halo per 1 Mpc 3 while the halos in our study are a hundred times more abundant, about 10 halos per 1 Mpc 3 . Therefore, our halos represent typical halos forming at z=15 and the RD18 halo is an outlier. Such halos have triggered higher accretion rates in RD18 which resulted in larger stellar masses in their study. The average mass accretion rates onto stars in our simulations are 0.001 − 0.01 M ⊙ /yr for 100 & 500 J 21 an order of magnitude lower than in RD18. We also performed one simulation with 1000 J 21 and found similar results. In fact, theoretical estimates suggest that mass accretion rates scale with sound speed ∼ c 3 s /G ∼ 0.1 × (T/8000K) 3/2 M ⊙ yr −1 . The gas temperature in our halo centers is about 1000 K for which the expected accretion rates are 0.004 M ⊙ yr −1 , in agreement with the results of our study. Therefore we believe our accretion rates are realistic and in accordance with theoretical expectations. Stellar masses in our study are about an order of magnitude lower than those in RD18 due to the lower mass accretion rates observed in our simulations. At such accretion rates (< 0.04 M ⊙ yr −1 ) stars become blue and hot in our simulations and produce strong feedback which launches outflows on parsec scales and halts accretion onto them. In contrast, higher accretion rates as in RD18 result in red and cool stars which do not produce ionising radiation. Consequently, the mass accretion onto stars and stellar masses are an order of magnitude larger. RD18 switched off ionising radiation in their simulations, which results in more fragmentation and three-body interactions that eject stars from the disc, terminating their growth. We have simulated six distinct halos here that enable us to study variations from halo to halo and followed accretion onto the stars for up to 900 kyr, 4 times longer than RD18. These are thus the first simulations that have followed the accretion of the resulting objects for a relevant fraction of the lifetime of the stars in the presence of ionising radiation. They allow us to capture the quenching of accretion onto stars because of their ionising UV flux and hence obtain more accurate final masses. We found that the final stellar masses are mainly determined by the stellar feedback. Our larger ensemble of six halos therefore demonstrates that ionising UV radiation from the star sets its final mass in a wide range of environments. We have explored LW backgrounds of 100 − 500 J 21 and found that Pop III stars of a few thousand solar masses can form in them. 
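The order-of-magnitude estimate quoted above, Mdot ∼ c s 3 /G, is easy to verify numerically. The short sketch below evaluates it for 8000 K atomically cooled gas and for the ∼1000 K H 2 -cooled cores; the assumed mean molecular weight of 1.22 is our choice, and with it the two calls reproduce the ∼0.1 and ∼0.004 M ⊙ yr −1 values given in the text.

import numpy as np

G = 6.674e-8; K_B = 1.381e-16; M_H = 1.673e-24
MSUN = 1.989e33; YR = 3.156e7

def mdot_msun_yr(T_K, mu=1.22):
    """Characteristic infall rate c_s^3 / G in Msun per year."""
    c_s = np.sqrt(K_B * T_K / (mu * M_H))     # isothermal sound speed, cm/s
    return c_s**3 / G / MSUN * YR

print(mdot_msun_yr(8000.0))   # ~0.1   Msun/yr, the atomic-cooling value in the text
print(mdot_msun_yr(1000.0))   # ~0.004 Msun/yr, matching the quoted H2-cooled rate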
Our findings suggest that moderate LW fluxes cannot induce the large accretion rates (≥ 0.04 M ⊙ /yr ) required for supermassive star formation in typical halos and massive Pop III stars of 1800 − 2800M ⊙ form under these conditions. DISCUSSION AND CONCLUSION We find that moderate LW backgrounds in the primordial universe led to the formation of 1800-2800 M ⊙ Pop III stars, intermediate in mass to those in minihalos before such backgrounds existed (30 -500 M ⊙ ; Hirano et al. 2014Hirano et al. , 2015 and those in atomically-cooled halos in the most extreme backgrounds (∼ 10 5 M ⊙ ; e.g. Woods et al. 2017). Intermediate LW backgrounds enabled primordial halos to grow to somewhat larger masses before forming stars while allowing some H 2 to survive in their cores. This led to a more massive gas reservoir in the center of the halo that was at higher temperatures than those normally associated with H 2 cooling. At the onset of collapse, Ly cooling dominated in the outer regions of the halo but H 2 cooling regulated the collapse of the core, but at rates that were 10 -50 times higher than those in minihalos because the higher virial temperatures were close to the peak in the H 2 cooling and formation rates. Supercharged H 2 cooling thus produced 1000 -3000 M ⊙ Pop III stars. We find that fragmentation in halos in moderate LW backgrounds tends to be mild so these stars are usually accompanied by a few normal Pop III stars. Our results suggest typical stellar masses of a few thousand solar for LW strengths of 100-1000 J 21 . Therefore, the IMF of Pop III stars is expected to be a top heavy with the masses up to a few thousand solar masses under such conditions. They opened a third channel of Pop III SF that led to the birth of intermediate mass black holes in primordial galaxies. We expect such conditions were likely common at high redshifts as the number density of these pristine halos exposed to a given LW background strongly Habouzit et al. 2016). Even a factor of a few difference in LW flux changes the abundance by two orders of magnitude. Our simulations were evolved for more than half the expected lifetimes of the stars (Schaerer 2002) and radiative feedback levels off their masses well before the end of the runs. These massive stars are expected to collapse to BHs via the photodisintegration instability (Heger & Woosley 2002;Heger et al. 2003) or He depletion in their cores without exploding (see Figure 4 of Woods et al. 2020). In the future stellar evolution calculations run inline with cosmological simulations will be required to determine at what masses the stars collapse to BHs. We find that radiation from the star plays a pivotal role in its evolution in intermediate LW backgrounds. Accretion rates in halos collapsing via Ly and H 2 cooling can be 100 times lower than those cooling by Ly alone, and are close to the limit below which the stars become blue and hot rather than cool and red (∼ 0.02 M ⊙ yr −1 ; Herrington et al. 2021). Thus, if a star happens to be born blue in such a halo its ionising UV radiation tends to keep it blue by curtailing accretion onto it. As the star becomes more massive it becomes more luminous and drives accretion rates even lower. None of the stars in our runs ever become red because they never reach accretion rates of 0.04 M ⊙ yr −1 . 
This rate is a little larger than those found in stellar evolution models to cause stars to become blue so the radiative feedback in our models should be taken to be an upper limit and the true masses of the stars may be somewhat higher. We have assumed here LW backgrounds of 100 & 500 J 21 due to nearby star forming galaxies. A stellar mass of a few times ≥ 10 6 M ⊙ is required to provide such flux for a Salpeter IMF with mass range of 10-100 M ⊙ Dijkstra et al. 2014;Habouzit et al. 2016;Chon & Latif 2017). We assume here T rad = 10 5 K and ignore photo-detachment of H − but the spectral temperatures of the first galaxies are expected to be between T rad = 10 4 − 10 5 K (Sugimura et al. 2014;Agarwal & Khochfar 2015). The LW source halos must be located at ∼10 kpc to avoid metal pollution (Dijkstra et al. 2014;Habouzit et al. 2016) and about 60 % of such halos are metal free at z = 15 . We have assumed here that the LW background is constant for the duration of our runs (about 200 Myr), and is provided by Pop II stars due to ongoing star formation in nearby halos. The impact of UV ionising radiation from LW source galaxies was investigated by Chon & Latif (2017), they found that most of it is absorbed by the filaments and dense clumps surrounding the source galaxy. In some cases, ionising radiation from the source galaxy actually promotes the collapse of the atomically-cooling halo (see also Johnson et al. 2014). The LW sources may also emit X-rays produced by the X-ray binaries or even massive stars. X-rays heat gas at low densities but also enhance ionisation fractions that catalyze H 2 formation. These effects have been investigated by Jeon et al. (2012), Inayoshi & Omukai (2012), and , who found that X-rays are attenuated at higher densities and do not impact the characteristic masses of stars (Hummel et al. 2015). Therefore, we expect that ionising UV and X-rays from nearby sources will not have much impact on our findings. Our accretion recipe is based on the mass influx through the accretion sphere and we assumed an accretion radius of 4 cells. Since the Jeans length must be resolved by at least four cells in order to avoid spurious fragmentation (Truelove et al. 1997), the accretion radius of sink particles must not be smaller than two grid cells, see also Federrath et al. (2010). We performed an additional simulation (shown in Fig. 8) with an accretion radius that was four times smaller and found similar stellar masses. Therefore, we do not believe that the choice of the accretion radius has a strong impact on our results. However, current computational constraints do not allow us to resolve flows all the way down to the surface of the star in cosmological simulations so our stellar masses should be taken to be upper limits. Instead of using an instantaneous accretion rate, we use accretion rates averaged over 1 kyr intervals and never cap the rates, but they never exceed 0.03 M ⊙ yr −1 in our runs. We selected six distinct halos and turned on LW backgrounds of strengths 100 J 21 for H1, H3 & H5 and 500 J 21 for H2, H4 & H6. This allowed us to robustly estimate the upper limit of stellar mass irrespective of their merger history and environments. Our results confirm that the upper limit of stellar mass is determined by its feedback irrespective of the strength of the LW flux and the halo mass range explored here. Therefore, if we turn on LW backgrounds of different strengths for the same halo, the results are expected to be similar. 
Less fragmentation occurs in our models than in simulations of normal Pop III star formation (Turk et al. 2009; Clark et al. 2011; Greif et al. 2012; Susa et al. 2014; Stacy et al. 2016; Hosokawa et al. 2016; Susa 2019; Sugimura et al. 2020). Even though SF in our halos is also regulated by H 2 cooling, the gas at the center of the disc is several times hotter so it is better supported by thermal pressure against fragmentation. In reality, most simulations of atomic collapse performed to date probably overestimate fragmentation because they ignore magnetic fields that likely arose in most primordial halos because of subgrid turbulent dynamos (Schober et al. 2012; Turk et al. 2012; Latif et al. 2013a, 2014a; Sharda et al. 2020). Such fields would tend to stabilize the disc and suppress fragmentation. It is not clear if the extremely massive Pop III stars in our simulations could later evolve into the first quasars because Smidt et al. (2018) found that they must be seeded by BHs of at least 10 5 M ⊙ at z ∼ 20 to reach 10 9 M ⊙ by z ∼ 7 in the cold accretion flows that are thought to fuel their growth. If not, they could instead yield a population of less-massive, lower-luminosity quasars that are yet to be discovered. Synergies between JWST and Euclid or the RST could reveal the existence of these objects when they inaugurate the era of 5 < z < 15 quasar astronomy in the coming decade.
SF3B1 modulators affect key genes in metastasis and drug influx: a new approach to fight pancreatic cancer chemoresistance Aim: Because mutations of splicing factor 3B subunit-1 (SF3B1) have been identified in 4% of pancreatic ductal adenocarcinoma (PDAC) patients, we investigated the activity of new potential inhibitors of SF3B1 in combination with gemcitabine, one of the standard drugs, in PDAC cell lines. Methods: One imidazo[2,1-b][1,3,4]thiadiazole derivative (IS1) and three indole derivatives (IS2, IS3 and IS4), selected by virtual screening from an in-house library, were evaluated by the sulforhodamine-B and wound healing assay for their cytotoxic and antimigratory activity in the PDAC cells SUIT-2, Hs766t and Panc05.04, the latter harbouring the SF3B1 mutations. The effects on the splicing pattern of proto-oncogene recepteur d’origine nantais (RON) and the gemcitabine transporter human equilibrative nucleoside transporter-1 (hENT1) were assessed by PCR, while the ability to reduce tumour volume was tested in spheroids of primary PDAC cells. Results: The potential SF3B1 modulators inhibited PDAC cell proliferation and prompted induction of cell death. All compounds showed an interesting anti-migratory ability, associated with splicing RON/ΔRON shift in SUIT-2 cells after 24 h exposure. Moreover, IS1 and IS4 potentiated the sensitivity to gemcitabine in both conventional 2D monolayer and 3D spheroid cultures, and these results might be explained by the statistically significant increase in hENT1 expression (P < 0.05 vs. untreated control cells), potentially reversing PDAC chemoresistance. Conclusion: These results support further studies on new SF3B1 inhibitors and the role of RON/hENT1 modulation to develop effective drug combinations against PDAC. INTRODUCTION Pancreatic ductal adenocarcinoma (PDAC) is one of the most lethal cancers in the world. The survival rate has increased in recent years, and double-digit survival rates are increasingly seen, but epidemiological studies also report a rising incidence [1,2] . Thus, PDAC is projected to become the second leading cause of cancer-related death by 2030 [3] . This grim future has multifactorial causes. There are no tools for prevention, and early diagnosis of PDAC is complicated. Most patients are diagnosed when the tumour has already spread throughout the body due to the lack of early symptoms and specific biomarkers [4,5] . The treatment options for PDAC are also relatively limited. The only current curative treatment at the moment is surgical resection, which is possible in only 20% of patients. Moreover, this treatment has a high complication rate and recurrence is often seen [6] . The standard of care treatment is chemotherapy, using polychemotherapy regimens or gemcitabine monotherapy [7] . Gemcitabine, approved by the Food and Drug Administration in 1996, was the standard of care in the treatment of locally advanced and metastatic PDAC for over two decades. A better efficacy was found for various chemotherapy combinations such as FOLFIRINOX [5-fluorouracil, folinic acid (leucovorin), irinotecan, oxaliplatin] and gemcitabine plus nab-paclitaxel (GEM-NAB, Abraxane®) [8] . Most PDAC cases are characterised by inherent or acquired chemoresistance. This resistant behaviour is determined by multiple cellular-autonomous factors, such as reduced expression of key drug transporters, and/or by different components of the tumour microenvironment (TME) [9,10] . 
Recent studies suggest that alternative splicing (AS) deregulation plays a pivotal role in tumorigenesis and cancer drug resistance [11,12] . Aberrant splicing has been shown to occur in genes involved in drug metabolism, including transporters responsible for drug uptake. In this regard, a well-known example of aberrant splicing is the exon 13 skipping in the SLC29A1 gene (solute carrier family 29 member 1) which encodes the human equilibrative nucleoside transporter-1 (hENT1) [13] . This splicing alteration is due to an intronic mutation which leads to a reduction in the expression and uptake of another cytidine analogue, cytarabine [11,14] . Drug resistance is also associated with alterations in genes that regulate apoptosis, often generating proteins with antagonistic functions (e.g., BCL-X and MCL-1) or migration (e.g., RON), favouring invasion and metastasis. Noteworthy, the pre-mRNA splicing process is also involved in the regulation of the DNA damage repair, influencing with high probability the resistance to therapy [10,11,14,15] . Additionally, aberrations directly affect splicing regulation, and it has recently been demonstrated that somatic mutations of splicing factor genes are common in not only hematopoietic neoplasms but also solid tumours including PDAC [16] . The splicing factor 3B subunit-1 (SF3B1), which is involved in the branch site recognition during the pre-mRNA splicing process, is the most frequently mutated RNA splicing factor gene in cancer, and mutations in the HEAT domain of the SF3B1 gene have been detected in 4% of PDAC patients [12] . Against this background, and given the central role of AS in cancer, targeting this process is considered a potential therapeutic approach. Pre-clinical studies have shown potential in the modulation of splicing in cancer cells via small molecules targeting SF3B1 [11] , namely pladienolide B (PB), spliceostatin A and herboxidiene, which interfere with the splicing modulation [17] . Two synthetic analogues of PB, E7107 and H3B-8800 (orally available small molecule), are the only SF3B1 modulators in clinical trials [18] . Of note, one patient with acinar pancreatic carcinoma and hepatic metastases had a confirmed partial response lasting eight months during the phase I trial testing E7107, but severe ophthalmologic disturbances halted further clinical development of this drug [19] . As mentioned above, splicing modulation represents an innovative and interesting therapeutic strategy in the fight against cancer. Preclinical studies revealed that low-dose splicing modulators are synergistic in combination with conventional anticancer agents [20,21] . We previously demonstrated that modulation of splicing in cancer cells was an effective therapy in an in vivo model, both as a monotherapy with direct inhibitors of SF3B1 and in combination with other anticancer agents, with acceptable toxicity [11] . This combination could expand the therapeutic window of the splicing modulators. Therefore, investigations on new molecules that could target aberrant splicing in PDAC are warranted. In the present study, we performed structural computational studies and virtual screening of compounds available in an in-house molecular library, and we selected some indole derivatives to evaluate their antitumour activity in appropriate preclinical models of PDAC and their potential to fight molecular mechanisms underlying PDAC chemoresistance. 
In particular, we used the epithelial and mesenchymal cells SUIT-2 and Hs766t, as well as Panc05.04 cells, carrying the SF3B1 mutations p.Q699H and p.K700E. The promising results obtained in our previous studies concerning the anticancer properties of imidazo [2,1b][1, 3,4]thiadiazole and indole scaffolds [25,[37][38][39][40] prompted us to explore the biological activity of the selected compounds alone and in combination with gemcitabine. Gemcitabine still represents the cornerstone of PDAC treatment and in preclinical models of peritoneal mesothelioma we observed that our imidazo [2,1-b] [1, 3,4]thiadiazole derivatives potentiated its antiproliferative effects [40] . Since these results were associated with increased expression of hENT1, which plays a key role in the uptake and cytotoxicity of gemcitabine [41] , in the present study, we also focused on the effect of AS on hENT1 expression in order to bypass one of the most important mechanism involved in the resistance to gemcitabine. Ligand preparation and protein preparation Both ligands to be screened and co-crystallised ligands within the Protein Data Bank (PDB) structures were optimised using EpiK tool to energetically minimise their structure and generate protomers and tautomers at pH 7.4 ± 0.5 [42,43] . The 3D structures of the SF3B complex were downloaded from the PDB [44] and imported into the Schrödinger suite to optimise the structure by using the "Protein preparation" tool [42] . The bond orders for untemplated residues were assigned and hydrogens were added to the structure. Water molecules beyond 5.0 Å from any of the HET groups, including ions, were deleted. Finally, PROPKA [45] was run under pH 7.0 to optimise side chain states. Pharmacophore creation and screening LigandScout [46,47] software was employed to create the pharmacophore model and find the common feature between the two PDB structures by using the common pharmacophore map for virtual screening. The pharmacophore model was created, using the PDB coordinate of the ligand-protein complex (PDB IDs: 5ZYA and 6EN4). Starting from the two pharmacophore maps, only the common features were retained to be used for further studies. In the screening module, the "pharmacophore fit-score" was used as scoring function and "match all query features" was chosen as screening mode. The selected retrieval mode was "get best matching conformation". Docking The docking grid was generated using Glide software [48] . The scaling factor was set at 1.0 Å with a partial charge cut-off of 0.25, and the co-crystalised ligand was chosen as grid centroid. Molecular docking was carried out using Glide software [48] by Schrödinger (release 2018-4). The van der Waals radii scaling factor for ligands to be screened was set as 0.8, with a partial charge cut off by 0.15. The ligands were considered as flexible, and Epik state penalties were considered as docking score. The in-house compounds library was screened in standard precision mode. Molecules were then ranked based on the docking score. Cell lines and culture conditions The PDAC cell lines SUIT-2 and Hs766t were purchased from the American Type Culture Collection (ATCC, Manassas, VA), while the Panc05.04 cell line was a generous gift from Dr Eric Eldering (Department of Experimental Immunology, AMC, The Netherlands). SUIT-2 is a mesenchymal tumour cell line derived from a metastatic liver tumour of human pancreatic carcinoma. 
It produces and releases two tumour markers, carcinoembryonic antigen and carbohydrate antigen 19-9 (CA19-9), in culture in vitro and in nude mice in vivo [50]. Hs766t is an epithelial cell line isolated by R. Owens et al. [51] in 1973 from a pancreatic carcinoma metastatic to a lymph node (ATCC® catalogue number HTB-134™). Panc05.04 is a pancreatic adenocarcinoma epithelial cell line derived, in 1995, from a primary tumour resected from the head of the pancreas of a woman with PDAC (ATCC® catalogue number CRL-2557™). The PDAC-3 primary culture cells were obtained from patients undergoing pancreatoduodenectomy, as described previously [52]. Evaluation of cell growth inhibition by the sulforhodamine-B assay In vitro chemosensitivity was assessed with the sulforhodamine-B (SRB) assay, as reported previously [53,54]. The SUIT-2 and Hs766t cells were seeded in triplicate in 96-well flat bottom plates at their optimal seeding concentration of 3-5 × 10 3 cells in 100 µL/well for both cell lines. They were incubated overnight at 37 ℃ with 5% CO 2 to ensure cell adhesion, creating a confluent monolayer. Cells were treated in triplicate with 100 µL of drugs dissolved in DMSO at different concentrations in the nano- and micromolar range and incubated at 37 ℃ with 5% CO 2 for 72 h. Thereafter, the cells were fixed with 25 µL of cold 50% trichloroacetic acid for at least 60 min at 4 ℃. Then, the medium was removed, and the plates were gently washed five times with tap water, dried at room temperature overnight and stained with 50 µL of 0.4% (w/v) SRB solution in 1% acetic acid for 15 min at room temperature (RT). The plates were gently washed four times with 1% acetic acid and dried at RT for a minimum of 6 h. After adding 150 µL of tris(hydroxymethyl)aminomethane solution, the plates were gently mixed for 2-3 min at 350-400 rpm on a plate shaker. The optical density (OD) was spectrophotometrically read at wavelengths of 490 and 540 nm on a plate reader (BioTek Instruments Inc., Winooski, VT). Cell growth inhibition was calculated as the percentage of the OD of drug-treated cells vs. vehicle-treated cells ("untreated cells or control"), corrected for the OD before drug addition ("Day 0"). The 50% inhibitory concentration of cell growth (IC 50 ) was calculated by non-linear least squares curve fitting (GraphPad PRISM, Intuitive Software for Science, San Diego, CA). Since gemcitabine is commonly used (in monotherapy or within polychemotherapy regimens) for the treatment of PDAC patients and our previous studies in preclinical models of mesothelioma showed that thiadiazole derivatives potentiated gemcitabine effects [40], we evaluated the cytotoxic activity of the most promising compounds (IS1 and IS4) in combination with gemcitabine. For these studies, we used the above-described SRB assay, exposing cells to IC 50 values of the experimental compounds, added to IC 25 values of gemcitabine, for 72 h, as described previously [40]. Evaluation of cell death by trypan blue assay The in vitro sensitivity to the most promising compounds (IS1 and IS4) was also assessed for the PDAC cell line Panc05.04, carrying two endogenous SF3B1 mutations: p.Q699H and p.K700E. Of note, these cells have a doubling time above 36 h and are therefore less suitable for the assessment of cytotoxic activity in 96-well plates with the SRB assay. Therefore, we used a trypan blue assay, as described below. The Panc05.04 cells were seeded in a 6-well flat bottom plate in a volume of 1 mL at the density of 2 × 10 4 cells/well.
They were incubated overnight at 37 ℃ with 5% CO 2 to create a confluent monolayer and treated with 1 mL of drug dissolved in DMSO at concentrations ranging from 0.1 to 10 µM. After 96 h of treatment, the old medium was removed and the cells were washed twice with phosphate-buffered saline (PBS). Cells were harvested with trypsin-EDTA and incubated for 15 min at 37 ℃ with 5% CO 2 . After the addition of the new medium, the cells were resuspended and 10 µL of the cell suspension was harvested into a sterile Eppendorf. Noteworthy, only dead cells are coloured, since healthy living cells exclude trypan blue and are not coloured in this assay. Specifically, trypan blue is unable to penetrate the intact cell membrane of living cells. On the contrary, dead cells have a peculiar blue colour due to the absorption of the dye that crosses the compromised cell membrane. Trypan blue (10 µL) and 10 µL of the mixture for each Eppendorf were transferred to a cell counting slide. The percentage of viable cells vs. non-viable cells was determined using the LUNA II™ Automated Cell Counter according to the manufacturer's protocol (Westburg, Leusden, The Netherlands). Analysis of cell migration by wound-healing assay The anti-migratory activity was determined with the in vitro scratch wound-healing assay. SUIT-2 cells were seeded in 96-well flat bottom plates, at the optimal density of 5 × 10 4 cells/well in 100 µL and incubated for 24 h. The scratch was performed with a 96-pin scratcher on confluent cell monolayers. After the removal of detached cells, the plate was washed twice with 200 µL of PBS and 100 µL of medium was added to all the wells. Thereafter, the experimental wells were treated with 100 µL of the drugs at concentrations of 4 × IC 50 and an additional 100 µL of the medium was added to the control wells. Images were taken immediately after scratching procedure, as well as 8 and 24 h after the exposure of the drugs by phase-contrast microscopy using the Leica DMI300B microscope (Leica Microsystems, Eindhoven, the Netherlands). The results were analysed with Scratch Assay 6.2 software (Digital Cell Imaging Labs, Keerbergen, Belgium), as described previously [53] . PCR assay to evaluate SF3B1 and hENT1 Real-time quantitative reverse transcription PCR (qRT-PCR) was performed to evaluate the gene expression of SF3B1 and hENT1 in the PDAC cell lines, using GUSB and GAPDH as housekeeping genes. The cells were seeded at 3-5 × 10 3 in a 6-well flat bottom plate with 2 mL medium per well and incubated with gemcitabine (IC 50 ) for 24 h. Thereafter, the medium was collected and cells were washed using 2.5 mL PBS. Trypsin-EDTA was then added, and, after 5 min incubation the detached cells were resuspended in the previously collected medium and centrifuged at 1500 rpm for 5 min. The pellets were either stored at -80 ℃ or used immediately for RNA extraction, using the RN-easy RNA isolation kit (Qiagen) following the manufacturer's instructions. One microgram of RNA was then used to synthesise complementary DNA (cDNA) in a volume of 20 µL of sterilised dH 2 O (Versilene® Fresenius, Fresenius Kabi France) for each sample, as described previously [55] . The resulting cDNA was amplified by quantitative-PCR using specific primers for SF3B1 and GUSB with the LightCycler® 480 Real-Time PCR System (Roche, Rotkreuz, Switzerland). The mRNA expression of hENT1 was evaluated using the specific kits for hENT1 and GAPDH with the ABIPRISM-7500 instrument (Applied Biosystems, Foster City, CA), as described previously [41] . 
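As an illustration, relative expression from such qRT-PCR runs can be computed with the widely used 2^-ΔΔCt quantification after normalisation to the housekeeping gene; the short sketch below assumes this scheme, and the Ct values, sample names and triplicates shown are purely illustrative, not data from our experiments.

```python
# Minimal sketch of relative expression analysis, assuming the common
# 2^-ddCt (Livak) method; Ct values and sample names below are
# hypothetical placeholders, not measurements from this study.
import numpy as np

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression of a target gene (e.g. hENT1) normalised
    to a housekeeping gene (e.g. GAPDH), treated vs. control."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalise treated sample
    d_ct_control = ct_target_control - ct_ref_control   # normalise untreated control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values for one treated condition vs. control
ct = {
    "hENT1_treated": np.array([24.1, 24.3, 24.0]),
    "GAPDH_treated": np.array([18.2, 18.1, 18.3]),
    "hENT1_control": np.array([25.6, 25.4, 25.7]),
    "GAPDH_control": np.array([18.0, 18.2, 18.1]),
}

fc = fold_change(ct["hENT1_treated"].mean(), ct["GAPDH_treated"].mean(),
                 ct["hENT1_control"].mean(), ct["GAPDH_control"].mean())
print(f"hENT1 fold change vs. control: {fc:.2f}")
```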
To visualise the splicing modulation induced by the potential SF3B1 inhibitors on RON, we performed an end-point PCR assay followed by agarose gel electrophoresis. The SUIT-2 cells were seeded in 6-well flat bottom plates and incubated for 24 h with 20 µM of the two most promising compounds in 2 mL medium. RNA isolation and cDNA synthesis were performed according to the protocol described above. The primers for RON were designed considering the exons of this gene, as follows: Exon 10_ Forward (5'-CCT GAA TAT GTG GTC CGA GAC CCC CAG-3'); Exon12_ Reverse (5'-CTA GCT GCT TCC TCC GCC ACC AGT A-3'). PCR was performed as described previously [55] , at the annealing temperature of 62 ℃. Analysis of antitumour activity in multicellular spheroids of primary cells PDAC-3 spheroids were established seeding 20000 cells/mL in DMEM/F12 + GlutaMAX-I (1:1), in 24-well ultra-low attachment plates (Corning, NY, USA) according to manufacturers' protocol. Spheroids were generated for 3-7 days, and then harvested for growth inhibition studies in 96-well plates. After checking their growth rate and stability, the spheroids were treated at concentrations of 4 × IC 50 of gemcitabine, IS4 and their combination for 72 h. The cytotoxic effects were evaluated by measuring the size of spheroids compared to untreated controls, as described previously [38] . Statistical analysis All experiments were performed in triplicate and repeated at least twice. The percentages of cell migration were calculated taking into account at least nine scratches. Data were expressed as mean values ± SEM and analysed by Student's t-test or one-way ANOVA. The cut-off level of significance was P < 0.05. Selection of potential SF3B1 inhibitors To explore the binding mode and prioritise putative active compounds, preliminary computational studies were performed using the crystallographic structures of SF3B1 selected from the PDB. The crystallographic structure of two SF3B1 protein ensembles (PDB ID: 6EN4 [56] and PDB ID: 5ZYA [57] ) in complex with PB and its analogue E7107 were selected as a starting point for computational studies to build a structure-based pharmacophore and docking model [56] . The interaction map of the two protein-ligand complexes was compared as a guide for the crucial interactions to be accounted in our investigations. From the structural analysis of the two compounds compared, the common residues of the protein complex interacting with PB and E7107 were: V1078, V1110, V1114 and L1066 of the subunit SF3B1 and R38 and Y36 in the PHF5A subunit [57] . Starting from the two crystal structures, a pharmacophore map was created using LigandScout v.4.4 [46,47] , and geometrically common features were selected, thus removing two distal features. As a result, six common pharmacophoric features were found and the common pharmacophore was created [ Figure 1]. The common pharmacophore was then used for virtual screening studies to identify the molecular scaffolds of interest using both our in-house molecular library and commercially available molecular libraries. According to the binding mode with the amino acid residues of the common pharmacophore, the most suitable molecules were selected and then their structures were carefully analysed. It was then found that most of them showed a common feature: an indole group and a nearby amide group. Docking studies were performed on the same crystallographic structures using Glide 2018-4 [48] to have a consensus mode of selection. 
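Because the pharmacophore fit-score and the Glide docking score are produced on different scales, one simple way to obtain such a consensus is to intersect the top-ranked lists from the two techniques; the sketch below illustrates this step with hypothetical file names, column names and thresholds that are not part of the study.

```python
# Illustrative consensus prioritisation combining pharmacophore fit-scores
# (higher is better) with docking scores (lower is better). All inputs
# are hypothetical placeholders.
import pandas as pd

def consensus_hits(pharm_csv: str, dock_csv: str, top_n: int = 50) -> pd.DataFrame:
    pharm = pd.read_csv(pharm_csv)   # columns: molecule, fit_score
    dock = pd.read_csv(dock_csv)     # columns: molecule, docking_score
    pharm_top = pharm.nlargest(top_n, "fit_score")["molecule"]
    dock_top = dock.nsmallest(top_n, "docking_score")["molecule"]
    keep = set(pharm_top) & set(dock_top)      # molecules prioritised by both techniques
    merged = pharm.merge(dock, on="molecule")
    return merged[merged["molecule"].isin(keep)].sort_values("docking_score")

# hits = consensus_hits("pharmacophore_scores.csv", "glide_scores.csv")
# print(hits.head())
```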
Structure-based pharmacophore and docking exploit different algorithms; thus, we decided to see which molecules of our in-house library were prioritised by the two techniques adopted. As shown in Figure 2, Tyr36, Arg38, Arg1074, Arg1075 and Leu1066 residues were found to be important for the protein-ligand complex stabilisation. From these analyses, four compounds were prioritised in terms of interactions and theoretical binding energy. Drug sensitivity The in vitro sensitivity to the potential SF3B1 modulators {splicing inhibitors IS1, IS2, IS3 and IS4 [ Figure 3]} was evaluated for the PDAC cells SUIT-2 and Hs766t. These cells were selected because they are representative of the PDAC mesenchymal and epithelial phenotypes [25]. A pre-screening cytotoxicity SRB assay was initially performed using concentrations of 0.1, 1 and 10 µM. Notably, all compounds showed concentration-dependent inhibition of proliferation; thus, we expanded our studies using at least eight different concentrations (from 125 nM to 16 µM) to define more accurate IC 50 values. Compounds IS1 and IS4 showed the highest activity in both preclinical models [ Figure 4A and B]. In particular, the Hs766t cells were the most sensitive to both compounds, with IC 50 s of 2.4 and 2.7 µM after exposure to IS4 and IS1, respectively. In contrast, the SUIT-2 cells were less sensitive, with IC 50 s ranging from 4.5 to 7.5 µM. Considering the interesting results of antiproliferative activity in vitro, we selected the most promising compounds (IS1 and IS4) for the analysis of migration inhibition and the modulation of the splicing of RON, an overexpressed gene in PDAC. Induction of cell death in cells harbouring SF3B1 mutations Heterozygous mutations in the splicing factor SF3B1 have been found to occur particularly in haematological malignancies, but more recently they have also been detected in several solid tumours including PDAC [41] with a frequency of 4% [58][59][60]. Previous studies have shown that SF3B1 mutations are concentrated in the sequence encoding the HEAT repeat domains, with major hotspots including p.R625, p.K666 and p.K700E [60][61][62]. Interestingly, the latter mutation is carried by the Panc05.04 cell line together with p.Q699H [63]. Therefore, we used this model to perform further studies with the IS1 and IS4 compounds. Notably, the mutations of SF3B1 do not affect SF3B1 gene expression, which is similar to that of the other PDAC cells, as assessed by PCR (data not shown). Panc05.04 cells have a relatively long doubling time (46 h) compared to most ATCC cell lines. To achieve reliable results, these cells were exposed for 96 h to compounds IS1 and IS4, at five different concentrations in the micromolar range (from 0.1 to 10 µM). Remarkably, both drugs induced cell death, ranging from 52% to 63% at a concentration of 1 µM [ Figure 4C]. However, since these Panc05.04 cells are the only known PDAC cells harbouring a mutation in SF3B1, we could not draw conclusions on whether they are more sensitive to potential SF3B1 inhibitors. Anti-migratory activity and modulation of RON splicing pattern The metastatic potential is one of the hallmarks of PDAC, and it is closely related to the grim prognosis of this disease.
Currently, the key mechanisms underlying this process are poorly understood, although it has been shown that several factors govern the metastatic process, including cell migration and invasion [5]. The promising results on the antiproliferative activity prompted us to also investigate the anti-migratory effect of our potential SF3B1 modulators by the wound healing assay, which was performed on SUIT-2 cells. These cells were selected because of their ability to form monolayers at optimal cell confluence within 24 h. A concentration of 4 × IC 50 was used for each compound because of the shorter drug exposure time compared to growth inhibition studies, which lasted 72 h, and because it was able to slightly reduce migration already after 8 h exposure compared to untreated cells (set at 100%).
Figure 3. Chemical structures of compounds IS1, IS2, IS3 and IS4. The synthesis of compound IS1 is described in [39], while the descriptions of compounds IS2, IS3 and IS4 can be found in [49].
However, IS1 and IS4 significantly inhibited the migration rate of SUIT-2 cells after 24 h of drug exposure [ Figure 5A], with migration rates below 40% and 10% for IS1 and IS4, respectively. Remarkably, this effect was associated with the mis-splicing of RON, which is a tyrosine kinase receptor belonging to the c-MET kinase family. This gene is overexpressed in PDAC and promotes cell migration, invasion and apoptotic resistance [64,65]. Of note, RON commonly undergoes AS, resulting in different shorter ΔRON spliced variants [66]. The PDAC SUIT-2 cells express the truncated variant ∆RON, which plays a pivotal role in tumour cell motility due to the constantly activated kinase function [65]. A 24 h exposure to IS1 and IS4 caused intron retention in the RON transcript and a decrease in transcript abundance, probably due to nonsense-mediated decay [ Figure 5B]. Synergistic interaction with gemcitabine is associated with an increase of hENT1 mRNA expression Gemcitabine is a pyrimidine analogue (2',2'-difluoro-2'-deoxycytidine, dFdC; Gemzar®) widely prescribed to treat a variety of solid tumours [67]. It has been used for decades as the first-line treatment for metastatic PDAC, and it is still commonly used for PDAC patients in combination with nab-paclitaxel or as monotherapy in patients who are unfit for combination regimens, as mentioned above [7,9]. Our previous data show that some compounds from a series of new imidazo[2,1-b][1,3,4]thiadiazole derivatives potentiated the antiproliferative effects of gemcitabine in peritoneal mesothelioma cells [40]. However, different splicing aberrations have previously been shown to enhance the activity of proliferative and glycolytic signalling associated with gemcitabine resistance [68][69][70]. Therefore, we evaluated whether the combinations with the compounds IS1 and IS4 at their IC 50 would increase the sensitivity of SUIT-2 and Hs766t cells to gemcitabine. The combination of both compounds IS1 and IS4 with gemcitabine at IC 25 levels led to a significant reduction in cell growth compared to untreated cells, below 20% and 12%, respectively [ Figure 6A] [71,72]. These values were well below the theoretical achievable growth inhibition of the combinations and can therefore be considered as a synergistic effect. Because of its hydrophilic nature, gemcitabine requires facilitated or active transport for cellular uptake, which is mediated by membrane nucleoside transporters, including the human concentrative nucleoside transporter-3 and hENT1.
The latter has been evaluated in several preclinical and clinical studies as a potential determinant of gemcitabine efficacy in PDAC [9]. Previously, our imidazo[2,1-b][1,3,4]thiadiazole compounds in combination with gemcitabine significantly increased the expression of hENT1, suggesting its potential role in increasing the activity of gemcitabine [40]. These promising results prompted us to adopt the same strategy to investigate the potential molecular mechanisms underlying the enhanced activity of gemcitabine in combination with IS1 and IS4. Therefore, we measured the modulation of the gene expression of hENT1. Both compounds, also in this case, increased hENT1 expression significantly [ Figure 6B], supporting the role of these new compounds in reversing a key mechanism of resistance to gemcitabine.
Figure 6B. Expression was determined with quantitative-PCR by normalisation with the GAPDH housekeeping gene, as described in the methods. Since we previously demonstrated that hENT1 protein levels correlated with hENT1 mRNA expression, we did not include hENT1 protein expression [71,72]. Columns, mean values obtained from triplicate experiments; bars, SEM; *P < 0.05.
The combination of gemcitabine and IS4 reduced spheroids of PDAC primary cultures The sensitivity to anticancer drugs, including gemcitabine, in two-dimensional monolayer cell culture models is typically different from that in three-dimensional (3D) culture models. Thus, to determine whether IS4 would enhance the efficacy of gemcitabine in 3D systems, we tested these drugs in spheroids of PDAC-3 cells [ Figure 7A]. We transferred spheroids that were approximately 500 μm in diameter into each well of 96-well plates. These growing spheroids were exposed to gemcitabine, IS4 and their combination for 72 h. The growth of these spheroids was slightly inhibited by gemcitabine and IS4, while the combination remarkably increased their disintegration, and they were significantly reduced in size compared to the untreated spheroids as well as to spheroids exposed to gemcitabine alone [ Figure 7B]. DISCUSSION In this paper, we demonstrate that in PDAC cells inhibition of splicing can help to fight the typical resistant behaviour of these tumours to standard chemotherapeutic drugs, such as gemcitabine, most likely by reducing cell aggressiveness/invasiveness and increasing the expression of the limiting uptake transporter hENT1. The treatment of patients with gemcitabine alone gives a moderate effect, and any improvement of this effect would increase the prospects of PDAC patients. Only 15%-20% of all PDAC patients qualify for curative resection followed by adjuvant chemotherapy, often including gemcitabine [5], and treatment options for most PDAC patients are limited. Thus, there is a clear need for new therapeutic approaches targeting key determinants of PDAC aggressive behaviour and reversing or bypassing resistance to existing therapies [10]. Recent genomic studies have shown that heterozygous mutations in the splicing factor SF3B1 frequently occur in several tumours and prompt cancer progression through the activation of cryptic splice sites in multiple genes [11]. Most SF3B1 mutations have been detected in haematological malignancies, but PDAC is among the solid tumours harbouring these mutations in more than 3% of cases [12,59].
Moreover, PDAC has high levels of expression of SF3B1, and recent studies have demonstrated a positive correlation between expression levels of wildtype (WT) SF3B1 and tumour malignancy [11,62] , further supporting the search for drugs targeting this key spliceosomal factor. In the present study, we evaluated for the first time four potential spliceosome inhibitors {one imidazo [2,1-b][1, 3,4]thiadiazole derivative (IS1) and three indole derivatives (IS2, IS3 and IS4)}, which were selected by virtual screening from an in-house molecular library in order to investigate their potential efficacy against PDAC cells. Similar approaches have allowed identifying several splicing modulators other than SF3B1 inhibitors in different high-throughput screens, which are currently undergoing further evaluation in preclinical studies, as reviewed previously [73,74] . The emerging potential SF3B1 modulators IS1 and IS4 were able to inhibit cell proliferation in SUIT-2 and Hs766t cells, displaying IC 50 values ranging from 2.4 to 5.8 µM. Remarkable growth inhibition was also observed in Panc05.04 cells, harbouring SF3B1 mutations. This is in agreement with previous findings, showing that E7107 substantially reduced leukaemia cell burden in an isogenic mouse model carrying an Srsf2 P95H mutation as well as in PDX models from patients harbouring SRSF2 mutations compared to WT models [75] . The IC 50 values observed after treatment with our most promising compounds were however higher than what has been reported for PB and E7107 in different preclinical models of solid tumours, such as mesothelioma, where IC 50 values of these SF3B1 modulators are in the nanomolar range [54] . However, this might mitigate adverse events, which limited the clinical development of E7107 [19] . Similarly, the excellent results of splice-switching oligonucleotides and RNA interference in vitro are extremely difficult to translate to the clinical setting due to limited stability in plasma and intracellular uptake [76] . In the present study, we also evaluated the modulation of the gene expression of hENT1. It has been reported repeatedly that high hENT1 levels are correlated with increased gemcitabine cytotoxicity and prolonged disease-free status and overall-survival in patients receiving gemcitabine adjuvant chemotherapy [41] , including a PCR on laser-microdissected tissues study in which Giovannetti et al. [77] reported an overall survival of 25.7 and 8.5 months in PDAC patients with high and low levels of hENT1, respectively. Of note, the expression and activity of hENT1 is affected by multiple molecular mechanisms. In particular, it is worth mentioning that the TME of PDAC influences the expression of hENT1 causing PDAC gemcitabine chemoresistance. In fact, various components of the extracellular matrix limit the availability of oxygen (hypoxia), hindering the transport of gemcitabine via hENT1 [41] . Of note, several polymorphisms may affect the gene expression of hENT1, and therefore the efficacy of gemcitabine. Specifically, Myers and collaborators showed that individuals with CAG and CGC haplotypes exhibited significantly higher hENT1 expression than individuals with the normal CGG haplotype [78] . Other mechanisms affecting hENT1 expression include epigenetic modulation and microRNA [41] , and recent studies have shown interesting interrelationships between miRNA and splicing factors in PDAC [79] . Remarkably, the IS1 and IS4 compounds potentiated the activity of gemcitabine. 
In previous studies, after SF3B1 and PHF5A knockdown, leukaemia cells became highly sensitive to mitomycin C, suggesting that a combination of splicing modulation with DNA damaging agents could achieve synergistic effects [80] . However, we might also hypothesise that this effect is caused by the positive modulation of hENT1 mRNA expression, for which a low expression has been associated with gemcitabine resistance in different cancer cell types [41] . Thus, our data suggest that splicing inhibition can reverse resistance to gemcitabine. In addition, using a 3D culture model (e.g., spheroids) of primary cell culture that mimics the 3D organisation of PDAC tumour cells in vivo [81] , we showed that the antitumour activity of gemcitabine was significantly increased by the simultaneous addition of IS4. Finally, the IS1 and IS4 compounds were also able to induce a splicing shift from RON and ΔRON after 24 h from the start of treatment, which might at least in part explain the strong anti-migratory ability of IS1 and IS4 in SUIT-2 cells. Of note, RON and cMET are important indicators of prognosis in PDAC, and previous studies have shown the synergistic interaction of inhibitors of these protein kinases with gemcitabine [81,82] , further providing new means to predict clinical outcome and targets for more effective therapies against PDAC. Other markers should be evaluated in the future. However, another splice variant evaluated in previous studies, MCL-1 (myeloid cell leukemia 1) [58] , did not show an aberrant splicing pattern when evaluated using IS4 and not even with PB as reference splicing inhibitor. Therefore, we did not proceed with this marker in view of our potential SF3B1 modulators. Novel compounds targeting pivotal splicing factors, such as SF3B1, could have relevant antitumour activity, and, in the present study, we identified four potential SF3B1 inhibitors, selected from an in-house library, that showed cytotoxic and antimigratory activity in PDAC cells and potentiated the antitumour effects of gemcitabine. Our studies supported the role of RON and hENT1 modulation as molecular mechanisms to be further exploited for the characterisation of these new therapeutic approaches, other than for prognostic purposes [1] . In conclusion, our novel findings prompt further analysis of the selectivity and toxicity of our potential SF3B1 inhibitors, as well as the role of the modulation of RON and hENT1 for further studies in appropriate preclinical models, including in vivo models and new model systems [83] , in order to guide the rational development of new drug combinations that could reverse chemoresistance of PDAC.
2021-10-15T16:00:00.545Z
2021-10-08T00:00:00.000
{ "year": 2021, "sha1": "5f5116ea8a9984a13d6366d894f225bf6d07c872", "oa_license": "CCBY", "oa_url": "https://cdrjournal.com/article/download/4348", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a127509be794a339fd51b420321cd9c99530cdc8", "s2fieldsofstudy": [ "Biology", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
53046499
pes2o/s2orc
v3-fos-license
Audio-Based Activities of Daily Living (ADL) Recognition with Large-Scale Acoustic Embeddings from Online Videos Over the years, activity sensing and recognition has been shown to play a key enabling role in a wide range of applications, from sustainability and human-computer interaction to health care. While many recognition tasks have traditionally employed inertial sensors, acoustic-based methods offer the benefit of capturing rich contextual information, which can be useful when discriminating complex activities. Given the emergence of deep learning techniques and leveraging new, large-scaled multi-media datasets, this paper revisits the opportunity of training audio-based classifiers without the onerous and time-consuming task of annotating audio data. We propose a framework for audio-based activity recognition that makes use of millions of embedding features from public online video sound clips. Based on the combination of oversampling and deep learning approaches, our framework does not require further feature processing or outliers filtering as in prior work. We evaluated our approach in the context of Activities of Daily Living (ADL) by recognizing 15 everyday activities with 14 participants in their own homes, achieving 64.2% and 83.6% averaged within-subject accuracy in terms of top-1 and top-3 classification respectively. Individual class performance was also examined in the paper to further study the co-occurrence characteristics of the activities and the robustness of the framework. INTRODUCTION Sensing and recognizing human daily activities has been demonstrated to be useful in many areas, from sustainability to health care. For example, older adults in their own homes could benefit from proactive assistance and monitoring in as a way to "live-in-place" and not be forced to move to an assisted-living or nursing facility. While on-body inertial sensors such as accelerometers and gyroscopes are popular in many human activity recognition applications, prior work suggests that they are not effective at recognizing complex and multidimensional activities on their own [1][2][3]. Audio, on the other hand, offers much promise in this respect; many daily activities generate characteristic sounds that can be captured with any off-the-shelf device with a microphone. Hence, researchers have proposed several different types of audio event recognition frameworks over the years, from applications on wearable and mobile devices [4,5] to home-based sensor systems [6,7]. With the development of deep neural networks in recent years, several efforts have been made by researchers to model large-scaled acoustic events. These include the usage of deep learning for sound classification on existing datasets [8] and the recognition of acoustic categories in the wild [9]. However, most such frameworks suffer from the laborious collection of ground truth training data. Some researchers have explored the use of crowd-sourced data to alleviate the problem, such as Nguyen et al. and Rossi et al. Despite encouraging results, these methods have proven difficult to scale as they partially rely on human input or interaction. Large-scale, open-source audio collections now offer a rich source of audio data reflecting a large number of everyday activities. In this work, we present a novel scheme to recognize activities of daily living in the home. 
Instead of directly collecting ground truth data and labels from users as in most prior research, we explored the feasibility of using millions of audio embeddings from general-sourced YouTube videos as the only training set. Due to the considerable size and highly unbalanced characteristics of the on-line data, our method combines both oversampling and deep learning approaches. The contributions of this work can be summarized as: • A novel framework for activity recognition with ambient audio that relies exclusively on a large-scale audio dataset. It aims to empower traditional audio-based activity recognition by applying over 2 million audio embedding features from nearly 52,000 public Youtube videos. Typically, the proposed framework does not require feature augmentation and semi-supervised learning processes as in relevant research. • An evaluation of the framework with 14 subjects in their homes and 15 activities of daily living. The proposed method was able to yield promising performance for activities and was robust to environmental variability. Inertial Sensing Activity recognition based on sensor data is not new. It has been widely used for several domains including self monitoring [10], assistance in smart home [11] or diagnosis of some activity-related disease [12]. Traditional approaches of activity recognition rely on inertial sensors such as accelerometers and gyroscopes. They can be implemented flexibly on smart phones [2,3], smart watches [10,13] or wearable sensor boards [1]. For example, Kwapisz et al. [2] managed to recognize walking, jogging, upstairs, downstairs, sitting and standing by just using a smart phone in subjects' pockets. Thomaz et al. [10] proposed the usage of 3-axis accelerometers embedded in an off-the-shelf smart watch for detection of eating moment. Similarly, Ravi et al. [1] showed the feasibility of attaching a sensor board on human body for simple movement classification. Most of such work focused on recognition of very simple activities involving limited types of sensors. The limitation of the activity class availability can be improved by combining more sensing modalities. However, implementation of complex sensor arrays always brings challenges in insufficiency of training sets and constraints of energy or computing resource. The subject and location sensitivity of traditional inertial sensing can make it even harder for generalization of activity models [14]. Audio Sensing Out of such reasons, researchers have proposed to empower activity recognition by video and audio approaches. Microphones have the benefits of simplicity and flexibility for implementation. Eronen et al. [15] proposed the pilot study to recognize common contexts based on sounds by using statistical learning methods. Yatani and Truong [16] explored the recognition of 12 activities related to throat movement such as eating, drinking, speaking and coughing by acoustic data collected from human throat area. This was completed by a simple wearable headset consisting of a tiny microphone, a piece of stethoscope and a Bluetooth module. Another study showed that human eating activity can also be effectively inferred by using wrist-mounted acoustic sensing [4]. This implies the practicality of simple audio-based activity recognition by off-the-shelf products such as smart watches. With the development of smart phones in recent years, phone-based acoustic sensing also shows great capability on activity recognition tasks. The AmbientSense application [5] is an example. 
It is an Android app that can process ambient sound data in real time either on user front end or on an on-line server. It was tested on mainstream smart phones (Samsung Galaxy SII and Google Nexus One) and yielded satisfactory results on classification of 23 context of daily life. Lu et al. [17] developed the SoundSense to detect multiple speech, music and ambient sound categories based on mobile platforms. Acoustic sensing can also be used for indoor scenarios, especially when video-based methods may bring privacy concerns. Laput et al. [6] described the concept of general-purpose sensing, where multiple sensor units including a microphone were embedded on a single home-oriented sensor tag. Chen et al. [7] provided an audio solution for detection of 6 common activities in bathroom based on MFCC features. Their work typically aims at elder care since direct behavioral observations can be quite embarrassing to be shared with clinicians. More recently, acoustic sensing and recognition have been significantly improved based on the usage of deep learning techniques. Salamon and Bello [8] proposed an architecture combining feature augmentation and a CNN to evaluate on-line audio data. Lane et al. [9] developed the DeepEar to classify multiple categories for different sensing tasks based on a well-tuned fully connected network. Audio-Based Activity Recognition with Online Data Most of the prior work requires a manual collection of ground truth audio data from individual users. This can be quite laborious especially if we are targeting at multiple classes of activities. Also, it is actually unrealistic to ask the users to train the model on their own before using it. Hence, Hwang and Lee [18] introduced a crowd-sourcing framework for the problem. They developed a mobile platform to collect audio data from multiple users. the Augur, a system leveraging contexts from on-line fictions to predict human activities in the real world. In terms of audio-based classification, Aytar et al. [27] described the SoundNet framework for knowledge transfer between large-scaled videos and target sounds based on a deep CNN. To the best of our knowledge, however, very few attempts have been made to adapt such tremendous scale of on-line audio samples for real-world activity recognition, and this can be even challenging when leveraging the YouTube sound features due to the ambiguous source of the raw videos from movies, cartoons to crowd-sourced data. The most relevant up-to-date achievement was proposed by Laput et al. [28], where the researchers developed a mixed process of audio augmentation for a deep network and combined the online sound effect libraries with the Audio Set data for audio context classification. Their work shows promising results when applying the augmentation process with the online sound effect data. However, the performance of the framework dropped significantly if purely using the video sounds (i.e. the Audio Set [24] data) without augmentation. Moreover, their work mainly focused on the classification of environmental contexts, and the statistics in terms of individual activity classes still largely remained unexplored. In our research, we aimed to study the feasibility and performance reported from the perspective of individual activity recognition by leveraging only the online video sound clips for training. 
Our in-lab and multi-subject studies showed that the proposed framework was able to yield promising performance even without any feature augmentation or semi-supervised learning techniques. Audio Set In 2017, Google's Sound Understanding team released a large-scale acoustic dataset, named Audio Set [24], endeavoring to bridge the gap in data availability between image and audio research. The Audio Set contains information of over 2 million audio soundtracks drawn from general YouTube videos. The dataset is structured as a hierarchical ontology consisting of 527 class labels and the size is still growing now. All audio clips are equally chunked as 10 seconds long and labeled by human experts. Label Association Before implementation, we need to consider the range of target activities and how we can associate the class labels in the Audio Set with them. Our research leverages existing audio samples and labels from on line as the training set, and we aim at target activities that frequently appear in the home. Specifically, the range of our target classes has been limited to target activities that are suitable for audio-based recognition. Here 'suitable' means that the sound of the activity should be featured and easily captured in practice. Hence, we excluded pet categories for our study because such sound is normally sparse in natural home scenarios. Classes such as 'silence' was also not chosen because the corresponding attributes can be ambiguous from sleeping, standing, to maybe just absence of the person in the room. Body movement with very weak sound features is not suitable for audio-based recognition as well. Further more, it is not always possible to find an exact matching between the Audio Set labels and the actual activities. In such cases, we adopted an indirect matching process. That is, we first determined the most relevant objects and environmental contexts regarding to the target activities. We then chose Audio Set classes of such objects and contexts as representation of the activities. For example, we used class 'water tap' and 'sink' as representation of 'washing hands and faces' as all three classes involve usage of water and the features are quite similar. This is actually a very subjective process as there is no quantized measurement to determine the similarity between such relevant classes and the actual target classes. For the class 'listening to music', we focused on studying only piano-related musics as examples. It is noted that the dataset provides a quality rating of audio labels based on manual assessment. Most of the labels have been assessed by experts based on a random check of 10 audio segments within the label. The samples of each label are actually divided into three subsets (evaluation, balanced training, and unbalanced training) for training and evaluation purposes. The evaluation and balanced training sets are of much smaller size than the rest unbalanced training set, and due to the considerable size of samples and factors such as misinterpretation or confusibility, many class labels of the unbalanced training sets are actually of poor rating results. In our framework, we did not consider the sample ratings and we incorporate all three evaluation, balanced training and unbalanced data for our training set. We therefore determined 15 common home-related activities for the framework. They are associated with 18 Audio Set labels. 
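As a concrete illustration of this association step, the sketch below encodes a small activity-to-label mapping and uses it to filter a hypothetical index of Audio Set segments; the label names, file layout and column names are illustrative placeholders and do not reproduce the full 15-activity, 18-label mapping used in the study.

```python
# Minimal sketch of the label-association step. It assumes a CSV index
# with one row per Audio Set clip and a "labels" column of class names;
# the mapping only illustrates the indirect matching idea (e.g. 'water
# tap'/'sink' standing in for hand washing).
import pandas as pd

ACTIVITY_TO_AUDIOSET = {
    "washing_hands_face": ["Water tap, faucet", "Sink (filling or washing)"],
    "brushing_teeth":     ["Toothbrush"],
    "chatting":           ["Conversation"],
    "listening_to_music": ["Piano"],
}

def build_training_index(segments_csv: str) -> pd.DataFrame:
    """Attach one activity label to each segment and drop segments whose
    labels map to more than one target activity (mutual exclusiveness)."""
    df = pd.read_csv(segments_csv)          # hypothetical index file
    def match(labels: str):
        hits = {act for act, names in ACTIVITY_TO_AUDIOSET.items()
                if any(name in labels for name in names)}
        return hits.pop() if len(hits) == 1 else None
    df["activity"] = df["labels"].apply(match)
    return df.dropna(subset=["activity"])
```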
Table 1 shows the association between our target activities and the Audio Set class labels, and all audio embeddings of the listed Audio Set classes are used as the only training data in our proposed scheme. Oversampling A typical characteristic of the Audio Set data is the unbalanced distribution in terms of the class size. In our implementation, we also removed samples with label co-occurrence among the target classes to ensure mutual exclusiveness, and table 2 shows the number of embedding vectors per class in our raw training set without any sampling process. The numbers here include embeddings from all three subsets (evaluation set, balanced training set and unbalanced training set). The actual size for some classes is slightly smaller than they appear on the released Audio Set since we adopted the converted Python Numpy version of features as mentioned. As we can see, classes 'chatting' and 'listening to music' have the most embeddings (174,220 and 115,200 respectively). Class 'brushing teeth' is of the least, only 1230, which accounts for 0.7% of the largest class. In other words, the two majority classes account for over half of the whole training set. The unbalanced distribution of the class size leads to highly unbalanced training in our study. As we will see in the dedicated test section, the distribution of training class can heavily affect the recognition performance, and we implemented two oversampling processes for the problem. The unbalanced distribution of labels can mainly be affected by two facts. Firstly, the distribution actually reflects the diversity and frequency of the class labels within the source YouTube videos. For example, elements of chatting or musics can be captured in a large amount of video topics, from advertisement, news to cartoons. Brushing teeth, on the contrary, appears much less and typically just in some movie scenes or daily life recordings. Chatting can also involve several modalities according to the speaker's gender, age and the context of the speech, while brushing activities seem to be much more similar among each. Secondly, we are using only samples without label co-occurrence among the target classes. The size of the remaining disjoint data can also affect the actual distribution in our training set. The effects of unbalanced training on classification have been discussed in several work [32][33][34]. Without prior knowledge of the unbalanced priors, a classifier can always tend to predict the majority classes, and there should be higher cost for misclassifying the minority classes [32]. In our scheme, we implemented random oversampling with replacement and synthetic minority oversampling technique (SMOTE) [33] to handle the problem. The process of random oversampling can be divided into two steps. The first is to calculate the sampling size for each minority class, i.e. to calculate the difference of size between the target class and the majority class. Then each minority class will be re-sampled with replacement until the sampling size is filled. This is actually replication of existing data without introducing any extra information into the dataset. The SMOTE, on the contrary, works by adding new elements for the minority classes. It leverages the K-nearest-neighbors (KNN) approach to first generate new data points around the existing data points. Then one of the neighbors is randomly selected as the synthetic new elements and will be introduced to the minority class. 
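As an illustration of these two strategies, the sketch below applies them to an embedding matrix with the imbalanced-learn package, mirroring the default parameters and fixed random state used in our setup; the array names are placeholders, and the exact class counts depend on the label filtering described above.

```python
# Minimal sketch of the two oversampling strategies: random oversampling
# with replacement and SMOTE. X is an (n_samples, 128) array of embedding
# vectors and y the corresponding activity labels.
from imblearn.over_sampling import RandomOverSampler, SMOTE

def oversample(X, y, method="random"):
    if method == "random":
        sampler = RandomOverSampler(random_state=0)   # replicate minority samples
    else:
        sampler = SMOTE(random_state=0)               # synthesise new minority samples
    # Note: older imbalanced-learn releases (e.g. 0.3.x) name this method fit_sample.
    return sampler.fit_resample(X, y)

# X_ros, y_ros = oversample(X, y, "random")
# X_sm,  y_sm  = oversample(X, y, "smote")
```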
In our implementation, the oversampling process was developed based on the Python imbalanced-learn package [33,35]. All parameters were set as default in the imbalanced-learn package version 0.3.3, except that the random state was kept as 0. By the oversampling processes, we obtained 2,613,300 embedding vectors in total for the 15 classes. The size is the same for both random oversampling and the SMOTE. Architecture Deep learning has been proven to be powerful for large-scale classification. Due to the considerable size of audio samples involved in our study, and also to keep the same feature format as released in the Audio Set, we adopted neural networks for both embedding feature extraction and classification in our proposed framework. Fig.1 shows the architecture.
Fig. 1. Architecture of our proposed scheme. We applied the VGGish model [29] as the feature extraction network. The feature network was pre-trained on the YouTube-100M dataset and all parameters were fixed in our training process. The generated embeddings are then segmented and passed to the classification network. Our classification network consists of plain 1-dimensional convolutional layers and dense layers, and the model was trained and fine-tuned on the oversampled Audio Set [24] embeddings.
Overall, there are two networks in our mechanism, a pre-trained feature extraction network and a classification network. In detail, we adopted the pre-trained VGGish model [29] as the extraction network, and all parameters of the network were fixed during our training process. The released VGGish model codes (https://github.com/tensorflow/models/tree/master/research/audioset) include pre-processing steps for extracting the log mel spectrogram features to feed the model and post-processing steps for PCA transform and element-wise quantization, which have also been adopted on the released Audio Set data. In our implementation, the audio pre-processing step takes as input audio waveforms with 16 bit resolution, so we manually convert other formats of audio samples (such as raw recordings from smart phones) to the wave format using a free on-line converter (https://audio.online-convert.com/convert-to-wav) before passing the raw audio for processing. The parameters of the VGGish network were kept constant during the whole training and validation process. The network outputs a vector of 128 embedding features for every second of the input audio. Our classification network consists of 3 plain convolutional layers and 2 dense (fully connected) layers. The structure is shown in Fig.2. The convolutional layers are all 1-dimensional with linear activation and 'same' padding to preserve the feature size. The numbers of channels are 19, 20 and 30, respectively, for the 3 layers. The kernel size was set as 5 with a stride of 1 for all layers. We applied 500 neurons for the first dense layer. The second dense layer is the output layer, thus there are 15 neurons and the output activation was set as softmax. A flatten layer was used to connect the convolutional layers and the dense layers. We chose categorical cross entropy as the loss. In terms of the optimizer, we applied stochastic gradient descent with Nesterov momentum. The learning rate was set as 0.001 with 1e-6 decay and 0.9 momentum. The network takes as input (128 * 1) segmented and normalized embeddings from the segmentation step of our architecture and outputs the predicted probability distribution over the labels. Under the top-1 classification scenario, the label with the highest probability is selected as the final prediction.
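The following is a minimal sketch of how a classification network with these settings can be expressed in Keras; it is written against the present-day tf.keras API rather than the original Keras/TensorFlow 1.0 setup, and the commented training call and variable names are illustrative assumptions.

```python
# Sketch of the classification head described above:
# 3 x Conv1D -> Flatten -> Dense(500) -> Dense(15, softmax).
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_classifier(n_classes: int = 15) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(128, 1)),   # one averaged, min-max scaled embedding vector
        layers.Conv1D(19, 5, strides=1, padding="same", activation="linear"),
        layers.Conv1D(20, 5, strides=1, padding="same", activation="linear"),
        layers.Conv1D(30, 5, strides=1, padding="same", activation="linear"),
        layers.Flatten(),
        layers.Dense(500),
        layers.Dense(n_classes, activation="softmax"),
    ])
    # The original setup also applied a 1e-6 learning-rate decay.
    sgd = optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True)
    model.compile(optimizer=sgd, loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_classifier()
# model.fit(X_train, y_train_onehot, batch_size=100, validation_split=0.1)
```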
Our classification network was built and compiled on the Python Keras API [36] with a Tensorflow [30] backend. The weights were trained and fine-tuned on the Audio Set embeddings. In addition to the neural networks and audio processing steps, we also applied embedding segmentation to determine the unit length of an audio segment for recognition. This is natural because the length of a single embedding vector (1 second) can be too short for some activities and may not be able to capture enough information for recognition. Also, increasing the segment length can help to alleviate the effects of outliers and noise within the real-world recordings. Hence, we introduced a segmentation process on the embeddings between the two networks. For convenience, in the following sections we will describe the length of a unit segment by the number of embedding vectors (1 second each). In our architecture, the segmentation is completed by grouping the embedding vectors using a fixed-size window with no overlaps. The vectors are then averaged within each group to yield a new embedding vector. In other words, each unit audio segment is described by an averaged embedding vector. Activity labels are then assigned to the averaged vectors, and those vectors serve as the instances for classification. The embeddings are normalized using min-max scaling before fitting to the classification network. The source code of our overall architecture has been made publicly available online (https://github.com/dawei-liang/AudioAR_Research_Codes). Both the oversampling and training processes were developed on the Texas Advanced Computing Center (TACC) Maverick server. Specifically, we applied the NVIDIA K40 GPU on the server to accelerate the training process. The training embeddings were split into 90% for training and 10% for validation using the Python Scikit-learn package [37]. The TensorFlow version provided was TensorFlow-GPU 1.0.0 [30]. Before training, we set all random seeds as 0 to ensure reproducible training. A batch of 100 embedding vectors was input each time. The classification network was trained until the validation performance no longer improved (in our study, 15 to 20 epochs depending on the re-sampling set in use). Pilot Test We evaluated the feasibility of our framework based on a pilot lab study. There are two purposes to do so. Firstly, we would like to check if our proposed methodology can actually work based on real-world ambient recordings. Although the architecture had been well trained on the Audio Set data, the characteristics of the YouTube video sounds and real-world ambient sounds could possibly be different. Secondly, we would need a real-world test to determine the best combination strategy for the sampling process and the classifier. In this dedicated study, we collected sounds of the target activities in the wild by placing an off-the-shelf smart phone (Huawei P9) nearby for recording. In the pilot study, the context of the activities was well-controlled with low variability. Specifically, we tried our best to exclude irrelevant environmental noise such as sounds of toilet fans or air conditioners during the collection. Also, when a target activity was performed there were no extra on-going activities. When the study began, the smart phone was placed in a natural fashion near where the activity was going to be performed.
Pilot Test

We evaluated the feasibility of our framework based on a pilot lab study. There were two purposes for doing so. Firstly, we wanted to check whether our proposed methodology can actually work on real-world ambient recordings: although the architecture had been well trained on the Audio Set data, the characteristics of YouTube video sounds and real-world ambient sounds could be different. Secondly, we needed a real-world test to determine the best combination strategy for the sampling process and the classifier. In this dedicated study, we collected sounds of the target activities in the wild by placing an off-the-shelf smart phone (Huawei P9) nearby for recording. In the pilot study, the context of the activities was well controlled with low variability. Specifically, we tried our best to exclude irrelevant environmental noise such as sounds of toilet fans or air conditioners during the collection. Also, when a target activity was performed there were no extra on-going activities. When the study began, the smart phone was placed in a natural fashion near where the activity was going to be performed. The collection was manually started when the sound of the activity could be clearly captured. Sound recording for each activity lasted for 60 seconds, and it was stopped when the proposed time ended. This same process was repeated for each individual activity until the collection for all 15 activities was completed. We chose a segmentation size of 10 embedding vectors (9.6 seconds) for the dedicated study.

Fig. 3. Recognition results of the pilot study using random oversampling + CNN (top) and raw embeddings input + CNN (bottom). It is obvious that the performance with the oversampling process far exceeds the performance with only raw embeddings as input.

The recognition performance was evaluated based on 3 different sampling processes (raw embeddings input/no oversampling, random oversampling, and the SMOTE). To make it clearer how the classification network performs, we also tuned and trained a random forest classifier on the same training sets as a baseline. The random forest was built using the Python Scikit-learn package [37]. We used the overall accuracy and the overall F-score as the performance metrics. In binary classification, the F-score is calculated as 2 * (precision * recall) / (precision + recall), and it incorporates both precision and recall performance. In our study, the overall F-score across multiple classes is calculated as the weighted average of the F-scores of the individual labels. Table 3 shows the recognition performance based on the different architectures. For convenience, the random forest is abbreviated as RF in the table. From the results we can see that the random forest without any sampling process yields the worst accuracy and F-score (34.4% and 24.5%). This is comparable to the dedicated study by Rossi et al. [21], where the authors trained a GMM on 4678 raw samples from the crowd-sourced Freesound dataset and obtained 38% overall accuracy for 23 context categories. Clearly, introducing the classification network significantly improves the recognition performance, especially when combined with the oversampling processes. The combination of random oversampling and our classification network yields the best performance (81.1% overall accuracy and 80.0% overall F-score). Generally, classifiers with oversampling outperform those without. Fig. 3 shows in detail the performance of the individual classes with and without oversampling; the entries have been normalized for each class. As we can see, the classification network fed with raw embeddings overfits to some of the majority classes such as 'playing music' and 'strolling'. The network fed with the randomly oversampled embeddings, on the contrary, yields equally promising results for most classes. The worst class for the top-1 architecture was 'flushing toilet', with only 17% class accuracy. This is probably because the segmentation length was too long for the flushing activity and too much irrelevant information was captured within the segments.

Fig. 4. F-score performance with different segmentation sizes. The performance was worst when no segmentation process was applied. With increasing segment size, the F-score increased significantly and remained stable around 80%. The random guess level was around 7%.

To determine how the segmentation process affects the classification performance, we compared the overall F-score under different sizes of embedding segmentation. The comparison is shown in Fig. 4. As a reference, we also plotted the random guess level (around 7%).
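For reference, the random forest baseline and the two overall metrics described above can be computed with scikit-learn as sketched below. The baseline's hyper-parameters are placeholders, since the values tuned in the study are not stated here.

```python
# Illustrative evaluation sketch: a scikit-learn random forest baseline and the two
# overall metrics used above (accuracy and class-support-weighted F-score).
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

def overall_metrics(y_true, y_pred):
    overall_acc = accuracy_score(y_true, y_pred)
    overall_f1 = f1_score(y_true, y_pred, average="weighted")  # per-class F-scores weighted by support
    return overall_acc, overall_f1

def evaluate_rf_baseline(X_train, y_train, X_test, y_test):
    rf = RandomForestClassifier(random_state=0)                # placeholder hyper-parameters
    rf.fit(X_train, y_train)                                   # X_*: (n_samples, 128) averaged embeddings
    return overall_metrics(y_test, rf.predict(X_test))
```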
From the figure, we can see that the performance was worst when no segmentation process was introduced (i.e. 1 embedding vector per segment), with an F-score of only 65%. By applying a bigger segment size, the F-score increased significantly to over 80%. In addition, we can see that a unit segmentation length of 5 embedding vectors already enabled the instances to capture enough information for the classification. Further enlarging the segmentation size no longer improved the overall recognition performance.

More Discussions towards Domain Shifts

From the perspective of transfer learning, our framework is actually a domain adaptation process in which we try to find a mapping between the source YouTube soundtracks and the real-world recordings. Generally speaking, audio features from such on-line videos can be very different from those of real-world collections for activity recognition. Interestingly, our classification network yielded only 53% validation and training accuracy on the randomly oversampled Audio Set embeddings, yet the performance of our top-1 scheme reached over 80% on the ambient recordings. We also noticed that the validation performance on the Audio Set data could have been further improved by adopting deeper layers. However, increasing the depth of the model no longer helped to improve the performance on real-world data (it might even harm the performance). A possible reason is that ambient sounds from the real world (especially in home settings) are generally less complex and more 'linearly separable' than those in YouTube videos. In other words, a model that fits the Audio Set data too closely probably over-fits with respect to the sound recordings from our home settings.

Test Design

Through the pilot test, we verified the feasibility of the proposed framework and determined the appropriate combination of the oversampling and segmentation strategies with the proposed networks. To generalize the study to more natural settings, we then implemented in-the-wild scripted tests with 14 human subjects in their actual home environments. In the previous feasibility study, we made several assumptions about the test environment. Firstly, there was little irrelevant environmental noise, such as noise of common home appliances, during the collection process; the audio samples were recorded by a smart phone nearby with almost no artificial or ambient disturbance. Secondly, the start and end points of the collection were carefully selected to ensure high-quality recordings. Thirdly, there were almost no overlaps or co-occurrences among the activities; in other words, individual collections were strictly mutually exclusive. However, in real-world settings such assumptions can easily be broken. For example, human artifacts such as sounds from roommates, and ambient noise from air conditioners or refrigerators, are almost inevitable in our homes. Also, people tend to perform activities in a more continuous way, and it would be inconvenient if the framework always required a pause between activities. Hence, we were interested in seeing how the proposed architecture performs under such more natural circumstances. The real-world tests were performed based on a scripted scenario. A key advantage of the scripted tests is that the procedure of following the script can simulate the continuous flow of human activities just as in natural home settings.
All target activities were listed in advance in the form of instructions such as "First head to the bathroom, wash your hands and face" or "After the juice is prepared, please warm some food using the microwave oven". Each human subject then simply followed the instructions on paper and freely performed the activities. We adopted the same off-the-shelf device (Huawei P9) for the collection. The smart phone was carried on the subjects' arms with a wristband so that they could perform the activities without paying attention to the collection process. During the whole collection, an expert (one of the authors of the paper) followed the subjects while they were performing the activities but kept a distance (e.g. waiting outside the room while the subject was performing room cleaning) to allow sufficient freedom for the subjects. The key roles of the expert were to answer questions from the subjects during the test and to label the time stamps of the target activities using a timer started simultaneously with the recording phone. To avoid subjective bias, the tested volunteers were not told the full purpose of the experiment until the whole collection was completed. All participants of the study were required to sign an IRB protocol form before the tests. To incorporate variability factors in the tests, the expert would occasionally introduce a small amount of free chatting during some of the activities such as watching TV, frying or strolling. To simulate the concurrence of activities, the subjects were allowed to perform some activities simultaneously, such as short washing during the frying work. In addition, all 14 tests were performed in the volunteers' own homes, and they were allowed to leave some household appliances such as air conditioners or the refrigerator compressor working as normal. To further reflect their normal routines, they were encouraged to use their own devices or tools (e.g. their own vacuum cleaner, kitchen and toilet appliances) for the collection. In our script, most activities were required to be performed just once, and their length was determined freely by the participants. We prepared some bacon, cucumbers or carrots in advance for the activities 'frying food', 'chopping food' and 'squeezing juice'. Given the diversity of television programs, the participants were asked to watch 5 different channels for around 30 seconds each for the class 'watching TV'. For the class 'enjoying music', the subjects were asked to play their own piano or to listen to relevant types of music, such as piano solos or symphonies, chosen by themselves. The class 'shaving' was waived for female subjects.

Results and Discussions

In total we were able to obtain 32105 seconds (535 minutes) of audio data from 14 subjects (7 males and 7 females). Based on the labeled time stamps, we manually segmented the target activity data from the raw recordings. Overall, we identified that roughly 12078 seconds (201 minutes) of the clips were target-related, accounting for 37.6% of the total. Such sparsity is typical of audio-based activity recognition in practice, as not all home-related activities generate distinctive sound features, and audio-based frameworks are not suitable for those that do not. We then applied the best architecture of the proposed framework (classification network with random oversampling) to evaluate the results. The unit segmentation length was set to 10 embedding vectors (9.6 seconds). The test results were first examined for each individual participant.
Fig. 5 shows the overall classification performance for individual subjects. Because of the high inequality of segment lengths among the activities, we adopted an overall weighted average as the performance metric. In other words, for a given subject, the contribution of each tested instance to the overall accuracy is inversely proportional to the amount of tested data within the corresponding activity class. By weighting the instances, each activity class within the subject contributes equally to the overall performance. In our studies, the averaged top-1 classification accuracy was 64.16% across all tested subjects. In addition to the top-1 classification, we also evaluated the overall performance under a top-3 classification scenario, given the co-occurrence of activities and the variability during the tests. In the top-3 classification, the predicted labels with the 3 highest probabilities are considered as the final predictions, and a true positive is counted if any of the 3 labels matches the ground truth (a brief code sketch of this weighted top-k evaluation is given below). This accounts for variation in the predictions due to possible similarity of sound features or concurrence of the actual activities. From the figure we can see that the top-3 performance was much better than the top-1 scenario, with an averaged accuracy of 83.59% for all 14 subjects. To evaluate the performance of individual activity classes, we also summarized the class accuracies across all tested subjects. We calculated the average values for both the top-1 and top-3 classification, and Fig. 6 and Fig. 7 present the statistics for both settings. Instead of directly applying confusion matrices, we adopted a similar weighted approach for the analysis. That is, tested instances from each subject were assigned a weight inversely proportional to the amount of data in the corresponding class. This enables samples from different subjects and different test environments with varying data sizes to contribute equally to the overall performance of the target classes. In addition, the figures also indicate the corresponding deviations across subjects. From the figures, we can also see that the performance of most activities increased significantly from the top-1 scenario to the top-3 scenario, reaching nearly 100% mean accuracy with much smaller deviations. This implies the existence of activity co-occurrence and overlaps of acoustic features among distinct activities, such as simultaneous chatting during outdoor strolling or a music show on TV, which are also commonly seen in natural home settings. Because of the differences in evaluation metrics and test conditions, it was challenging to directly compare the performance across the related work. As a reference, Rossi et al. [21] combined semi-supervised or manual filtering of outliers with a Gaussian Mixture Model (GMM) to classify 23 acoustic contexts. They extracted MFCC features from the Freesound dataset with a sequence length of 30 seconds for training. The best top-1 and top-3 classification performance was 57% and 80%, respectively, and only with manual filtering of the outliers. Hershey et al. [29] trained two fully connected networks, with and without the embedding extraction process, to classify the Audio Set [24] categories. They adopted the mean Average Precision (mAP) as the performance metric and obtained the best mAP of 0.31 only when taking the embeddings as input. Kong et al. [31] completed a similar test using an attention model from a probability perspective, achieving an mAP of 0.327 and an AUC of 0.965.
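As noted above, the class-balanced (weighted) accuracy with the top-k criterion used in the per-subject evaluation can be written compactly as follows. The names are illustrative and this is our reading of the metric as described in the text, not the authors' evaluation script.

```python
# Sketch of the weighted (class-balanced) top-k accuracy described above: each instance is
# weighted by 1 / (number of test instances of its true class), so every activity class
# contributes equally; a top-k hit counts when the ground truth is among the k most
# probable predicted labels.
import numpy as np

def balanced_topk_accuracy(y_true, y_prob, k=3):
    """y_true: integer class labels, shape (n,); y_prob: predicted probabilities, shape (n, 15)."""
    y_true = np.asarray(y_true)
    topk = np.argsort(y_prob, axis=1)[:, -k:]                  # indices of the k most probable labels
    hits = np.array([t in row for t, row in zip(y_true, topk)], dtype=float)
    labels, counts = np.unique(y_true, return_counts=True)
    weight_per_class = dict(zip(labels, 1.0 / counts))         # inverse of per-class instance count
    weights = np.array([weight_per_class[t] for t in y_true])
    return float(np.sum(hits * weights) / np.sum(weights))
```

With k=1 this reduces to the weighted top-1 accuracy reported above.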
The state of the art by Laput et al. [28] reported the classification performance from several perspectives. Their best model achieved 80.4% overall accuracy for 30 context classes recorded in the wild, but the framework relied on a mixed process of audio augmentation and combination of sound-effect libraries for training. When purely using the online video sounds (i.e. the Audio Set [24] data), their framework yielded an overall accuracy of 69.5% when check-pointed on the test set and 41.7% when tested directly on the real-world sounds. In contrast, our framework was not developed with any feature augmentation or semi-supervised learning processes. The overall classification accuracy of our model was 81.1% for 15 activity classes in the lab study, and our top-1 and top-3 performance was 64.2% and 83.6%, respectively, in multi-subject tests with 14 participants in their actual home environments.

CONCLUSION

The collection of ground-truth user data can be time-consuming and laborious in multi-class audio learning. This paper presented a novel framework leveraging large-scale on-line YouTube video soundtracks as the only training set to empower audio-based activity recognition. Specifically, our proposed framework aims to recognize 15 common home-related activities. Due to the tremendous size of the dataset and the highly unbalanced distribution of the training classes, our framework combined oversampling and deep learning architectures without further need for feature augmentation or semi-supervised learning processes. To evaluate its performance, we designed both in-lab pilot tests and in-the-wild scripted tests with multiple subjects in their homes. Results showed that our proposed framework was able to achieve promising performance and robustness to environmental variability in different test scenarios. Other design considerations, including the association of activity labels and the effects of embedding segmentation, were also discussed in the paper.
2018-11-09T23:15:45.831Z
2018-10-19T00:00:00.000
{ "year": 2018, "sha1": "64c16e5fffe805b00b64aacf48391bb2b52fafe5", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1810.08691", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e5093e0813bf56a3331fb3296b2e3a76fa2245b2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
12934718
pes2o/s2orc
v3-fos-license
Treating patients with low high-density lipoprotein cholesterol: choices, issues and opportunities

Three clinical trials have recently focused on the benefits of lipid-regulating therapy in populations with normocholesterolaemia and low high-density lipoprotein (HDL)-cholesterol. Two secondary prevention studies (Veterans Affairs HDL-Cholesterol Intervention Trial [VA-HIT] and Bezafibrate Infarction Prevention [BIP] trial) testified to the efficacy of fibrates in decreasing cardiovascular events, particularly in patients with coexisting risk factors, including hypertriglyceridaemia. The Air Force/Texas Coronary Atherosclerosis Prevention Study (AFCAPS/TexCAPS) demonstrated that a statin could decrease acute coronary events in patients with isolated low HDL-cholesterol in a primary prevention setting. The absolute risk reduction in coronary events in the VA-HIT study compares favourably with those reported from the statin-based Cholesterol and Recurrent Events (CARE) and Long-term Intervention with Pravastatin in Ischaemic Disease (LIPID) trials. The absolute risk reduction in AFCAPS/TexCAPS is similar to that in the West of Scotland Coronary Pravastatin Study (WOSCOPS). Recommendations are given concerning lifestyle and pharmacological management of low HDL-cholesterol. Optimal management also requires review of current treatment targets for HDL-cholesterol and triglyceride levels.

The Bezafibrate Infarction Prevention (BIP) study was a secondary prevention trial of the effect of bezafibrate retard (400 mg/day) on myocardial infarction and sudden death in middle-aged persons, most of whom were men and 10% of whom had diabetes. It lasted for approximately 6 years. The Air Force/Texas Coronary Atherosclerosis Prevention Study (AFCAPS/TexCAPS) [8] was a primary prevention trial of the effect of lovastatin (20-40 mg/day) on first major acute coronary events in middle-aged persons, most of whom were male and only 3% of whom had diabetes. Its duration was approximately 5 years. The entry plasma HDL-cholesterol level was lower in VA-HIT than in the BIP study and AFCAPS/TexCAPS, and the LDL-cholesterol at entry in the latter two trials was approximately 1 mmol/l higher than in the VA-HIT population. VA-HIT also included older patients, and more of these had diabetes, hypertension and obesity, and were current smokers. As shown in Table 1, the percentage increase in HDL-cholesterol was similar in VA-HIT and AFCAPS/TexCAPS, but significantly less than in the BIP study. The dominant plasma lipid changes were a reduction in triglycerides in VA-HIT and a reduction in LDL-cholesterol in AFCAPS/TexCAPS, changes that are consistent with the effects of a fibrate and a statin, respectively. In the BIP study the reduction in triglycerides was less than in VA-HIT, but a greater increase in HDL-cholesterol was seen with bezafibrate than with gemfibrozil.

Risk reduction, subsets and other trials

Although the relative risk reduction in the primary endpoint was greater in AFCAPS/TexCAPS than in VA-HIT, the absolute risk reduction was greater in the latter than in the former trial. This is consistent with the differences in background risk of CAD between the study populations. The overall relative risk reduction in the primary end-point in the BIP study was not statistically significant. In patients with entry plasma triglycerides in excess of 2.25 mmol/l, however, there was a significant 40% reduction in relative risk, with a corresponding 8% decrease in the absolute risk.
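The number needed to treat (NNT) figures discussed below follow directly from the absolute risk reduction (ARR) through the standard relation NNT = 1/ARR; applied, for example, to the 8% absolute risk reduction just cited, this gives approximately

\[
\mathrm{NNT} \;=\; \frac{1}{\mathrm{ARR}} \;=\; \frac{1}{0.08} \;=\; 12.5 \;\approx\; 12 \ \text{patients treated to prevent one event.}
\]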
After 5 years the incidence curve of the primary endpoint appeared to level off in the placebo group in the BIP study, and this might have been related to the use of open-label statin therapy by primary care physicians [9]. In VA-HIT the number of patients needed to treat (NNT) to prevent one event over the duration of the trial was approximately 24. This compares favourably with the 5-year NNT to prevent one fatal myocardial infarction/ death from CAD of 33 and 28 in Cholesterol and Recurrent Events (CARE) [10] and Long-term Intervention with Pravastatin in Ischaemic Disease (LIPID) [11], respectively, two trials that employed pravastatin. The NNT for persons with triglycerides greater than 2.25 mmol/l was 12 in the BIP study [9], which was significantly less than the NNT of 42 in patients with triglycerides greater than 2.42 mmol/l in the Helsinki Heart Study (HHS) [12]; this is consistent with the secondary and primary prevention settings of these trials, respectively. In AFCAPS/TexCAPS the overall NNT to prevent one event was 50, and this was chiefly attributable to patients with a baseline HDL-cholesterol below 0.9 mmol/l [13], in whom the relative risk reduction of cardiovascular events was 45%; this was significantly greater than for those with a baseline HDL cholesterol greater than 1.08 mmol/l. The overall efficacy of lovastatin in AFCAPS/TexCAPS is similar to that of pravastatin in the West of Scotland Coronary Pravastatin Study (WOSCOPS) [14], an earlier primary prevention trial in hypercholesterolaemic patients. Mechanisms of benefit The mechanisms of the benefits of fibrates and statins in the above trials is not clear, although angiographic data [15][16][17] support the notion of regression of atherosclerosis. Because statins and fibrates not only increase plasma HDL, but also lower the concentration of other proatherogenic lipoproteins, such as LDL and remnants, it is not possible to ascertain how much of the benefit seen in the trials is attributable to the increase in HDL [9]. The greatest relative risk reduction in AFCAPS/TexCAPS, however, was seen in patients with a baseline HDLcholesterol below 0.9 mmol/l, with the on-treatment apolipoprotein B to apolipoprotein A1 ratio being the most significant predictor of subsequent coronary risk [13]. An independent treatment effect on outcomes was not specifically identified in this analysis of the trial, but is likely to have been operational [18]. In VA-HIT multivariate analysis [19] showed that only on-trial HDL-cholesterol and treatment group assignment predicted coronary events at 5 years; the lowest coronary event rate was seen in patients with on-treatment HDL-cholesterol in excess of 0.9 mmol/l. However, only 23% of the benefit achieved with gemfibrozil could be explained by the on-treatment plasma levels of HDL-cholesterol, triglycerides and LDL-cholesterol. Hence, it is possible that the pleiotropic effects shared by fibrates and statins that directly inhibit atherogenesis and thrombogenesis may be responsible for the reduction in coronary events in the trials reviewed here [18,20]. It is likely, but unproven, that the cardiovascular benefits seen with gemfibrozil and lovastatin in normocholesterolaemic low-HDL populations reflect a class effect of fibrates and statins, respectively. 
Clinical implications The VA-HIT results therefore suggest that when LDLcholesterol levels are optimal, or near optimal, increasing HDL-cholesterol with reduction in triglyceride-rich lipoproteins may be a cost-effective approach to decreasing the incidence of coronary events in secondary prevention. The BIP subgroup analysis shows that, in hypertriglyceridaemic persons with coronary disease, bezafibrate is a cost-effective treatment for dyslipidaemia if triglycerides levels are greater than 2.2 mmol/l, despite the background risk being less than in patients included in VA-HIT. The AFCAPS/TexCAPS results have implications for primary prevention in the general population, and in particular for individuals with low HDL-cholesterol in whom the increased risk of coronary disease appears to be diminished. Significantly, in AFCAPS/TexCAPS only 17% of the patients in the trial met National Cholesterol Education Program LDL-cholesterol cut-points for the initiation of statin therapy [3]. In all the trials reviewed, the safety of fibrate and statin therapies was reaffirmed. Managing low high-density lipoprotein cholesterol Lifestyle and pharmacotherapy The initial approach to treating low HDL-cholesterol should involve lifestyle modification, including cessation of cigarette smoking, weight reduction, regular physical exercise and possibly a moderate regular intake of alcohol [21]. In secondary prevention, if this metabolic abnormality is not corrected nonpharmacologically, then a statin should be employed initially to lower LDL-cholesterol to below 2.6 mmol/l; if HDL still remains below 0.9 mmol/l with or without elevation of triglycerides, then the trial evidence supports employing a fibrate as adjunctive therapy. However, the benefit of fibrates in patients whose LDL has been reduced by statins has not been formally demonstrated. If LDL-cholesterol is initially below 3.4 mmol/l, then a fibrate may be used as first-line therapy, especially if triglycerides are also greater than 1.6 mmol/l [22], with the option of adding a statin later if the LDL-cholesterol remains above 2.6 mmol/l. Again, this remains to be specifically corroborated in a clinical trial, but the advice given is consistent with existing evidence. Broadly similar recommendations could apply to primary prevention in patients with multiple cardiovascular risk factors, and this may be particularly pertinent to asympto-matic patients with type 2 diabetes or visceral obesity [23,24]. The value of treating diabetic patients with fenofibrate is presently being addressed in the Fenofibrate in Event Lowering in Diabetes (FIELD) trial, and the role of statins in the Heart Protection Study (HPS) and the Collaborative Atorvastatin in Diabetes Study (CARDS) [9]. Hypertensive patients in routine practice are likely to have the metabolic syndrome or diabetes, and constitute a special group that merits fibrate or statin treatment in order to raise coexistent low HDL levels. However, whether there is incremental benefit in both primary and secondary prevention settings of employing a fibrate together with a statin in treating patients with low HDL-cholesterol remains to be rigorously demonstrated. This issue is being addressed in diabetic patients in the Oxford-based Lipids in Diabetes Study (LDS) [9]. 
Although niacin is the most potent agent for raising HDL levels and trial evidence suggests that it may decrease coronary events in hyperlipidaemic patients with previous myocardial infarction [25], it has never been tested in populations similar to those in VA-HIT and AFCAPS/ TexCAPS. Tolerability and adherence is a major problem with niacin. The efficacy of the new niacin formulations as monotherapy and combined therapy for patients with low HDL levels needs to be confirmed in clinical trials with cardiovascular end-points. Caution should be exercised when employing combination therapy of fibrates or niacin with statins, because of potential hepatotoxicity and myopathy; close monitoring of liver and muscle enzymes is therefore recommended. Finally, many patients with low HDL levels will have diabetes and insulin resistance [9], so that another important question for future trials is whether metformin or thiazolidinediones confer cardiovascular benefit over and above that due to lipid-regulating therapy. Concerning women Although a similar strategy for managing low HDL-cholesterol is at present recommended for both men and women, the specific use of oestrogen in postmenopausal women merits consideration. Oestrogen supplementation is well recognized to increase plasma HDL-cholesterol effectively [26], but it also increases triglycerides, and this may explain its lack of benefit on CAD risk in clinical trials [27]. The potential synergistic benefit of oestrogen replacement, including selective oestrogen receptor modulators, and that of other pharmacotherapies for increasing HDL requires further research. Other considerations Finally, in hypertensive patients with low HDL that is refractory to the aforementioned therapies, consideration should be given to employing α-blockers, such as prazosin and doxazosin. However, the efficacy of α-blockers alone and in combination with other agents that elevate HDLcholesterol still needs to be demonstrated in clinical end-point trials. Recent findings from the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) [28] have also cast some doubt on the use of doxazosin. Drug and gene therapies that selectively elevate HDL are under development [29], and may eventually find a place in clinical practice. Conclusion Translation of the trial data presented above into effective clinical management requires more accurate methods for assessing cardiovascular risk and reconsideration of the optimal therapeutic targets for plasma lipid and lipoprotein levels [9,22,30]. Although the guidelines for total cholesterol and LDL-cholesterol in risk assessment are well established [3][4][5], treatment recommendations concerning HDL-cholesterol are not as rigorous or aggressive. Failure to recognize HDL in assessing patients significantly underestimates their risk of CAD [31]. Also, measuring triglycerides has not generally been recommended in risk assessment, but its inclusion may be critical in patients with visceral obesity, hypertension and diabetes [9,30]. Serial, fasting blood tests will provide a more precise evaluation of triglyceride-mediated cardiovascular risk, particularly in the presence of an elevated HDLcholesterol. Given the results of VA-HIT and related studies, the National Cholesterol Education Program guidelines of a triglycerides level below 2.2 mmol/l and an HDL-cholesterol greater than 1 mmol/l as 'normal' certainly need to be reviewed [3]. 
Of relevance, data from the Prospective Cardiovascular Munster (PROCAM) Heart Study suggest that plasma triglycerides should be lowered to below 1.1 mmol/l and HDL increased to above 1.2 mmol/l in high-risk individuals to prevent coronary events [32,33], which is consistent with the aggregate findings of the trials reviewed here. Accordingly, expert bodies need to review their guidelines for assessing and treating plasma triglycerides and HDL-cholesterol levels [3][4][5]. Some consideration should also be given to employing apolipoproteins B and A1 in risk assessment and treatment, based on the AFCAPS/TEXCAPS findings [13]. Finally, for many patients with low HDL-cholesterol, treatment with statins and fibrates will need to be complemented with lifestyle changes and other drugs, including antihypertensive and antidiabetic agents. Ensuring patient adherence to all of these potentially effective measures probably remains the major challenge for the prevention and reversal of CAD.
2018-04-03T00:46:49.143Z
2001-05-11T00:00:00.000
{ "year": 2001, "sha1": "b645d0dff7e66164cca136cf5f003b7bf0bfa50e", "oa_license": "CCBY", "oa_url": "https://trialsjournal.biomedcentral.com/track/pdf/10.1186/cvm-2-3-118", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b645d0dff7e66164cca136cf5f003b7bf0bfa50e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
42378119
pes2o/s2orc
v3-fos-license
Internal and Surface-Localized Major Surface Proteases of Leishmania spp. and Their Differential Release from Promastigotes ABSTRACT Major surface protease (MSP), also called GP63, is a virulence factor of Leishmania spp. protozoa. There are three pools of MSP, located either internally within the parasite, anchored to the surface membrane, or released into the extracellular environment. The regulation and biological functions of these MSP pools are unknown. We investigated here the trafficking and extrusion of surface versus internal MSPs. Virulent Leishmania chagasi undergo a growth-associated lengthening in the t1/2 of surface-localized MSP, but this did not occur in the attenuated L5 strain. The release of surface-localized MSP was enhanced in a dose-dependent manner by MβCD, which chelates membrane cholesterol-ergosterol. Furthermore, incubation of promastigotes at 37°C with Matrigel matrix, a soluble basement membrane extract of Engelbreth-Holm-Swarm tumor cells, stimulated the release of internal MSP but not of surface-located MSP. Taken together, these data indicate that MSP subpopulations in distinct cellular locations are released from the parasite under different environmental conditions. We hypothesize that the internal MSP with its lengthy t1/2 does not serve as a pool for promastigote surface MSP in the sand fly vector but that it instead functions as an MSP pool ready for quick release upon inoculation of metacyclic promastigotes into mammals. We present a model in which these different MSP pools are released under distinct life cycle-specific conditions. The digenetic protozoa of Leishmania spp. shuttle between an extracellular promastigote form in the sand fly vector and an intracellular amastigote form in mammalian hosts, including humans. In the sand fly, the avirulent procyclic promastigotes develop to the virulent metacyclic organisms, a process termed metacyclogenesis that can be mimicked by in vitro cultivation of logarithmic-to stationary-phase promastigotes (36). Leishmania causes 1.5 to 2 million new cases of human leishmaniasis annually, with manifestations ranging from selfhealing cutaneous sores to life-threatening visceral leishmaniasis (8,9). Among a few well-characterized Leishmania virulence factors is the major surface protease (MSP), also called GP63. MSP plays several important roles during Leishmania spp. infection of mammals, including (i) enhancing promastigote phagocytosis by macrophages, (ii) facilitating promastigote evasion of complement-mediated lysis, and (iii) promoting amastigote survival in the phagolysosomes of macrophages (see reference 45 for a review). There is also evidence suggesting that in the sand fly MSP plays a role in the early-stage development of promastigotes (16) and contributes to promastigote adhesion in the guts and salivary glands (see reference 37 for a review). MSP is encoded by a family of highly conserved genes organized in a tandem array. MSP genes (MSPs) and homologues have been found in all Leishmania spp. studied to date, as well as in other trypanosomatids, including the monoxenous insect parasite Crithidia and the extracellular mammalian parasite Trypanosoma brucei (11,13,45). The number of MSPs in individual trypanosomatids ranges from seven in L. major, to dozens in L. braziliensis, to hundreds in T. cruzi (12,28,39,41). At least 18 MSPs are present in L. chagasi, the causative protozoan of visceral leishmaniasis in Latin America (33,35). During in vitro promastigote growth of virulent strain L. 
chagasi from the logarithmic to the stationary phase, MSP protein abundance increases 14-fold. Concomitantly, the number of MSP isoforms observed on two-dimensional gel electrophoresis (2-DE) increases from 4 to 11 (46)(47)(48). In the present study we use "MSP" or "MSPs" when referring to properties of all MSP isoforms and "MSP isoforms" when referring to specific MSP isoforms. In addition to detecting surface MSP, we and other groups have independently found that MSP is released into the extracellular medium from Leishmania spp. and other trypanosomatids (7,10,18,26,46). Moreover, a subpopulation of internal MSPs has been detected and appears to be stable for several days (42,47). Collectively, data generated from several laboratories, including our own, have demonstrated the existence of three subpopulations, i.e., surface-localized MSP, internal MSP, and released MSP. We hypothesize that these three MSP subpopulations are separately trafficked through the cell to interact with the environment, and that internal MSP serves as a pool ready for rapid release after inoculation of metacyclic promastigotes into mammalian skin. We previously showed that the half-life (t 1/2 ) of surface-localized 63-kDa MSP in virulent L. chagasi increases 75% during promastigote growth from the logarithmic to the stationary phase (47). In the present study, we demonstrate that this growth-associated regulation of surface-local-ized MSP t 1/2 diminished in the attenuated L5 L. chagasi strain. Furthermore, we report that the membrane lipid disruption reagent methyl-␤-cyclodextrin (M␤CD) enhanced the release of surface-localized MSP into the extracellular medium, whereas the internal MSP was released only after environmental exposure to an in vitro extracellular matrix modeling basement membrane, but only at the elevated temperature characteristic of a mammalian host. These data suggest that the different MSP pools are regulated independently and play distinct functions during the life cycle of Leishmania spp. A model illustrating the potential relevance of these findings during the parasite life cycle is presented. Parasites. A Brazilian strain of L. chagasi (MHOM/BR/00/1669) was continuously passaged, by intracardiac injection of amastigotes, in golden hamsters to maintain its virulence. Amastigotes were isolated from the spleens of infected hamsters and transformed to promastigotes at 26°C, in hemoflagellate-modified minimal essential medium with 10% heat-inactivated fetal calf serum (HOMEM; reagents from Gibco, Rockville, MO). Virulent promastigotes were passaged weekly in HOMEM and used within five passages. The attenuated L5 strain of L. chagasi has been continuously cultured in vitro in HOMEM for over 9 years (43). Strain L5 differs from the virulent strain in several respects, including (i) lessabundant MSP and the expression of only MSPL genes (6,43,48), (ii) a shorter and simpler lipophosphoglycan (27), and (iii) reduced virulence for rodent models (43). In some experiments, virulent promastigotes were spread on semisolid M199 agar plates to obtain clonal cells (19). A total of 124 clones were established in two independent experiments. Promastigote cultures were started at a cell density of 10 6 cell/ml at day 0 of cultivation. Logarithmic-and stationaryphase promastigotes were collected between days 2 and 4 and days 6 and 9, respectively, with phases defined according to cell density and morphology as previously described (49). Metabolic labeling and surface biotinylation. 
These procedures were conducted by using previously published protocols (47). Briefly, the promastigotes were pulsed in Hanks balanced salt solution (HBSS; Gibco) with Promix for 0.5 h, followed by surface biotinylation for an additional 0.5 h in sulfo-NHS-biotin. Samples were taken between 0 and 72 h of "chase" in serum-free, bovine serum albumin-free medium. Both cells and cell-free spent medium were collected. Newly synthesized MSP, localized either on the cell surface or intracellularly, was isolated by streptavidin-affinity purification and immunoprecipitated from the streptavidin-cleared fraction, respectively, and detected by autoradiography as previously described (47,48). The efficiency of pull-down via biotin-streptavidin was routinely monitored by peroxidase-conjugated ExtrAvidin (Sigma) and ECL Western blotting detection reagents (Amersham). In contrast to the streptavidin pull-down fractions, the streptavidin-cleared fractions exhibited no detectable signals. MβCD treatment of promastigotes. Promastigotes were washed twice by centrifugation in HBSS and incubated for 48 h at 2 × 10^7 cells/ml in freshly prepared MβCD in RPMI 1640 (Gibco) ranging in concentration from 0 to 15 mM. All conditions were done in triplicate. To monitor cell viability, MβCD-treated or control (0 mM MβCD) promastigotes were metabolically labeled in HBSS with Promix for 0.5 h, and triplicate samples were assayed by a liquid scintillation analyzer for incorporation of radioisotope after total proteins were precipitated with trichloroacetic acid as described previously (3). The relative 35S-labeled amino acid incorporation in the presence of MβCD was compared to that in control (untreated) promastigotes. A ratio of 1.0 indicated that MβCD had no effect on promastigote viability. To investigate whether the membrane lipid chelation reagent MβCD affects the release of surface-localized MSP, stationary-phase promastigotes were incubated in either 0 or 15 mM MβCD for 3 h after surface biotinylation. Spent medium was collected and concentrated as previously described (48). Both biotinylated proteins and the internal MSP were analyzed. MSP release into Matrigel matrix. Stationary-phase promastigotes in the first passage after being converted from amastigotes isolated from hamsters were surface biotinylated. Triplicate samples of cells were suspended to a density of 2 × 10^8 cells/ml in ice-cold HBSS (100 μl) plus Matrigel matrix (200 μl). Cultures were then incubated at either room temperature or 37°C for 1 to 3 h. Matrigel matrix solidified under these conditions. The mixtures were transferred to 4°C overnight to liquefy the matrix, and promastigote cells were separated from the liquefied Matrigel matrix by centrifugation. Biotinylated proteins and the nonbiotinylated MSP were collected from both the whole cellular lysate and the liquefied Matrigel matrix. Electrophoresis and protein detection. Sodium dodecyl sulfate (SDS)-polyacrylamide gel electrophoresis and Western blotting were conducted as described previously (46). Autoradiograms were obtained by exposing X-ray MR films (Kodak, Rochester, NY). Samples analyzed by 2-DE were separated in the first dimension by isoelectric focusing (IEF) in Immobiline Drystrips pH 4-7 (Amersham) and in the second dimension according to size in SDS-7.5% polyacrylamide gels (48).
In the case of cellular lysates of individual clones, the samples were separated in Immobiline Dryplates pH 4-7 (Amersham), after which proteins were transferred to a nitrocellulose membrane and MSP was detected by Western blotting. isoelectric points (pI) 5.8 and 6.7 were detected, along with an additional three bands between pI 4.8 and 5.2. Each of these bands formed a spot in the second dimension, as shown in immunoblots of 24 representative clones ( Fig. 1). As we anticipated but thought it important to investigate, we did not detect clonal variation in MSP expression during growth in vitro. MSP expression in clonal Surface-localized MSP isoforms are differently regulated in attenuated and virulent strains. The MSP proteins of L. chagasi promastigotes are found in three cellular locations, i.e., internal, surface, and released into the environment (46,47). The cellular distribution of MSP proteins changes during in vitro growth. We previously showed that an increase in total promastigote MSP content during "in vitro metacyclogenesis" is associated with a decrease in the rate of MSP release into the environment. To study the population of MSP released we contrasted MSP synthesis in, and loss from, virulent promastigotes compared to a nonvirulent attenuated line of parasites (L5) that does not undergo changes in total MSP content during metacyclogenesis (Fig. 2). For the purposes of comparison, we contrasted the 63-kDa MSP, an isoform that is expressed in both promastigote lines. First, the rate of MSP synthesis in both virulent and attenuated lines, in both logarithmic-and stationary-growth phases, was almost identical (Fig. 2B). L5 or virulent L. chagasi promastigotes were metabolically labeled with [ 35 S]methionine, MSPs were immunoprecipitated, and newly synthesized MSPs were detected by autoradiogram. Cytosolic P36, which is constitutively expressed (22,47), was used as a control. The ratio of MSP to P36 remained constant in the logarithmic-and stationary-phase promastigotes of both L5 and virulent strains. Second, the t 1/2 of cellular surface MSP was longer in stationary virulent promastigotes than in logarithmic virulent promastigotes, coinciding with its increased abundance in stationary virulent promastigotes. In contrast, surface MSP was lost at a uniform rate in the growth phases of attenuated L5 parasites ( Fig. 2A). Both virulent and attenuated promastigotes were surface labeled by biotinylation during the logarithmic or stationary phase of growth and chased over the next 72 h. Immunoblotting was used to confirm that the indicated bands were indeed MSPs (not shown). The surface MSP of virulent L. chagasi promastigotes had a shorter t 1/2 (52 h) when labeled in logarithmic growth phase than MSP proteins labeled in stationary phase (90 h; Fig. 2A). Whether this is due to the predominant MSP isoforms synthesized in the different growth stages or to other factors inherent in the growth phase of parasites cannot be determined (46,47). In contrast, the t 1/2 of surface-localized 63-kDa MSP in the attenuated L5 strain of L. chagasi promastigotes remained unchanged during growth from the logarithmic (51 h) to the stationary (52 h) phase. Indeed, the MSP t 1/2 was almost identical to that of logarithmic-growth-phase virulent strain promastigotes (52 h) ( Fig. 2A). In contrast to surface MSP, the internal MSP of both virulent and attenuated L5 strains remained stable throughout promastigote growth ( Fig. 2A) (47). 
Third, the mechanism differentiating the t1/2 of surface MSP in virulent as opposed to attenuated L5 promastigotes was a difference in the rate of MSP shedding into the medium. Surface MSP was labeled by biotinylation in both L5 and virulent strain parasites. Parasites were then incubated in fresh medium, and surface-biotinylated MSPs were detected by Western blotting of the streptavidin bead-purified fraction of the spent media after 48 h of incubation. A minimum of fourfold more MSP was found in the spent media of both the logarithmic and the stationary phases of the L5 strain and the logarithmic phase of the virulent strain than in the stationary phase of the virulent strain (Fig. 2C). Collectively, these data indicate that the increase in surface-localized MSP in the stationary-phase virulent promastigotes is associated with a decrease in the rate of shedding into the environment compared to logarithmic-phase virulent promastigotes. There is no similar growth phase-dependent regulation of MSP in the L5 attenuated strain of L. chagasi. MβCD enhances the release of the surface-localized MSP isoforms. The unique retention of surface MSP by virulent stationary-phase promastigotes could be due to its association with surface lipid-containing membrane domains. MβCD depletes lipid rafts from the plasma membranes of a variety of mammalian cells by chelating and transiently removing membrane cholesterol (14,17,21,24,31,40). Based on the hypothesis that differential association of MSP with membrane lipids could account for its release by logarithmic promastigotes and retention by stationary promastigotes, we reasoned that membrane lipid disruption with MβCD could enhance MSP release from the Leishmania membrane. In replicate experiments, virulent L. chagasi promastigotes were treated with 0, 5, 10, or 15 mM MβCD for 48 h. A dose-dependent augmented release of MSP into the extracellular medium was observed. Specifically, control cells (0 mM MβCD) released ca. 35% of MSP, whereas cells in 15 mM MβCD released ~80% of MSP into the extracellular medium (Fig. 3A and B). To eliminate the possibility that the enhanced MSP release was due to a toxic effect of MβCD on promastigotes, the rate of promastigote protein synthesis was measured in the absence or presence of MβCD under the experimental conditions. Comparable levels of 35S-radioisotope were incorporated into newly synthesized proteins of untreated and MβCD-treated cells (Fig. 3C), indicating that MβCD treatment of promastigotes under these conditions is not detrimental to the cells. Furthermore, the growth curves of untreated and MβCD-treated cells in HOMEM were similar (data not shown). Thus, perturbing the plasma membrane lipid-containing domains of stationary-phase promastigotes accelerates MSP release into the extracellular medium, although it does not appear to harm the promastigotes in culture. To test the hypothesis that disruption of membrane lipid-containing domains with MβCD only promotes release of surface-localized MSP, stationary-phase promastigotes were treated with 15 mM MβCD for 3 h after surface biotinylation. Control cells were treated identically but received no MβCD. Spent medium was collected, from which surface-biotinylated proteins were isolated by streptavidin affinity purification. Internal MSP was purified by immunoprecipitation from the streptavidin-cleared fraction of the spent medium. Immunoblots were used to assay for the presence of MSP. As shown in Fig.
3D, nonbiotinylated internal MSP was not detectable in the extracellular medium. In contrast, biotinylated surface MSP was 34.6% ± 12.8% (n = 3) more abundant in the spent media of the MβCD-treated cells than in controls. Furthermore, no cytoskeletal β-tubulin or cytosolic P36 markers were detected by immunoblotting in the cleared fraction of the same spent media after biotin-streptavidin affinity purification and MSP immunoprecipitation (Fig. 3D and data not shown), which eliminates the possibility that the MSP release was due to cell lysis. These data indicate that MβCD enhances the release of only surface-localized MSP, a result consistent with the possibility that MSP stabilization in the surface membrane requires an association with cholesterol/ergosterol-containing lipid domains. Release of internal MSP isoforms is stimulated by the Matrigel matrix, specifically at 37°C. Although surface MSP can be artificially released by disrupting membrane lipid domains, the natural evolution of stationary promastigotes in the sand fly is to a cellular state that retains surface MSP. Metacyclic promastigotes are inoculated by sand flies into mammalian tissues, whereupon they initially encounter an elevated temperature and components of the extracellular mammalian environment. We investigated whether MSP would be released under conditions that mimic this in vivo setting. First, we tested whether the highest mammalian body temperature encountered by the parasite, i.e., 37°C, would stimulate internal MSP release. Stationary-phase promastigotes were metabolically labeled, surface biotinylated, and subsequently incubated at 37°C for 24 h to test for release of surface versus internal MSP. Similarly treated control promastigotes were incubated at room temperature. Surface and internal MSPs were immunoprecipitated from the streptavidin bead-enriched or -cleared cellular lysates and detected by autoradiography. Under these conditions, there was no detectable change in internal versus surface MSP in the promastigotes after 24 h at the higher temperature (data not shown). These data suggest that a temperature increase to 37°C is by itself insufficient to stimulate internal MSP release. We then incubated stationary-phase promastigotes in the Matrigel matrix at 37°C to test whether this combination would stimulate the release of internal MSP. Matrigel matrix is a soluble basement membrane extract of Engelbreth-Holm-Swarm tumor cells, which has been used to study the metastasis of cancer cells (29,34). One prominent feature of this matrix is that it is a liquid at 4°C but gels at room temperature and above, forming a reconstituted basement membrane. Consequently, when promastigotes are incubated in the matrix at 37°C, this setting experimentally mimics the site of sand fly inoculation into a mammalian host. Stationary-phase promastigotes were surface biotinylated prior to incubation in either Matrigel matrix or HBSS. Promastigotes incubated in HBSS released surface MSP but little or no internal MSP into the extracellular medium at room temperature. Neither MSP form, surface or internal, was substantially released at 37°C (Fig. 4A and B). In contrast, incubation of promastigotes in the Matrigel matrix for 3 h at 37°C stimulated release of mostly internal MSP (Fig. 4A and B). This effect was enhanced by a longer (3 versus 1 h) incubation. Strikingly, the effect of Matrigel on the release of internal MSP was significantly lower at room temperature, whereas more surface MSP was released under these conditions (Fig. 4A and B).
Furthermore, the level of total internal MSP was significantly higher in parasites incubated in Matrigel compared to HBSS, although there was no change in internal MSP when parasites were incubated at room temperature versus 37°C (Fig. 4C). Hence, it is very unlikely that the specific release of internal MSP stimulated by the combination of Matrigel matrix and 37°C was due to leakiness of intracellular content from damaged promastigotes, even though we cannot formally eliminate this possibility at this time. Overall, these results lead us to conclude that surface MSP is released at room temperature and that this release is inhibited at 37°C, whereas internal MSP is released in response to the presence of Matrigel matrix, specifically at 37°C (Fig. 4).

DISCUSSION

MSPs are among the most abundant cellular proteins in promastigotes of all Leishmania spp. studied to date. Indeed, in L. mexicana, MSPs account for 1 and 0.1% of total proteins in promastigotes and amastigotes, respectively (1). Promastigote cell-associated MSP is predominantly attached to the cell surface by glycosylphosphatidylinositol anchors (4,5). However, our laboratory and others have observed that as much as one-third of the cell-associated MSP is located intracellularly, as determined by a combination of surface biotinylation, immunoelectron microscopy, and cytofluorometry (42,47). Furthermore, the internal MSP in L. chagasi is so stable that no reduction in abundance is detected for up to 6 days using pulse-chase analysis (47). We hypothesized that the surface-localized and internal MSPs are regulated separately via different mechanisms. Furthermore, the role and origin of the MSP released by promastigotes into the extracellular medium have as yet remained uncharacterized. In the present study we showed, by using MβCD, that the decreased release of surface MSP by the virulent stationary promastigotes is associated with the content of membrane lipids, since MβCD-mediated removal of cholesterol/ergosterol specifically enhanced the release of surface-localized MSP into the extracellular medium. This likely reflects changes in the promastigote membrane during metacyclogenesis, in that a lipid-rich membrane retaining MSP in metacyclic parasites could promote retention of high surface levels of this virulence factor. In contrast, exposure to conditions mimicking mammalian tissue, with Matrigel at 37°C, stimulated the release of internal, but not surface, MSP. These data demonstrate for the first time that the surface-localized and internal MSPs are trafficked out of the promastigote cell in response to different external stimuli. Phenotypic variation has been found in isoforms of a 235-kDa rhoptry protein between clones of Plasmodium yoelii parasites. This protein is encoded by a multigene family of ~50 genes and may be involved in the selection of red blood cells for invasion by merozoites (2,15,20,30,38). Because there are at least 11 MSP isoforms in stationary-phase virulent L. chagasi promastigotes (47,48), we hypothesized that similar variation between L. chagasi parasites could yield clonal isolates that express one or a few MSPs. However, we were not able to document clonal variation in MSP expression by cells expanded from individual clones by using 2-DE immunoblotting. This does not prove that all parasite clones express all MSP isoforms or that individual parasite clones cannot express only one or a few MSP isoforms in vivo. Nonetheless, within the limits of our ability to detect them, we tentatively conclude that at least some L.
chagasi parasites are able to express the majority of MSP isoforms when derived from a single cloned cell. The three MSP classes of mRNAs (MSPL, MSPS, and MSPC) in L. chagasi are posttranscriptionally regulated. In the case of MSPL mRNA this regulation is known to occur specifically at the level of mRNA stability (6,32,44). Regarding MSP regulation at the protein level, we showed herein that the measurable rate of MSP synthesis was very similar throughout promastigote growth in vitro from logarithmic to stationary phase, a finding consistent with our earlier report (47). Therefore, the growth-associated 14-fold increase in the abundance of cell-associated MSP must be posttranslationally regulated. An increase in protein stability, associated with decreased MSP shedding, accounts for a fivefold increase (47). We show here that the internal pool of MSP is extremely stable throughout growth of both attenuated L5 and virulent parasite strains. Consequently, internal MSP appears not to be affected by the growth-associated regulation of MSP stability. We also demonstrate that the t 1/2 of surface-localized 63-kDa MSP in the attenuated strain is similar to that of the lower MSP-expressing, logarithmic-phase promastigotes of the virulent strain, regardless of the growth phase (Fig. 2). One plausible explanation for this difference between attenuated and virulent strains during growth is the different rates of MSP shedding. We documented that the rate of MSP shedding by stationary-phase virulent promastigotes is slower than that of logarithmic-phase virulent promastigotes and that MSP shedding by L5 attenuated promastigotes is more rapid than virulent L. chagasi in all growth phases (Fig. 2C). The biochemical mechanisms by which Leishmania spp. promastigotes regulate MSP release are not well understood. Released MSPs have electrophoretic mobilities similar to those of their cell-associated counterparts (46). At least some surfacelocalized L. amazonensis MSP is released through autoproteolytic activity, as shown by site-specific mutation and inhibition by a zinc chelator (26). We previously determined that released MSP does not bind to a antibody against the cross-reactive determinant, suggesting it is not released by a phosphatidylinositol-specific phospholipase C (46) similar to the released MSP of L. amazonensis. Released L. amazonensis MSP does not contain ethanolamine, suggesting it lacks a glycosylphosphatidylinositol membrane anchor (26). The data generated here by using lipid chelation suggests that the decreased release of MSP from stationary virulent promastigotes is due to remodeling of the surface membrane such that MSP is retained in association with lipids. We cannot rule out the additional possibility that there may also be recycling and degradation of MSP as a means of decreasing cellular levels of MSP, but this has yet to be tested. In addition to the above evidence that MSP release by virulent promastigotes requires a specific membrane lipid composition, we approached the mechanisms by which Leishmania spp. promastigotes release MSP using a model of in vivo conditions. The Matrigel matrix contains laminin, collagen IV, entacin, heparin sulfate proteoglycan, growth factors, collage-nases, and other undefined components. We demonstrated here that a combination of this matrix and mammalian body temperature is sufficient to stimulate internal MSP release. We suggest that the mechanism of MSP release in mammalian tissue differs from release in promastigote axenic culture. 
Whether this reflects differences between MSP trafficking in the sand fly versus the mammalian hosts is not clear. The goal of the present study was to address how the three MSP subpopulations (surface, internal, and released) are regulated during metacyclogenesis and in response to the mammalian host environment. A model for MSP regulation in the different promastigote environments is illustrated in Fig. 5. In this model MSP is abundantly released by the dividing, procyclic promastigotes in the sand fly gut, as simulated by the logarithmic growth of L. chagasi in culture. This released MSP might be related to the nutrient requirements of Leishmania in the insect gut environment, where residual mammalian blood from the sand fly meal is a main source of nutrients. Indeed, it has been shown that downregulation of MSP in L. amazonensis reduces the parasites' early development in sand flies (16). As procyclic promastigotes develop to metacyclic promastigotes, the rate of released surface-localized MSP decreases and the abundance of surface-localized MSP increases (47). Our data suggest that this increase is due to association of metacyclic MSP with lipid-containing membrane domains. Internal MSP is not released during metacyclogenesis. However, after inoculation into mammalian subcutaneous tissue by a sand fly vector, metacyclic promastigotes encounter a temperature increase, host extracellular matrix, and innate immune mechanisms such as complement, antimicrobial peptides and phagocytotic cells. In response to these stimuli, promastigotes could release internal MSP into mammalian tissue. It is thus logical to consider the possibility that the surface-localized MSP plays a role in the promastigotes' evasion of complementmediated killing and their phagocytosis and/or internalization by macrophages and other cells. Internal MSP, on the other hand, may play a role in the degradation of extracellular matrix components such as collagen IV and fibronectin, as suggested in a prior report on an L. amazonensis (25). As such, it could facilitate promastigote migration toward cells such as macrophages, dendritic cells, and fibroblasts that are favorable for parasite entry and long-term survival. Thus, it is likely that the many isoforms of MSP protease facilitate parasite survival through different mechanisms in the diverse host and vector environments encountered by the parasite.
2014-10-01T00:00:00.000Z
2007-08-10T00:00:00.000
{ "year": 2007, "sha1": "2fef910b691ea6c8b7ebc13f27e4d1fb51d08569", "oa_license": null, "oa_url": "https://ec.asm.org/content/6/10/1905.full.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d4bd16b0fb9cedb7fb11dd0985b77b63afa551ce", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
214144748
pes2o/s2orc
v3-fos-license
Educated commissioning and operation of a complex nearly zero energy office building with the help of dynamic thermal HVAC-simulations - a best practice report from the Austrian postal service headquarter Post am Rochus This paper presents lessons learned from practice focusing on planning, commissioning and first year of operation of the Austrian Post AGs’ new headquarter “Post am Rochus”, a nearly zero energy building with 49.300 m2 gross floor area. The execution phase was supported by a research project aiming for a significant reduction of the commissioning phase. Based on the positive experience of this Austrian lighthouse project, general conclusions are drawn focusing on which activities and recommendations can be applied to similar projects, and which parts need further improvement or adaption. The question of how and when building energy performance simulations of complex HVAC-systems can be used to increase the energy efficiency, room comfort and operational stability of modern office buildings is investigated from a practical perspective. The challenge, and an outlook of how to reduce the information gap between the various disciplines involved in a complex multi-stakeholder process and the conclusions from the building owner are discussed and summarized in the conclusions. Motivation and Objectives Current changes in weather and climate again fuel the discussion on future building standards like net or nearly zero energy (nZEB/NZEB). On the one hand harsher weather conditions lead to an increased climate awareness, and in return to political/regulatory targets regarding energy efficiency. On the other hand the changes also affect the capabilities and expectations of modern office buildings when it comes to (thermal) comfort. In order to master these challenges with technical means, the processes of planning, construction, commissioning and operation play an enabling role and thus require a high quality themselves. In reality, buildings often operate in sub-optimal conditions due to various technical and non-technical barriers resulting in inefficiencies and discomfort, in other words "dissatisfaction" for domestic buildings and "loss of productivity" (mainly) for commercial buildings. One reason is the common practice to design and size the system for peak-load operation (which is often legally required to be granted permission to build), even though this scenario might never occur during the whole building lifecycle. In order to fully exploit the high efficiencies of the different components (compression chillers, inverter driven heat pumps, free cooling loops, etc.) it is necessary to thoroughly analyse, design and document all relevant operational modes (part-load behaviour, transition period behaviour, etc.). Another barrier to overcome is the often insufficient commissioning phase, which should primarily guarantee/check the physical presence and correct operation of all sensors and actuators but also test the correct implementation of the various control strategies within the building automation system. The aim of this publication is to report from the Austrian project PEAR providing recommendations and improvements for future similarly ambitious building construction projects with a specific focus on extending the usage of simulation models/ results especially during the first years of a buildings' life cycle. The evaluation of monitoring data as well as the innovative "controller in the loop" methodology are not part of this report. 
A summary of the complete activities can be found at the Austrian research and development agencies' website 1 2. Building and process description 2.1. The "Post am Rochus" building The aim of the owner was to obtain a modern comfortable building with very high standards regarding sustainability and energy efficiency not only in the operational phase but also during the execution phase and commissioning. Therefore, the opportunity to frame the building construction project with a research project was highly appreciated and also supported. The goal was to shorten the commissioning phase by a very detailed study and optimization of control strategies of the HVAC systems along with the involved planning, construction and operational project team of the "Post am Rochus" project (PEAR). The results should not only be documented in a clear manner, moreover the developed strategies should be tested with the real hardware well in advance and thus avoiding/reducing time-and cost intensive bug-fixing. The contractor was actively involved in the research project as well, mainly contributing to the control strategy implementation. The following chapter briefly describes the control strategy development process. Regarding the energy and HVAC system, notable highlights of this project are: • three highly efficient compression chillers with 1 MW capacity each, • the option for heat recovery during cooling season (reheat-coil during dehumidification), • the utilization of a 320m 3 sprinkler water basin as cold temperature storage for free-cooling (via recoolers) • concrete core activation for cooling, and • major air handling units are equipped with latent and sensible recovery (enthalpy rotor with bypass). The heating energy for concrete core activation and fan coils is supplied by the local district heating grid. Service hot water is prepared decentralised by electric heaters. The planning process during the execution phase The ambitious aim was to drastically shorten the commissioning phase. In order to achieve this, all relevant parties (HVAC planner, measurement and control engineers, owner and operator) were involved from the very beginning, which in this case was almost two years prior to commissioning. This should ensure that the commissioning happens in an "educated" manner, with a very high degree of system understanding, minimal "confusion" and a well-documented description of functionalities, operational phases and control rules/set points as well as specific performance targets. In order to have a quantifiable basis, building performance energy simulations (BEPS) have been used in a broad manner extending their application to the domains of control engineering and building automation. System integration and control strategy development Within the project nine different operational modes have been identified, see Figure 2. Cost-optimized control strategies have been defined for each mode by using parameter variations of i.e. set points, hysteresis, time schedules, cross-dependency curves (i.e. power cascades), etc. The aim was to describe all modes in a way that the control engineers and the operators/facility managers obtain a practical guidance for programming, operation and bug-fixing. Furthermore, it should ensure that all involved parties refer to the same data basis, i.e. a "functional manual" including controller set points, times schedules etc. 
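To make the idea of a "functional manual" more concrete, the following minimal sketch illustrates the kind of rule-based mode selection such a document could describe for the cooling plant (free cooling via the recoolers, discharging the sprinkler-basin cold storage, staged operation of the three 1 MW chillers). All thresholds, mode names and the hysteresis value are invented for illustration only and are not the set points actually used in the "Post am Rochus" building automation system.

```python
# Illustrative only: a rule-based mode selector of the kind a "functional
# manual" might document. All numbers are invented placeholders, not the
# set points used in the Post am Rochus building automation system.

from dataclasses import dataclass

@dataclass
class PlantState:
    t_outdoor: float        # outdoor air temperature [degC]
    cooling_load_kw: float  # current high + low temperature cooling load [kW]
    basin_temp: float       # sprinkler basin (cold storage) temperature [degC]
    current_mode: str       # mode active in the previous time step

def select_cooling_mode(s: PlantState,
                        free_cooling_limit: float = 14.0,   # assumed
                        basin_usable_below: float = 12.0,   # assumed
                        chiller_stage_kw: float = 1000.0,   # 1 MW per chiller
                        hysteresis_k: float = 1.0) -> str:
    """Return the operational mode for the next time step.

    The order of the checks encodes the intended priority:
    1. free cooling via the recoolers whenever the outdoor air allows it,
    2. discharging the sprinkler-basin cold storage,
    3. staged operation of the compression chillers.
    """
    # Hysteresis: once in free cooling, stay there until the outdoor
    # temperature clearly exceeds the limit, to avoid mode cycling.
    limit = free_cooling_limit + (hysteresis_k if s.current_mode == "free_cooling" else 0.0)
    if s.t_outdoor < limit:
        return "free_cooling"
    if s.basin_temp < basin_usable_below:
        return "basin_discharge"
    # Stage as many 1 MW chillers as the load requires (max. three installed).
    n_chillers = min(3, max(1, int(-(-s.cooling_load_kw // chiller_stage_kw))))
    return f"chillers_x{n_chillers}"

# Example: a warm afternoon with a 1.6 MW load and a depleted basin
print(select_cooling_mode(PlantState(26.0, 1600.0, 16.0, "basin_discharge")))
# -> "chillers_x2"
```

The value of writing the modes down in such an explicit, executable form is that control engineers, planners and facility managers can check the intended behaviour against the same unambiguous reference.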
Once the control engineers finalized the control strategies, certain parts of their implementation in the building automation software have been tested using the "controller-in-the-loop" approach. This allows to test the control strategies on the actual controller hardware to be implemented in the building. Functionality-checks were performed for all operational modes prior to their implementation in the building and thus independently of the actual weather during commissioning, which is an important benefit, as cooling systems can hardly be fully tested in heating season. This approach reduces commissioning time, since bugs could be found before the soft-and hardware was finally implemented in the "Post am Rochus" building, further details see e.g. [1] Figure 2. Overview of the major thermal system components and operational modes. Simulations For BEPS the software TRNSYS 17.02 has been chosen, as it was used by all involved simulation parties and thus the most straightforward choice. Due to the high complexity of the simulation tasks and for unhindered workflows in the different offices the overall HVAC model has been split into three independently functioning decoupled sub-models (the following investigations are based on the second part model): 1. Multizone building model Optimization of the envelope, control strategy development of concrete core activation, calculation of heating and cooling loads (input for thermal system) 2. Thermal (cooling) system Control strategy development and optimization of all operational modes (see Figure 2) CISBAT Ventilation system Control strategy development and optimization for various operational modes of the air handling units The results of the sub-models have been validated individually and have been taken as input for the thermal model. This model used the load curves for high temperature cooling (concrete core activation) and low temperature cooling (fan-and cooling coils) that have been calculated in the building and ventilation model as an input. The load curves of the supermarket and server rooms have been estimated according to standards (SIA2024 2 ). Component model validation As a first step within the model generation process the tendered main components have been modelled and validated against manufacturers' data sheets. An example of the chiller 3 and the recooler 4 is shown in Figure 3. The four standard storage tanks (3m 3 capacity each) have been implemented with multilayer stratified storage models (TRNSYS Type 4b). The two sprinkler basins with a volume of 170m 3 each (dimensions: 6 x 6 x 5 m) have been modelled with the TRNSYS Type 531 5 . The simulations have been used for two purposes: on the one hand to analyse typical/potential failures that the facility management reported from similar components/systems with the specific system, on the other hand for further optimization of the control strategies. Subsystem models The individually validated component models have been connected to subsystem-models with the purpose to optimize the layout (hydraulic connections and temperature levels of each individual component) of the overall HVAC model. Therefore parameter variations regarding temperature levels of the high and low temperature loop, cooling curves for fan coils, server rooms and concrete core activation as well as operating/charging conditions for storages, recoolers and all three compression chillers have been conducted in order to find the cost-optimal configuration. 
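The parameter variations mentioned above can be thought of as a systematic sweep over candidate set points, with each candidate evaluated by a (sub)system simulation and ranked by operating cost. The sketch below shows only this pattern: the simulate_annual_cost() function is a stand-in for an actual TRNSYS sub-model run, and the parameter ranges and electricity price are assumptions made here for illustration.

```python
# Illustrative sketch of a set-point sweep for the thermal (cooling) sub-model.
# In the project the objective came from TRNSYS sub-model runs; here a dummy
# cost function stands in for the simulation, and all ranges/prices are invented.

from itertools import product

def simulate_annual_cost(supply_temp_high: float, supply_temp_low: float,
                         basin_charge_start: float) -> float:
    """Stand-in for a simulation run: returns annual electricity cost [EUR].

    A real evaluation would run the validated sub-system model with these
    set points and integrate chiller, pump and recooler electricity use.
    """
    elec_price = 0.20  # EUR/kWh, assumed
    # Dummy behaviour: higher supply temperatures -> better chiller efficiency,
    # earlier basin charging -> more free cooling.
    kwh = 1.2e6 - 2.0e4 * supply_temp_high - 1.0e4 * supply_temp_low \
          - 5.0e3 * (20.0 - basin_charge_start)
    return max(kwh, 0.0) * elec_price

candidates = product([16.0, 17.0, 18.0],    # high-temperature loop supply [degC]
                     [6.0, 7.0, 8.0],       # low-temperature loop supply [degC]
                     [10.0, 12.0, 14.0])    # outdoor temp. to start basin charging

best = min(candidates, key=lambda c: simulate_annual_cost(*c))
print("cost-optimal set points (illustrative):", best)
```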
To quantify the influence of major assumptions like the cooling demand for the server rooms sensitivity analysis have been carried out. Conclusions and recommendations 4.1. The commissioning phase One of the main goals was to drastically shorten the commissioning phase, however due to delays in the construction process the trial-operation could not take place as planned, as the building has been occupied rapidly. Based on the experiences of the project, a successful commissioning phase is supported by the following factors: • Functional quality management on component level • Correct functionality of the whole hydraulic system including extensive hydraulic balancing • Provision of a clear documentation for the commissioning phase of all major components (building automation, energy efficiency/performance targets, set points, etc.) • Definition of suitable overall control strategies and operational modes for the whole system. In order to guarantee an efficient operation of the system it is further recommended to install a quality management scheme for the trial operation phase. Therefore, major operational modes and procedural steps to be tested must be clearly defined together with the control engineers. Owners' perspective The Austrian Post AG was actively involved in the research project contributing as well as demanding based on her experience within various professional units like the internal technical facility management and the group real estates. The overall feedback was very positive as costly problems/failures could be identified and avoided in an early stage. Maintaining an interdisciplinary approach throughout the construction and commissioning phase and actively integrating the research results into practice were major pillars to obtain efficient building operation. The owner's quality management needs to involve as well as demand and promote the technical facility management. The integral process and the results from simulations, discussions and agreed-upon targets are extremely helpful and a basis for that. The periodic workshops with various experts and professionals build the communication basis for the whole process. These workshops definitely need to be continued in a higher frequency during the first weeks of operation, as the overall system knowledge that has been built up during the previous years should be exploited to the fullest extent possible. Doing so potential faults can be identified rapidly and improvements or necessary adjustments can be discussed immediately in an "educated" manner. Even though it's often a time critical process (which is frequently skipped or delayed in reality), enough time should be reserved for the acceptance, handover and trial operation of the whole building technology and automation system (including all sensors, actuators and data handling/visualization). Those processes need to be conducted and documented in a well-structured and coordinated manner. Therefore, special attention must be put on the legal/contractual side already in the early planning stage, as the majority of all the efforts during planning and construction culminate in this crucial point. Research and general The project has proven to be very successful, still there is a long way to go when it comes to establishing integral planning practices and "educated commissioning" procedures in highly complex nZEB. 
Extending the use of dynamic thermal simulations into the building automation and control domain should become common practice, as it plays an enabling role in the successful realisation of high-quality, comfortable nZEBs. One barrier is the lack of information in the early planning phases during execution, as some components are fixed at quite short notice. This results in a high number of assumptions, all of which may affect the quality and reliability of the obtained results. The time needed to set up such complex models, including bug-fixing, plausibility checks and adequate visualization of the results, can also be problematic, as decisions are often time-critical and time means cost, which is always a constraint in building construction processes.
2019-11-22T00:55:07.777Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "56fd95c04eef8e54768e29c3a02737181b2223c2", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1343/1/012137", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "f4695144f4124033edcdd4c89b8a4327ffe7ff62", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Engineering" ] }
59491093
pes2o/s2orc
v3-fos-license
Pair correlation function of an inhomogeneous interacting Bose-Einstein condensate We calculate the pair correlation function of an interacting Bose gas in a harmonic trap directly via Path Integral Quantum Monte Carlo simulation for various temperatures and compare the numerical result with simple approximative treatments. Around the critical temperature of Bose-Einstein condensation, a description based on the Hartree-Fock approximation is found to be accurate. At low temperatures the Hartree-Fock approach fails and we use a local density approximation based on the Bogoliubov description for a homogeneous gas. This approximation agrees with the simulation results at low temperatures, where the contribution of the phonon-like modes affects the long range behavior of the correlation function. Further we discuss the relation between the pair correlation and quantities measured in recent experiments. I. INTRODUCTION One of the appealing features of the experimental achievement of Bose-Einstein condensation in dilute vapors [1][2][3], is the demonstration of first order coherence of matter waves [4]. The interference pattern of this experiment agrees with the theoretical calculation [5], which reveals that the underlying theoretical concept of off-diagonal long range order due to a macroscopically occupied quantum state is justified [6]. Additional experiments have explored certain aspects of second and third order coherence of a trapped Bose gas [7][8][9]. Here we study the density-density correlation function which is related to second order coherence. With the knowledge of this pair correlation function, the total interaction energy can be calculated. In [7] the release energy of the atoms was measured after switching off the magnetic trap. In the Thomas Fermi regime at zero temperature the initial kinetic energy can be neglected and the release energy is dominated by the interaction energy. By comparison with the usual mean field interaction energy using a contact potential, it was concluded that the release energy is mainly proportional to the pair correlation function at vanishing relative distance. Strictly speaking this statement cannot be correct as for interactions with a repulsive hard core the pair correlation function must vanish at zero distance. To give a precise meaning to this statement one needs to access the whole correlation function. In this paper we consider in detail the spatial structure of the correlation function of an interacting trapped Bose gas. The Fourier transform of this function is directly related to the static structure factor which can be probed by off-resonant light scattering. The tendency of bosonic atoms to cluster together causes atom-bunching for an ideal gas above the condensation temperature, for the atoms separated by less than the thermal de-Broglie wavelength [10]. For the condensate atoms, this bunching vanishes, since they all occupy the same quantum state [11,12]. However, for a gas with strong repulsive interatomic interaction, it is impossible to find two atoms at exactly the same place, and hence the pair correlation function must vanish at very short distances. This mutual repulsion can significantly reduce the amount of bosonic bunching at temperatures around the transition temperature [13]. At much lower temperature, the presence of the condensate changes the excitation spectrum as compared to the noninteracting case. 
It is known that in a homogeneous Bose gas the modes of the phonons give rise to a modification of the long range behavior of the correlation function [14]. Using path integral quantum Monte Carlo simulations all equilibrium properties of Bose gases can be directly computed without any essential approximation [15]. It has been shown that this calculation can be performed directly for the particle numbers and temperatures of experimental interest [16]. Here, we use this approach to calculate the pair correlation function for various temperatures and compare our results with simple approximate treatments. Near the critical temperature our data are quantitatively well explained by an improved semiclassical Hartree-Fock theory, where the full short range behavior is taken into account. At low temperature this single-particle approximation fails since the low lying energy modes become important and they are not correctly described by the Hartree-Fock treatment. In the Bogoliubov approach these modes are phononlike and change the behavior of the correlation function. Adapting the homogeneous Bogoliubov solution locally to the inhomogeneous trap case we find an excellent agreement with the Monte Carlo simulation results at low temperature. II. HAMILTONIAN OF THE PROBLEM The Hamiltonian of N interacting particles in an isotropic harmonic trap with frequency ω is given by where V is the interatomic potential, which depends only on the relative distance r ij = | r i − r j | between two particles. This potential in the experiments with alkali atoms has many bound states, so that the Bose-condensed gases are metastable systems rather than systems at thermal equilibrium. To circumvent this theoretical difficulty, we have to replace the true interaction potential by a model potential with no bound states. This model potential is chosen in a way that it has the same low energy binary scattering properties as the true interaction potential. In the considered experiments, the s-wave contribution strongly dominates in a partial wave expansion of the binary scattering problem, so that it is sufficient that the model potential have the same s-wave scattering length a as the true potential. For simplicity we take in the quantum Monte Carlo calculations a pure hard-core potential with diameter a. In the analytical approximations of this paper, we have taken, as commonly done in the literature, the pseudo-potential described in [14], which is a regularized form of the contact potential, gδ( r 1 − r 2 ) ∂ ∂r12 (r 12 ·), with a coupling constant The partition function Z of the system with inverse temperature β = (k B T ) −1 is given as the trace over the (unnormalized) density matrix ̺: over all symmetrized states. Both satisfy the usual convolution equation which we can write in the position representation: Here τ = β/M , where M is an arbitrary integer, R is the 3N-dimensional vector of the particle coordinates R = ( r 1 , r 2 , ..., r N ), P is a permutation of the N labels of the atoms and R P denotes the vector with permuted labels: R P = ( r P (1) , r P (2) , ..., r P (N ) ). Since only density matrices at higher temperature (τ ≪ β) are involved, high temperature approximations of the N -body density matrix can be used. The simplest approximation is the primitive approximation corresponding to exp[τ ( , which neglects the commutator of the operators A and B. It corresponds to a discrete approximation of the Feynman-Kac path integral and gives the correct result in the limit M → ∞ [17,15]. 
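To illustrate the primitive approximation numerically (this is an illustrative sketch, not the production simulation code used for the trapped N-body system), the example below composes M high-temperature factors for a single particle in a one-dimensional harmonic trap and checks that the resulting partition function converges toward the exact result as M grows. Units with ℏ = m = ω = k_B = 1 and the grid parameters are choices made here purely for illustration.

```python
import numpy as np

def primitive_density_matrix(beta: float, M: int, x: np.ndarray) -> np.ndarray:
    """rho(x, x'; beta) for one particle in a 1-D harmonic trap (hbar = m = omega = 1),
    built from M primitive (Trotter) factors with symmetric splitting:
        rho_tau(x, x') ~ exp(-(x - x')**2 / (2*tau)) * exp(-tau*(V(x) + V(x'))/2).
    """
    tau = beta / M
    dx = x[1] - x[0]
    V = 0.5 * x**2
    kin = np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * tau)) / np.sqrt(2.0 * np.pi * tau)
    rho_tau = kin * np.exp(-0.5 * tau * (V[:, None] + V[None, :]))
    rho = rho_tau
    for _ in range(M - 1):          # convolution: integrate over intermediate points
        rho = rho @ rho_tau * dx
    return rho

beta = 2.0
x = np.linspace(-8.0, 8.0, 401)
dx = x[1] - x[0]
Z_exact = np.exp(-beta / 2) / (1.0 - np.exp(-beta))   # exact 1-D harmonic oscillator
for M in (1, 2, 4, 8, 16, 32):
    Z = np.trace(primitive_density_matrix(beta, M, x)) * dx
    print(f"M={M:3d}  Z={Z:.6f}  exact={Z_exact:.6f}")
```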
This can be seen by using the Trotter formula for the exponentials of a sum of two noncommuting operators e τ (A+B) = lim n→∞ e τ A/n e τ B/n n . The discretisized path integral for the N -particle density matrix at inverse temperature τ can therefore be written in the primitive approximation with symmetric splitting as where ̺ 1 ( r k , r k ′ , τ ) is the density matrix of noninteracting particles in the harmonic trap and r ij = r i − r j , However, this approximation leads to slow convergence since the potential energy in the argument of the exponentials are not slowly varying compared to the density matrix of one particle in the external potential, ̺ 1 ( r i , r i ′ , τ ). This has the consequence that eq. (7) is not a smooth function in the region where two particles are in contact, as it should. In order to get such a smooth function we use the fact that the potential energy part of eq.(7) can also be written as: where the brackets correspond an average over an arbitrary distribution of r ij (t), starting from r ij and ending at r ij ′ , which reproduces the correct high temperature limit of the primitive approximation. It is convenient to take the random walk corresponding to the kinetic energy as weight function so that g 2 is the solution of the binary scattering problem in free space: where p ij is the operator of the relative momentum between particles i and j. This leads to the so called pair-product approximation [18,15], where the density matrix is approximated as This approximation has the advantage to include exactly all binary collisions of atoms in free space, only three and more atoms in close proximity will lead to an error; convergency with respect to M → ∞ is reached much faster. In the simulation the two-particle correlation function g 2 is equal to one for noninteracting particles and plays the role of a binary correction term in presence of two-body interactions. As in [16] we take N = 10, 000 particles with a hard-core radius of a = 0.0043(h/mω) 1/2 . The transition temperature of the noninteracting Bose-gas is k B T 0 c = 20.26hω or β 0 c ≃ 0.05(hω) −1 and a value of τ = 0.01(hω) −1 was found sufficient. In the low temperature regime (k B T ≪h 2 /ma 2 ) the most important contribution to g 2 for hard spheres is the s-wave contribution, which can be calculated analytically [19]; for non vanishing relative angular momenta (l > 0) we neglect the effect of the potential outside of the hard core. In this way we can obtain an explicit formula for g 2 , for r and r ′ outside of the hard core diameter (| r| > a and | r ′ | > a), otherwise g 2 = 0. The density-density correlation function can be easily calculated as As the atoms are in a trap rather than in free space, this quantity is not a function of the relative coordinates r ′ − r ′′ of the two particles only. Imagine however that this pair distribution function be probed experimentally by scattering of light by the atomic gas, where we assume a large beam waist compared to the atomic sample. As the Doppler effect due to the atomic motion is negligible, the scattering cross section depends only on the spatial distribution of the atoms. Furthermore, for a weak light field very far detuned from the atomic transitions, the scattering cross section can be calculated in the Born approximation; it then depends only on the distribution function of the relative coordinates r ′ − r ′′ between pairs of atoms. 
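As a practical illustration of how the distribution function of relative coordinates can be estimated from sampled configurations, the sketch below builds a histogram of all pair separations, normalized per pair of atoms. The input configurations here are random placeholder data, not output of the path-integral simulation; in practice one would feed in the sampled bead coordinates.

```python
import numpy as np

def pair_correlation(configs: np.ndarray, r_max: float, n_bins: int = 100):
    """Histogram estimator of the pair distribution phi2(r) discussed in the text.

    configs : array of shape (n_samples, N, 3) with particle positions
              (here: placeholder data; in practice, sampled PIMC coordinates).
    Returns bin centres and phi2 normalized so that
    4*pi * integral( phi2(r) r^2 dr ) = 1, i.e. unit norm per pair of atoms.
    """
    n_samples, N, _ = configs.shape
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    iu, ju = np.triu_indices(N, k=1)                  # all distinct pairs
    for conf in configs:
        d = np.linalg.norm(conf[iu] - conf[ju], axis=1)
        hist += np.histogram(d, bins=edges)[0]
    centres = 0.5 * (edges[:-1] + edges[1:])
    shell_vol = 4.0 * np.pi * centres**2 * np.diff(edges)
    n_pairs = N * (N - 1) / 2
    phi2 = hist / (n_samples * n_pairs * shell_vol)   # probability density per pair
    return centres, phi2

# Placeholder "configurations": a Gaussian cloud of 100 particles, 200 samples
rng = np.random.default_rng(0)
configs = rng.normal(scale=1.5, size=(200, 100, 3))
r, phi2 = pair_correlation(configs, r_max=8.0)
print("normalization check:", np.sum(phi2 * 4 * np.pi * r**2 * (r[1] - r[0])))  # ~1
```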
We therefore take the trace over the center-of-mass position R = ( r ′ + r ′′ )/2: where we have divided by the number of pairs of atoms to normalize ϕ (2) to unity. Note that the result depends only on the modulus r of r as the trapping potential is isotropic. B. Results of the Simulation In fig.1 we show ϕ (2) (r, β) for various temperatures below T 0 c , obtained by the simulation of the interacting bosons in the harmonic trap, where the critical temperature T c is reduced compared to the ideal gas [20,16,21]. All pair correlation functions are zero in the region of the hard-core radius as they should. At larger length scales the r dependence of the result is also simple to understand qualitatively, as we discuss now. Consider first the case T > T c , where no condensate is present. As the typical interaction energy n(r)g (n(r) being the total one-particle density at r) is much smaller than k B T , we expect to recover results close to the ideal Bose gas. The size of the thermal cloud (k B T /mω) 1/2 determines the spatial extent of ϕ (2) (r); the bosonic statistics leads to a spatial bunching of the particles with a length scale given by the thermal de Broglie wavelength The Bose enhancement of the pair distribution function is maximal and equal to a factor of 2 for particles at the same location ( r = 0). This effect is preserved by the integration over the center of mass variable and manifests itself through a bump on ϕ (2) (r) in fig.1. Due to the influence of interactions the bump is suppressed at small distances and the factor of 2 is not completely obtained. For T < T c a significant fraction of the particles accumulate in the condensate. As the size of the condensate is smaller than that of the thermal cloud, the contribution to ϕ (2) of the condensed particles has a smaller spatial extension, giving rise to wings with two spatial components in ϕ (2) , as seen in fig.1. Apart from this geometrical effect the building up of a condensate also affects the bosonic correlations at the scale of λ T : The bosonic bunching at this scale no longer exists for particles in the condensate. This property, referred to as a second order coherence property of the condensate [7,8,13], is well understood in the limiting case T = 0; neglecting corrections due to interactions, all the particles are in the same quantum state |ψ 0 so that e.g. the 2-body density matrix factorizes in a product of one-particle pure state density matrices. This reveals the absence of spatial correlations between the condensed particles. This explains why in fig.1 the relative height of the exchange bump with respect to the total height is reduced when T is lowered, that is when the number of non-condensed particles is decreased. IV. COMPARISON WITH SIMPLE APPROXIMATE TREATMENTS At this stage a quantitative comparison of the Quantum Monte Carlo results with well known approximations can be made. A. In presence of a significant thermal cloud: Hartree-Fock approximation As shown in [21] in detail, at temperatures sufficiently away from the critical temperature, the Hartree-Fock approximation [20] gives a very good description of the thermodynamic one-particle properties. To derive the Hartree-Fock Hamiltonian we start from the second quantized form of the Hamiltonian with contact potential where H 0 is the single particle part of the Hamiltonian. 
Due to the presence of the condensate we split the field operatorΨ in a classical part ψ 0 , corresponding to the macroscopically occupied ground state and the part of the thermal atomsψ with vanishing expectation value ψ = 0: After this separation we make a "quadratization" of the Hamiltonian by replacing the interaction term by a sum over all binary contractions of the field operator, keeping one or two operators uncontracted, e.g.ψ †ψ †ψψ ≃ 4 ψ †ψ ψ †ψ − 2 ψ †ψ ψ †ψ . This is done in such a way that the mean value of the right hand side agrees with the mean value of the left hand side in the spirit of Wick's theorem. In the Hartree-Fock approximation we neglect the anomalous operators, such asψ †ψ † , and their averages, and we end up with a Hamiltonian which is quadratic in ψ 0 andψ, but also linear inψ andψ † . Now we choose ψ 0 such that these linear terms vanish in order to force ψ = 0. This gives the Gross-Pitaevskii equation for the condensate [22] −h 2 ∇ 2 2m + 1 2 mω 2 r 2 + g[n 0 (r) + 2n T ( r, r)] ψ 0 (r) = µψ 0 (r) where n 0 (r) = |ψ 0 (r)| 2 corresponds to the condensate density with N 0 particles and n T ( r, r) = ψ † ( r)ψ( r) is the density of the thermal cloud. Up to a constant term we are left with the Hamiltonian for the thermal atomŝ where n(r) = n 0 (r) + n T ( r, r) denotes the total density and depends only on the modulus of r. To work out the density-density correlation function, we formulate (12) in second quantization: we use the splitting (16), together with Wick's theorem and get ̺ HF ( r; r ′ , β) = ψ 0 (r)ψ 0 (r)ψ 0 (r ′ )ψ 0 (r ′ ) +ψ 0 (r)ψ 0 (r)n T ( r ′ , r ′ ) + ψ 0 (r ′ )ψ 0 (r ′ )n T ( r, r) + 2ψ 0 (r)ψ 0 (r ′ )n T ( r, r ′ ) +n T ( r, r)n T ( r ′ , r ′ ) + n T ( r, r ′ )n T ( r, r ′ ). Here we have chosen the condensate wave function to be real and corresponds to the nondiagonal elements of the thermal one body density matrix. Since the Hamiltonian (19) of the thermal atoms is quadratic inψ, this density matrix is given by In the semiclassical approximation (k B T ≫hω) we can calculate explicitly these matrix elements by using the Trotter break-up, which neglects the commutator of r and p: We finally get For the diagonal elements the summation gives immediatly the Bose function g 3/2 (z) = ∞ l=1 z l /l 3/2 . For a given number of particles N , eq.(18) and the diagonal elements r = r ′ of eq.(26) have to be solved self consistently to get the condensate density n 0 (r) and the thermal cloud n T ( r, r). With this solution we can work out the nondiagonal matrix elements of the density operator which give rise to the exchange contribution of the density-density correlation (21), and the correlation function can be written as a sum over the direct and the exchange contribution Up to now the short range correlations due to the hard core repulsion have not been taken into account, but we can improve the Hartree-Fock scheme further to include the fact that it is impossible to find two atoms at the same location: We assume that the particle at r interacts with the full Hamiltonian with the particle at r ′ but only with the mean-field of all others (over which we integrated to derive the reduced density matrix). This gives in first approximation: where the two particle correlation function g 2 is the solution of the binary scattering problem, eq.(11). Further we used the fact that g 2 ≃ 1 for particle distances of the order of λ T and larger. 
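The semiclassical ingredients introduced above, namely the Bose function g_{3/2}(z) and the thermal density it yields for a given effective potential, can be sketched numerically as follows. This is a simplified, non-self-consistent illustration in trap units (ℏ = m = ω = k_B = 1): the condensate profile is taken as a Thomas-Fermi ansatz, the thermal contribution to the mean field is dropped, and the parameter values are invented for the example only.

```python
import numpy as np

def g32(z: np.ndarray, n_terms: int = 200) -> np.ndarray:
    """Bose function g_{3/2}(z) = sum_{l>=1} z^l / l^{3/2}, for 0 <= z <= 1 (truncated)."""
    l = np.arange(1, n_terms + 1)
    return np.sum(z[..., None]**l / l**1.5, axis=-1)

def thermal_density(r, beta, mu, n0, g, lam_T):
    """Semiclassical Hartree-Fock thermal density
       n_T(r) = g_{3/2}( exp(beta*(mu - V_eff(r))) ) / lam_T**3,
    with V_eff = r**2/2 + 2*g*n0(r); the thermal part of the mean field is
    dropped here, i.e. this is one step of the self-consistency loop only.
    """
    V_eff = 0.5 * r**2 + 2.0 * g * n0(r)
    z = np.exp(beta * (mu - V_eff))
    z = np.clip(z, 0.0, 1.0 - 1e-12)          # fugacity must stay below 1
    return g32(z) / lam_T**3

# Illustration in trap units: lam_T = sqrt(2*pi*beta); all values below are invented
beta, mu, g = 0.05, 10.0, 0.01
lam_T = np.sqrt(2.0 * np.pi * beta)
n0 = lambda r: np.maximum((mu - 0.5 * r**2) / g, 0.0)   # Thomas-Fermi condensate profile
r = np.linspace(0.0, 10.0, 200)
print(thermal_density(r, beta, mu, n0, g, lam_T)[:5])
```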
In principle one should integrate over the second particle to get a new one-particle density matrix and find a selfconsistent solution of the Hamiltonian. But since the range of g 2 is of the order of the thermal wavelength, it will only slightly affect the density, so we neglect this iteration procedure. Using the solution of the coupled Hartree-Fock equations to calculate (29), and integrating over the center-mass-coordinate, we get ϕ (2) HF (r, β). As shown in fig.1, this gives a surprisingly good description of the correlation function at high and intermediate temperatures. The Hartree-Fock description must fail near zero temperature: Since the anomalous operatorsψ †ψ † andψψ have been neglected, it describes not well the low energy excitations of the systems. It is known that the zero temperature behavior can be well described by the Bogoliubov approximation [23]. In this paper it is not our purpose to calculate the correlation function using the complete Bogoliubov approach in the inhomogeneous trap potential. This could be performed using approaches developed in [24,25]. Here we use the homogeneous description of the Bogoliubov approximation and adapt it to the inhomogeneous trap case with a local density approximation. This approach already includes the essential features which the Hartree-Fock description neglects at low temperatures. We start with the description of the homogeneous system with quantization volume V and uniform density n = N/V . As in [26] we split the field operatorΨ into a macroscopically populated state Φ and a remainder, which accounts for the noncondensed particles: In the thermodynamic limit N → ∞, V → ∞, keeping N/V = n and N g = const, the typical matrix elements of δΨ at low temperatures are √ N times smaller thanâ Φ . Hence we can neglect terms cubic and quartic in δΨ, when we insert (30) in the expression of the density-density correlation function (20). Since the condensate density is given by the total density minus the density of the excited atoms, we have to express the operator of the condensate density in the same order of approximation for consistency, Finally we use the mode decomposition of the homogeneous system whereb k annihilates a quasiparticle with momentum k. The components u k and v k satisfy the following equations: together with the normalization: At low temperatures the quasiparticles have negligible interactions and we can use Wick's theorem to get the following expression for the correlation function where we used Φ(r) = V −1/2 . The quasiparticles obey Bose statistics, so that the mean number of quasiparticles with momentum k and energy E k is given by We see from eq.(35) that in the homogeneous system the density-density correlation function depends only on the relative distance r = | r ′ − r ′′ |. The derivation of the properties of the pair correlation function is given in the appendix. At T = 0 the pair correlation function has the following behavior [14,27] ϕ (2) n=const (r, where ξ = (8πna) −1/2 is the healing length of the condensate. For finite but small temperatures this structure is only slightly changed (see appendix). The modification of the low energy spectrum due to the Bogoliubov approach is responsible for the long range part of the correlation function. Apart from the edge of the condensate, the total density n(r) for low temperature in the trapped system varies rather slowly compared to the healing length ξ for the considered parameters. 
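The homogeneous Bogoliubov building blocks used above (dispersion E_k, amplitudes u_k and v_k, and the static structure factor mentioned in the introduction as the Fourier transform of the pair correlation) are easy to evaluate directly. The short sketch below does so in units ℏ = m = 1; the density and scattering length are example values chosen only for illustration, not the simulation parameters.

```python
import numpy as np

# Units: hbar = m = 1; density n and scattering length a are example values only.
a, n = 0.0043, 50.0
g = 4.0 * np.pi * a                        # coupling constant g = 4*pi*hbar^2*a/m
xi = 1.0 / np.sqrt(8.0 * np.pi * n * a)    # healing length, as defined in the text
c = np.sqrt(g * n)                         # Bogoliubov speed of sound

k = np.logspace(-2, 2, 9) / xi
eps = 0.5 * k**2                           # free-particle dispersion
E = np.sqrt(eps * (eps + 2.0 * g * n))     # Bogoliubov dispersion
u2 = (eps + g * n + E) / (2.0 * E)
v2 = (eps + g * n - E) / (2.0 * E)
S = eps / E                                # static structure factor at T = 0

assert np.allclose(u2 - v2, 1.0)           # normalization of the Bogoliubov modes
for ki, Ei, Si in zip(k, E, S):
    # S -> k/(2c) in the phonon regime (k*xi << 1) and -> 1 for k*xi >> 1
    print(f"k*xi={ki*xi:8.3f}   E={Ei:10.4f}   S={Si:.4f}   k/(2c)={ki/(2*c):.4f}")
```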
So it is possible to adapt the result of the homogeneous system to the inhomogeneous trap case. For a given density n(r) we get with a local density approximation for the pair correlation function instead of eq.(35) where u k (R), v k (R), and E k (R) are solutions of eq.(33) for the given density n(R). As shown in fig.2 this gives an excellent agreement with the Quantum Monte Carlo results at low temperature. We have checked that at this temperature the difference with the Bogoliubov solution at T = 0 is almost negligible. The good agreement with the simulation reflects that the long range behavior of the pair correlation function in this approximation is correctly described by eq.(37). We note that in an intermediate temperature regime, which is not shown, both approaches, the Hartree-Fock and the local density Bogoliubov calculation, do not reproduce the simulation results quantitatively: The maximum local error is about 5%. V. CONNECTION TO THE INTERACTION ENERGY The knowledge of the pair correlation function permits us to calculate the total energy of the trapped atoms E tot : One has to pay attention that the regularized form of the contact potential, V = gδ( r) ∂ ∂r (r·), acts on the off-diagonal elements r 12 and r ′ 12 of the density-density correlation function ϕ(r ′ 12 , r 12 , β) = r 1 ′ , r 2 ′ |ρ| r 1 , r 2 in the space of relative coordinates r 12 and r ′ 12 . As the 2-body density matrix ϕ(r ′ 12 , r 12 , β) diverges as (1 − a/r ′ 12 )(1 − a/r 12 ) we actually get the simple form: This form involves only the diagonal elements of the correlation function ϕ (2) (r, β). Both the improved Hartree-Fock solution and the Bogoliubov solution behave for small distances (r ≪ ξ) like where ϕ (2) (0, β) can be obtained graphically by extrapolating the pair correlation function to zero, neglecting the short range behavior (r < ξ); numerically it can be obtained from the Hartree-Fock calculation of (21) (see [13] for analysis of the temperature dependence of ϕ (2) (0, β) ). This behavior of the correlation functions shows that eq.(40) gives a finite contribution linear in a, which we can identify with the mean interaction energy H I : In order g 2 , eq.(40) contains a diverging part, We note without proof that this divergency is compensated within the Bogoliubov theory by a divergent part of the kinetic energy, so that the mean total energy, eq.(39), is finite. This lacks in the Hartree-Fock calculation, which is, however, limited to linear order of g. In the Thomas-Fermi limit the kinetic energy is negligible, and the interaction energy eq.(42) dominates the total energy, which can be measured. This measurement provides some information about the correlation function, however, the true correlation function is not accessible. Only the fictive correlation function ϕ (2) (0, β) for vanishing interparticle distances is obtained. VI. CONCLUSION We numerically calculated the pair correlation function of a trapped interacting Bose gas with a Quantum Monte Carlo simulation using parameters typical for recent experiments of Bose-Einstein condensation in dilute atomic gases. At temperatures around the critical point, an improved Hartree-Fock approximation was found to be in good quantitative agreement with the Monte Carlo results. The improved Hartree-Fock calculation presented in this paper takes the short-range behavior of the correlation function into account, especially the fact that two particles can never be found at the same location. 
At low temperature we compared our simulation results to a local density approximation based on the homogeneous Bogoliubov approach. The phonon spectrum changes the behavior of the pair correlation function for distances r of the order of the healing length ξ. With the knowledge of the pair correlation function we calculated the total interaction energy. We showed that the results of recent experiments on second order coherence do not measure the true correlation function, which has to vanish for small interparticle distances. Only an extrapolated correlation function is determined, where the exact short range behavior disappears. ACKNOWLEDGMENTS This work was partially supported by the EC (TMR network ERBFMRX-CT96-0002) and the Deutscher Akademischer Austauschdienst. We are grateful to Martin Naraschewski, Werner Krauth, Franck Laloë, Emmanuel Mandonnet, Ralph Dum and Bart van Tiggelen for many fruitful discussions. VII. APPENDIX In this appendix we give the explicit formulas for the pair correlation function in the Bogoliubov approach for an homogeneous system and discuss its behavior at short and long distances, since only some aspects have been discussed in literature [14,28]. Starting from eq.(35), the pair correlation function φ (2) n=const can be be written explicitly as: with R = √ 2r/ξ (ξ = (8πna) −1/2 is the definition of the healing length) and To get the behavior of eq.(43) for small distances (r ≪ ξ), we can replace f (q) by its behavior for large wavevectors, q → ∞ Using the value of the integral [29] ∞ 0 dx sin we get the short range behavior of the pair correlation function [27]: To get the long range behavior (r ≫ ξ), we integrate several times by part: For the function f (q) and its derivatives at q = 0 we get T = 0 : f (0) = 0, f (2) (0) = 1 T = 0 : f (0) = 0, f (2) (0) = 0, f (4) (0) = 0, ... and the long range behavior at zero temperature given in (37) is obtained. For finite temperature it can be shown that f (q) is an odd function of q, so that f (2n) (0) = 0 for all n. Due to that the correlation function vanishes faster than any power law in 1/R. To work out an explicit expression for finite temperatures we use this antisymmetry to extend the range of the integral (43) to −∞ and we can analytically calculate the expression for two limiting cases via the residue calculus. For large distances we only have to take the poles q 0 of f (q) with the smallest modulus into account. For λ T /2π ≪ ξ corresponding to k B T ≫ ng, and r ≫ ξ, we get q 0 = i, so that φ (2) n=const (r, β) = Note the + sign in this expression, leading to φ (2) n=const > 1/V , that we interpret as a bosonic bunching effect for thermal atoms. In the opposite limit, λ T /2π ≫ ξ and r ≫ λ 2 T /4π 2 ξ, the pole with the smallest imaginary part is given by q 0 = i4π 2 ξ 2 /λ 2 T and we get [28] φ (2) n=const (r, β) =
2018-12-21T09:29:36.288Z
1998-12-16T00:00:00.000
{ "year": 1998, "sha1": "e5495b4231040589637de1ded4d046ad32323fe9", "oa_license": null, "oa_url": "http://arxiv.org/pdf/physics/9812029", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e5495b4231040589637de1ded4d046ad32323fe9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
244779378
pes2o/s2orc
v3-fos-license
Spontaneous Fluctuations in Oscillatory Brain State Cause Differences in Transcranial Magnetic Stimulation Effects Within and Between Individuals Transcranial magnetic stimulation (TMS) can cause measurable effects on neural activity and behavioral performance in healthy volunteers. In addition, TMS is increasingly used in clinical practice for treating various neuropsychiatric disorders. Unfortunately, TMS-induced effects show large intra- and inter-subject variability, hindering its reliability and efficacy. One possible source of this variability may be the spontaneous fluctuations of neuronal oscillations. We present recent studies using multimodal TMS including TMS-EMG (electromyography), TMS-tACS (transcranial alternating current stimulation), and concurrent TMS-EEG-fMRI (electroencephalography, functional magnetic resonance imaging), to evaluate how individual oscillatory brain state affects TMS signal propagation within targeted networks. We demonstrate how the spontaneous oscillatory state at the time of TMS influences both immediate and longer-lasting TMS effects. These findings indicate that at least part of the variability in TMS efficacy may be attributable to the current practice of ignoring (spontaneous) oscillatory fluctuations during TMS. Ignoring this state-dependent spread of activity may cause great individual variability, which so far is poorly understood and has proven impossible to control. We therefore also compare two technical solutions to directly account for oscillatory state during TMS, namely, (a) using tACS to externally control these oscillatory states and then applying TMS at the optimal (controlled) brain state, or (b) oscillatory state-triggered TMS (closed-loop TMS). The described multimodal TMS approaches are paramount for establishing more robust TMS effects and allowing enhanced control over the individual outcome of TMS interventions aimed at modulating information flow in the brain to achieve desirable changes in cognition, mood, and behavior. INTRODUCTION Barker et al. (1985) were the first to show that the human brain could be stimulated non-invasively using rapidly changing magnetic fields. This transcranial magnetic stimulation (TMS) method was virtually painless, required minimal preparation, and offered a flexible stimulation coil which could be rapidly and easily moved between scalp locations (brain areas). When the TMS coil was placed on the scalp above the motor cortex, movements could be induced in contralateral body parts, and the muscles' responses could be measured using electromyography (EMG) (Rothwell et al., 1999; Hallett, 2000, 2007). These so-called "motor-evoked potentials" (MEPs) are caused by the excitation of corticospinal neurons (Berardelli et al., 1990;Burke et al., 1993;Di Lazzaro et al., 1998), and MEPs are still used in contemporary research as a measure of motor cortex excitability (Boroojerdi et al., 2002;Rossini et al., 2015). IMMEDIATE AND AFTEREFFECTS OF TRANSCRANIAL MAGNETIC STIMULATION SHOW HIGH INTER- AND INTRA-SUBJECT VARIABILITY Given the widespread use of TMS in research and clinical settings, one might assume that TMS generally leads to positive and consistent findings. Yet, the effects of TMS are not always robust and reliable. Inconsistent TMS effects between experiments/clinical trials could partially be due to methodological factors, such as differences in the coil placement method (Beam et al., 2009;Rusjan et al., 2010;Gomez et al., 2021). 
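Because much of the variability discussed below is quantified through MEP amplitudes, a minimal sketch of how a single-trial MEP size could be extracted from an EMG trace, and how its trial-to-trial variability could be summarized, is given here. The sampling rate, search window and the synthetic data are illustrative assumptions and do not describe any specific study's pipeline.

```python
import numpy as np

def mep_amplitude(emg: np.ndarray, fs: float, tms_idx: int,
                  win_ms: tuple = (15.0, 50.0)) -> float:
    """Peak-to-peak MEP amplitude in a post-TMS search window.

    emg     : single-trial EMG trace (e.g. in microvolts)
    fs      : sampling rate in Hz
    tms_idx : sample index of the TMS pulse
    win_ms  : search window after the pulse (assumed typical latency range
              for hand muscles; adjust per muscle and protocol)
    """
    i0 = tms_idx + int(win_ms[0] * 1e-3 * fs)
    i1 = tms_idx + int(win_ms[1] * 1e-3 * fs)
    segment = emg[i0:i1]
    return float(segment.max() - segment.min())

# Synthetic example: 200 trials with a noisy, variable MEP ~25 ms after the pulse
rng = np.random.default_rng(1)
fs, tms_idx = 5000.0, 2500
n_samples = 5000
amplitudes = []
for _ in range(200):
    emg = rng.normal(0.0, 5.0, n_samples)                     # baseline EMG noise
    size = rng.lognormal(mean=5.0, sigma=0.5)                 # trial-to-trial variability
    latency = tms_idx + int(0.025 * fs)
    emg[latency:latency + 40] += size * np.sin(np.linspace(0, 2 * np.pi, 40))
    amplitudes.append(mep_amplitude(emg, fs, tms_idx))
amplitudes = np.asarray(amplitudes)
print(f"mean={amplitudes.mean():.1f} uV, CV={amplitudes.std() / amplitudes.mean():.2f}")
```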
But even if methodological factors are kept constant, TMS effects can show substantial variability. There are two types of variability in the effects of TMS: different individuals may respond differently to TMS (inter-subject variability), and the effect of TMS may differ within the same individual over time (intra-subject variability). We should furthermore distinguish between two types of TMS effects: the immediate effects of singlepulse TMS, and the aftereffects of repetitive TMS ("rTMS"). Below, we present evidence that suggests that both the immediate and aftereffects of TMS show substantial inter-and intraindividual variability. The immediate effects of single-pulse TMS to the primary motor cortex are often measured with MEPs, which provide a measure of the momentary TMS reactivity (Rossini et al., 2015). Within the same individual, TMS-MEP amplitudes vary over trials (Kiers et al., 1993;Burke et al., 1995;Wassermann, 2002;Rösler et al., 2008;Goetz et al., 2014;Goldsworthy et al., 2016a). Interestingly, optimization of TMS target localization does not necessarily improve the variability and reproducibility of TMS-induced MEPs (Jung et al., 2010). This finding already suggests that factors beyond the TMS parameters may contribute to immediate TMS reactivity. Such variability in immediate TMS effects is not limited to the motor network. When stimulating early visual cortex, some individuals can perceive "phosphenes" (an illusory percept). The "phosphene threshold" (the minimal TMS intensity required to perceive a phosphene in half of the cases) is often used as a measure of visual cortex excitability (Boroojerdi et al., 2002;Bestmann et al., 2007;de Graaf et al., 2017). The probability of inducing phosphenes within the same participant can vary over time (Gerwig et al., 2003;Romei et al., 2008a,b;Dugué et al., 2011). Variability in TMS aftereffects can be illustrated by evaluating individual responses to rTMS protocols that were designed to modulate synaptic plasticity beyond the duration of stimulation (Pascual-Leone et al., 1998;Ridding and Ziemann, 2010). Low (<1 Hz) and high (>1 Hz) frequency rTMS were originally reported to decrease and increase the excitability of the human motor cortex, respectively (Wassermann et al., 1998). This may indeed be the case on average, but when inspecting individual responses, not all participants showed these effects (Maeda et al., 2000). Similarly, intermittent and continuous theta burst stimulation (iTBS and cTBS, two forms of patterned rTMS) were reported to enhance and suppress motor cortex excitability for ∼30 min after stimulation, respectively (Huang et al., 2005). These findings have not always been replicated in another subject sample (Goldsworthy et al., 2012;Hordacre et al., 2017), and even if they are present at the group level, not all individuals show these effects (Cheeran et al., 2008;Nettekoven et al., 2015;Schilberg et al., 2017). In fact, one study reported that only 1 in 4 participants showed the expected pattern of results (Hamada et al., 2013). Another TMS procedure aimed at modulating neuroplasticity is called "paired associative stimulation" (PAS). Originally, PAS involved peripheral nerve stimulation that was paired with single-pulse TMS to primary motor cortex in order to enhance corticomotor excitability (Stefan et al., 2000), but PAS has also been employed to facilitate communication between the motor cortex and interconnected cortical areas (Veniero et al., 2013). 
As for the other plasticity-inducing TMS protocols, there is high inter-subject variability in the effects of PAS (Sale et al., 2007;Florian et al., 2008;López-Alonso et al., 2014), with a recent study reporting that only 61% of participants responded to PAS (Minkova et al., 2019). Besides inter-subject variability, TMS aftereffects also show significant intra-subject variability. Some reports indicated that the aftereffects of iTBS and cTBS were relatively stable within the same individuals (Hinder et al., 2014;Vernet et al., 2014), but a recent study showed the opposite (Schilberg et al., 2017). Schilberg et al. (2017) further investigated the within-subject reliability of iTBS effects over the course of 60 min, and across two experimental sessions that were scheduled ∼8 days apart. They found that the effect of iTBS on corticospinal excitability (as measured with MEP amplitude) differed between sessions. The average increase in MEP amplitude was approximately 23% in the first session, but only approximately 6% during the second visit. From these examples, it becomes clear that TMS effects show considerable inter-and intra-subject variability, for both the immediate effects of single-pulse TMS (MEP amplitudes, phosphene induction) and the longer-lasting plasticity effects as induced by rTMS, TBS, or PAS. The limited consistency of TMS effects can have negative consequences in research and clinical settings, because TMS effects are not always predictable or optimized. If TMS effects are not sufficiently reliable, they thus have limited use as a biomarker for individual changes in neuroplasticity and concomitant desirable changes in cognition and behavior (Schambra et al., 2015). It is therefore important to identify the factors that contribute to the variability of TMS effects (Corp et al., 2020(Corp et al., , 2021, such that the consistency and efficacy of TMS can be improved. We here discuss one possible source of this variance, namely, spontaneous fluctuations in neuronal oscillations (Buzsáki and Draguhn, 2004;Pasley et al., 2009;Iscan et al., 2016;Bergmann, 2018). Below, we explain how spontaneous fluctuations in oscillatory brain state contribute to variability both in the immediate effects of TMS and in TMSinduced plasticity effects. SPONTANEOUS FLUCTUATIONS IN NEURONAL OSCILLATIONS CONTRIBUTE TO VARIATIONS IN IMMEDIATE TRANSCRANIAL MAGNETIC STIMULATION EFFECTS To investigate the link between TMS effect variability and ongoing neuronal oscillations, TMS can be combined with magneto-or electroencephalography (M/EEG). Specific characteristics of neuronal oscillations (i.e., their frequency, power, or phase; Palva and Palva, 2007) might be correlated with the immediate responsivity to single-pulse TMS. Indeed, the probability of inducing phosphenes when applying TMS to early visual cortex was negatively correlated with EEG alpha power prior to TMS (Romei et al., 2008a,b). The probability of perceiving TMS-induced phosphenes was also associated with the phase of ongoing EEG alpha oscillations (Dugué et al., 2011). Results have been less clear for the motor system. Some studies reported a negative association between pre-TMS EEG alpha power and TMS-induced MEP amplitude (Sauseng et al., 2009;Zarkowski et al., 2016). Others reported a negative association between TMS-MEP amplitude and oscillatory beta power (Lepage et al., 2008;Mäki and Ilmoniemi, 2010;Schulz et al., 2014), or no relation with oscillatory power in any frequency band (Mitchell et al., 2007;Berger et al., 2014). 
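The studies cited above share a common single-trial logic: estimate oscillatory power in a window immediately preceding the TMS pulse and relate it to the MEP amplitude of the same trial. The sketch below illustrates this pairing for the alpha band (8-12 Hz) using Welch's method; the band limits, window length and the random data are assumptions for illustration only, so no real relation is expected in the example output.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import spearmanr

def prestim_band_power(eeg_trial: np.ndarray, fs: float,
                       band: tuple = (8.0, 12.0), win_s: float = 1.0) -> float:
    """Mean spectral power in `band` over the last `win_s` seconds before TMS.
    `eeg_trial` is assumed to end at the TMS pulse (pre-stimulus segment only)."""
    seg = eeg_trial[-int(win_s * fs):]
    f, pxx = welch(seg, fs=fs, nperseg=len(seg))
    mask = (f >= band[0]) & (f <= band[1])
    return float(pxx[mask].mean())

# Illustration on random data: the point is only the per-trial pairing of
# pre-TMS power with MEP amplitude, not any expected correlation.
rng = np.random.default_rng(2)
fs, n_trials = 1000.0, 100
alpha_power = []
for _ in range(n_trials):
    trial = rng.normal(size=int(2 * fs))          # 2 s of pre-TMS "EEG"
    alpha_power.append(prestim_band_power(trial, fs))
mep_amp = rng.lognormal(5.0, 0.5, n_trials)       # placeholder MEP amplitudes
rho, p = spearmanr(alpha_power, mep_amp)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```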
Spontaneous fluctuations in the phase of ongoing beta and alpha (Schaworonkow et al., 2018(Schaworonkow et al., , 2019Bergmann et al., 2019) oscillations may also play a role in TMS-MEP variability. Note that inconsistencies across studies may in part be explained by methodological differences, such as differences in TMS intensity (Pellegrini et al., 2018). Schilberg et al. (2021) recently assessed the relation between the power and phase of ongoing EEG alpha and beta oscillations with motor cortex TMS reactivity. They found that TMS-MEP amplitude correlated positively with pre-TMS oscillatory power in the alpha and beta bands. The authors also reported a significant effect of alpha phase on TMS-MEP amplitude, but there was no consistent alpha phase that led to high TMS-MEP amplitudes across participants. The latter is in contrast with previous reports showing that higher TMSinduced MEP amplitudes are mostly induced during alpha troughs instead of peaks (Schaworonkow et al., 2018(Schaworonkow et al., , 2019Zrenner et al., 2018). Interestingly, a standard FFT analysis did not reveal a significant correlation between pre-TMS beta phase and TMS-MEP amplitude, while a Hilbert transform did show an effect (Schilberg et al., 2021). This discrepancy between analyses may be partially explained by the variability in individual beta frequency (IBF), which is larger than the variability in individual alpha frequency (IAF) (Haegens et al., 2014). The Hilbert transform is less affected by frequency variations compared to the FFT approach, since the former can be used for non-stationary time series (Schilberg et al., 2021). Another contributing factor might be that participants were not involved in any active motor task. Ongoing beta power was therefore naturally low, making it more difficult to reliably estimate beta phase. When TMS is applied at high beta power, the relation between beta phase and TMS-MEP amplitude indeed becomes evident (Torrecillos et al., 2020). In any case, most of the evidence presented above is of correlational nature, because oscillations were measured rather than experimentally manipulated. DIRECT EVIDENCE FOR A CAUSAL LINK BETWEEN (CONTROLLED) OSCILLATORY STATE AND VARIATIONS IN IMMEDIATE TRANSCRANIAL MAGNETIC STIMULATION EFFECTS Transcranial alternating current stimulation (tACS) can be used to establish the causal relevance of neuronal oscillations (Herrmann et al., 2016). TACS is a form of non-invasive brain stimulation (NIBS) that involves electrical stimulation with a sinusoidal waveform (Antal and Paulus, 2013). It can be used to enhance the power of oscillations of a certain frequency within the stimulated brain area (Herrmann et al., 2013;Vossen et al., 2015;Vieira et al., 2020), potentially through mechanisms of entrainment (Thut et al., 2011;Huang et al., 2021) or spike-timing dependent plasticity (Herrmann et al., 2013;Vossen et al., 2015). The causal relevance of oscillatory phase can then be established by presenting stimuli at certain phases of the tACS waveform . It was previously shown that it is possible to apply TMS at certain tACS phases with high temporal precision (ten Oever et al., 2016), and that it is feasible to use simultaneous tACS-TMS to investigate the causal relation between oscillatory tACS phase and TMS-MEP amplitudes (Raco et al., 2016). The same logic was applied by Schilberg et al. (2018), who administered TMS pulses at eight equidistant phases of a tACS waveform, using IBF-, IAF-, or sham tACS to primary motor cortex. 
The authors found that tACS modulated TMS-MEP amplitude only for the IBF-tACS condition, and this effect seemed to be specific to individuals with lower IBF frequencies. These findings suggest that beta-tACS phase at the time of TMS influences the immediate effects of TMS (intra-subject variability), and that this effect interacts with the individual dominant beta frequency (between-subject variability) (Haegens et al., 2014). SPONTANEOUS FLUCTUATIONS IN NEURONAL OSCILLATIONS CONTRIBUTE TO THE PROPAGATION OF TRANSCRANIAL MAGNETIC STIMULATION PULSES THROUGH FUNCTIONALLY CONNECTED NETWORKS Simultaneously combining TMS with M/EEG or tACS is an excellent approach to investigate the link between ongoing neuronal oscillations and the variability of TMS effects. However, this approach does not allow an accurate (high-resolution) visualization of the immediate effects of TMS at the level of the brain. Functional magnetic resonance imaging (fMRI) can be used to visualize TMS signal propagation, given its potential to measure whole-brain activation with good spatial resolution (Walsh and Cowey, 2000;Sack and Linden, 2003;Sack, 2006;Bestmann et al., 2008;Reithler et al., 2011). Simultaneous TMS-fMRI studies have shown that the effects of TMS pulses can extend beyond the targeted brain area, since signals can spread toward interconnected brain areas (Ruff et al., 2006;Sack et al., 2007;Blankenburg et al., 2010). Though the local effects of TMS pulses do not reach deeper than the superficial cortex, remote effects can even be observed in subcortical areas (Bergmann et al., 2021). Nonetheless, to achieve a full understanding of how TMS pulses propagate through functionally connected networks, it is important to investigate whether and how TMS-evoked fMRI responses vary as a function of ongoing neuronal oscillations on a trial-by-trial level. This was made possible with a unique setup, which simultaneously combines TMS, EEG, and fMRI. This technically challenging experimental triad approach was introduced by our lab in 2013 (Peters et al., 2013). We demonstrated that concurrent TMS-EEG-fMRI is feasible and safe in both phantom and human measurements, and we showed that the EEG and fMRI data were of sufficient quality. Yet, the full potential of this approach only became apparent in a recent publication from our lab, in which we mapped whole-brain TMS signal propagation as a function of the pre-TMS oscillatory state as indexed by simultaneous EEG (Peters et al., 2020). In four healthy individuals, we applied triple-pulse (15-Hz) TMS to the right dorsal premotor area (PMd), while continuously measuring EEG. Triple-pulse TMS was used to probe the motor network with a sufficiently strong stimulus, rather than to modulate neuroplasticity as with typical rTMS protocols (the findings described here thus relate to immediate TMS effects). TMS to PMd evoked both local and remote fMRI activation in a cortico-subcortical motor network, resembling the activations as seen for voluntary movements. It again became evident that different individuals may respond differently to TMS (inter-subject variability): two individuals showed less/more confined activations in response to TMS compared to the other two individuals. These individuals also showed less engagement of the motor network irrespective of TMS ("low activators, " the others were called "high activators"). 
It should be noted that the difference in TMS-evoked responses may in part be due to differences in TMS intensity between the "low activators" and "high activators." In any case, to evaluate immediate TMS-evoked responses within the cortico-subcortical motor network as a function of oscillatory state, it was crucial that participants showed reliable engagement of the motor network. The EEG-informed analyses were therefore performed only for the two "high activators." The main question of interest was whether TMS signal propagation within a cortico-subcortical motor network varies with pre-TMS parietal alpha power. Pre-TMS alpha power was negatively correlated with TMS-evoked fMRI responses in both local and remote (including subcortical) areas of the motor network. This negative association is in line with the supposed inhibitory role of alpha oscillations (Klimesch et al., 2007). From these findings, we can conclude that, within the same individual, TMS pulses may propagate differently throughout the motor network depending on pre-TMS oscillatory state (intra-subject variability). Our group has recently also established the feasibility of using simultaneous TMS-EEG-fMRI for non-motor areas (Janssens et al., 2020a). This comes with additional technical challenges, including the determination of the TMS site and intensity, because most non-motor areas are so-called "silent" areas that do not show any overt response to TMS. DIRECT EVIDENCE FOR A CAUSAL LINK BETWEEN (CONTROLLED) OSCILLATORY STATE AND VARIATIONS IN TRANSCRANIAL MAGNETIC STIMULATION AFTEREFFECTS Thus far, we focused on within-and between-subject variability in the immediate effects of TMS, and how such variability can be linked to ongoing neuronal oscillations. There is reason to believe that changes in oscillatory state also contribute to variations in TMS-induced neuroplasticity (TMS aftereffects). Goldsworthy et al. (2016b) applied cTBS to the primary motor cortex, while phase-aligning the TMS pulses to either the peak or the trough of concurrent alpha-tACS. They investigated whether the response to cTBS, as measured with TMS-induced MEP amplitudes, depended on the alpha-tACS phase. The excitability of the motor cortex was suppressed (TMS-MEP amplitudes were reduced) when cTBS was aligned with alpha-tACS troughs. Crucially, cTBS did not modulate motor cortex plasticity when cTBS was aligned with alpha-tACS peaks. Furthermore, the effect of tACS-trough-aligned cTBS was greater for individuals with higher IAFs (Goldsworthy et al., 2016b). Thus, TMS-induced neuroplasticity may vary both as a function of the controlled momentary oscillatory state and the intrinsic dominant oscillatory frequency. Besides oscillatory phase, the power of ongoing neuronal oscillations might be relevant for TMS-induced neuroplasticity as well. Guerra et al. (2018) showed that concurrent gamma tACS enhanced and prolonged iTBS-induced increases in TMS-MEP amplitude, in contrast to beta-tACS and sham-tACS (Guerra et al., 2018). This positive effect of simultaneous gamma tACS on iTBS efficacy was later replicated, but it seems that simultaneous gamma tACS reduced the efficacy of cTBS (Guerra et al., 2020a). These findings are especially relevant in a clinical context, where the goal is to employ rTMS to modulate neuroplasticity for longer periods of time. It would be beneficial to optimize plasticityinducing TMS protocols based on oscillatory brain state, such that treatment efficacy can be improved. 
ACCOUNTING FOR SPONTANEOUS FLUCTUATIONS IN NEURONAL OSCILLATIONS DURING TRANSCRANIAL MAGNETIC STIMULATION Thus far, we have outlined that immediate and prolonged TMS effects vary considerably within-and between-individuals. We also showed that spontaneous fluctuations in neuronal oscillations can explain at least part of the variability in TMS effects, as can more stable oscillatory characteristics (individual peak frequencies). The question then becomes: how can we incorporate such oscillatory information into our TMS protocols? The first step is to form a clear hypothesis regarding the to-betargeted oscillatory frequency, since different frequency bands are associated with different functions (Başar et al., 1999;Ward, 2003;Clayton et al., 2018). Even within the same (e.g., alpha) frequency band, there might be different functionally relevant oscillation generators in the brain, which are not easily disentangled in the M/EEG signal (Bollimunta et al., 2011;Haegens et al., 2015;Sokoliuk et al., 2019). More advanced techniques might be needed to extract the relevant oscillatory frequency from the M/EEG signal (Schaworonkow et al., 2018). Once the relevant oscillatory frequency has been determined, there are two potential technical solutions that can directly account for oscillatory brain state during TMS: simultaneous tACS-TMS, and M/EEG-based "closed-loop" TMS (Huang et al., 2017). As discussed previously, TMS can be applied at the (controlled) optimal tACS phase (Raco et al., 2016;ten Oever et al., 2016;Fehér et al., 2017). Crucially, individuals differ in terms of their oscillatory brain rhythms. For instance, peak alpha frequencies (IAFs) can range between 7 and 14 Hz across individuals (Haegens et al., 2014). To ensure optimal tACS efficacy, it is therefore important to individually calibrate the tACS frequency, for instance based on a resting state M/EEG measurement (Janssens et al., 2021) or through functional identification (Gundlach et al., 2017;Schilberg et al., 2018). Besides personalizing the tACS frequency, it might also be necessary to individually determine the optimal tACS phase to deliver TMS, given the recent finding that no consistent alpha phase was correlated to high TMS-MEP amplitudes (i.e., high TMS responsivity) across participants (Schilberg et al., 2021). Simultaneous tACS-TMS has already been used to link tACS beta phase to motor cortex TMS reactivity (Guerra et al., 2016;Schilberg et al., 2018). It has furthermore been shown that single TMS pulses applied to dorsolateral prefrontal cortex propagate differently through a cortical network depending on the phase of concurrent theta-tACS (Fehér et al., 2017). Thus, by applying single-pulse TMS at the optimal (controlled) tACS phase, TMS signal propagation may be modulated. Besides its relevance for immediate TMS effects, tACS can also be used to enhance and prolong TMS aftereffects, as described above (Goldsworthy et al., 2016b;Guerra et al., 2018Guerra et al., , 2020a. Simultaneous tACS-TMS is useful, but not perfect. Individual peak frequencies show good within-subject test-retest reliability (Grandy et al., 2013;Haegens et al., 2014;Janssens et al., 2021), but peak frequencies can still fluctuate, and the extent to which this happens differs across individuals. For example, IAF decreased over the course of 1 h during visual task performance, with some participants showing reductions of up to 2 Hz (Benwell et al., 2019). 
If tACS were to be applied at the originally determined peak frequency, tACS efficacy may be compromised, since the matching between the endogenous dominant frequency and the driving (tACS) frequency would not always be optimal (Romei et al., 2016). The best approach might thus be to continuously track the instantaneous dominant frequency, and to adjust the tACS frequency accordingly. However, it is difficult to recover EEG signals during tACS due to the sizeable tACS artifacts (Kasten and Herrmann, 2019). Another complication of the simultaneous tACS-TMS approach is that if the effect of tACS on oscillatory activity is not verified through means of concurrent M/EEG measurements, we cannot be certain that the applied tACS phase corresponds to the phase of ongoing neuronal oscillations. Finally, it could be the case that there is an "optimal" amount of oscillatory power, in the sense that if tACS enhances oscillatory power above a certain threshold, it might reduce the reactivity of a brain area to TMS. In contrast to simultaneous tACS-TMS, the second technical solution to account for oscillatory brain state during TMS does measure ongoing neuronal oscillations. In this so-called "closed-loop" TMS approach, the M/EEG signal is continuously measured, and the timing of TMS pulses is adjusted to the optimal power and/or phase of the ongoing oscillations (Bergmann et al., 2016;Zrenner et al., 2016;Thut et al., 2017;Guerra et al., 2020b). This method can only be successful if the instantaneous phase can be reliably estimated (that is, if the power of the ongoing oscillations is sufficiently high). This has two important implications if the aim is to target specific oscillatory phases. Firstly, it might be necessary to control participants' cognitive state (i.e., task engagement vs. rest) to ensure high oscillatory power. Secondly, the closed-loop TMS approach might fail in individuals that show naturally/pathologically low oscillatory power. Irrespective of these technical challenges, EEG-based closedloop TMS has already been applied successfully. It was shown that MEP amplitudes were higher during the rising phase of ongoing slow (<1 Hz) oscillations compared to the falling phase, when TMS was applied to primary motor cortex (Bergmann et al., 2012). Interestingly, these findings were consistent across two cognitive states (wakefulness and sleep). In another study, rTMS applied to primary motor cortex at the troughs of the ongoing alpha rhythm enhanced MEP amplitudes, while rTMS applied at alpha peaks did not . These findings clearly show that temporally targeting TMS pulses to the optimal oscillatory state improves its efficacy both in terms of signal propagation (immediate effects) and the induction of neuroplasticity (aftereffects). CONCLUSION TMS is widely used in both research and clinical settings. Still, its immediate and prolonged effects are not robust and reliable, as is evident from both intra-and inter-subject variability. One potential source of this variability may be the spontaneous fluctuations of neuronal oscillations. We showed this for both immediate TMS effects (TMS-MEP amplitudes, TMS phosphene induction, TMS-fMRI signal propagation), and for TMS aftereffects (of rTMS, TBS, or PAS). The oscillatory brain state can be accounted for during TMS by using either simultaneous tACS-TMS or closed-loop M/EEG-TMS. This may reduce both inter-and intra-individual variability in TMS effects. 
The described multimodal TMS approaches allow enhanced control over the individual outcome of TMS protocols aimed at modulating information flow and/or neuronal plasticity in the healthy and diseased brain. They therefore pave the way to stronger and more consistent TMS-induced improvements in cognition, mood, and behavior. AUTHOR CONTRIBUTIONS SJ: conceptualization, writing-original draft, writing-review and editing, and funding acquisition. AS: conceptualization, writing-review and editing, funding acquisition, and supervision. Both authors contributed to the article and approved the submitted version.
Increase in positive selection of CD8+ T cells in TAP1-mutant mice by human beta 2-microglobulin transgene. Mice harboring a deletion of the gene encoding the transporter associated with antigen presentation-1 (TAP1) are impaired in providing major histocompatibility complex (MHC) class I molecules with peptides of cytosolic origin and lack stable MHC class I cell surface expression. They consequently have a strongly reduced number of CD8+ T cells. To examine whether selection of CD8+ T cells is dependent on TAP-dependent peptides, we partially restored MHC class I cell surface expression in TAP1-deficient mice by introduction of human beta 2-microglobulin. We show that selection of functional CD8+ T cells can be augmented in vivo in the absence of TAP1-dependent peptides. CD8+ T lymphocytes are positively selected by MHC class I molecules to ensure self-restriction, a process that requires proper surface expression of MHC class I molecules (1). MHC class I molecules present peptides, derived mainly from cytosolic proteins, to CD8+ T cells. Most MHC class I molecules rely on these peptides, as provided by the heterodimeric transporter associated with antigen processing (TAP) complex, for efficient expression at the cell surface (2). Mice in which the gene encoding the TAP1 subunit is deleted have strongly reduced MHC class I cell surface expression and are impaired in positive selection of CD8+ T cells (3). However, HLA-A2 can be expressed at intermediate levels on the cell surface of TAP-deficient cells, due to its ability to bind signal sequence-derived peptides, a TAP-independent source of peptides (4). We crossed mice transgenic for HLA-A2 and human β2-microglobulin (hβ2m) (TAP1+βA2) onto a TAP1-deficient background to examine whether TAP-dependent peptides are essential for development of CD8+ T cells, or whether TAP-independent peptides suffice. We also crossed mice transgenic for HLA-B27 and hβ2m (TAP1+βB27) with TAP1-mutant mice. HLA-B27 is inefficiently expressed at the cell surface of TAP-mutant cells (5), and the TAP1−βB27 mice were intended as controls. However, whereas only HLA-A2 was expressed at the cell surface in a TAP1-deficient background, TAP1−βA2 and TAP1−βB27 mice showed a similar increase in percentage of CD8+ T cells compared with TAP1− mice. Rescue of CD8+ T cells is therefore independent of the ability of the transgenic human MHC class I molecules to bind signal sequence-derived peptides and must be due to a feature shared by the transgenic animals, to wit, the presence of hβ2m.

Materials and Methods

Mice. TAP1-deficient mice have been described previously (3). HLA-A2/hβ2m- and HLA-B27/hβ2m-transgenic mice have been described (6). Mice were kept in the animal facilities of the Massachusetts Institute of Technology. Mice 6-10 wk of age were used in all experiments. Steady State Distribution of Kb Molecules in the Thymus. Thymuses were isolated from 6-8-wk-old mice. Thymocytes were removed by gently squeezing the thymic lobes with forceps and rinsing with PBS. The remaining capsule was macerated and dissolved in IEF sample buffer by repeated shearing through a 25G 1½ needle. Samples were separated on 1D-IEF and blotted to nitrocellulose paper. The blot was incubated with a rabbit antiserum raised against a peptide derived from the cytoplasmic tail of Kb (αp8) followed by a horseradish peroxidase-coupled goat anti-rabbit antibody (Amersham, Arlington Heights, IL).
Bound antibody was detected by a chemiluminescence detection kit (Amersham) and exposure to films (X-OMAT AR; Eastman-Kodak Co., Rochester, NY). Stability Assay. Freshly isolated spleen cells were labeled by lactoperoxidase-catalyzed iodination (see reference 18). Cells were lysed in 2 ml NP-40 lysis mix. L~ates were divided into four equal parts and were incubated for 15 min on ice after addition of 0.5 ml lysis mix with or without 60 #M YAPGNFPAL peptide. Samples were kept on ice or incubated at 39~ for 45 min. Lysates were precleared twice, followed by immunoprecipitation of H-2 b corn-plexes with a conformation-dependent anti-H-2 b serum (a gift from Dr. S. Nathenson, Albert Einstein College of Medicine, Bronx, NY). Immunoprecipitates were analyzed by 12.5% SDS-PAGE. Peptide-binding Assay. YAPGNFPAL peptide was labeled by chlofamine T-catalyzed iodination (see reference 11). 106 splenocytes were incubated in PBN buffer (PBS + 1% BSA + 0.01% NAN3) with the indicated concentrations of 12sI-YAPGNFPAL for 1 h at 23~ Cells were lysed in NP-40 lysis mix containing 100 #M cold FAPGNYPAL. Lysates were precleared once, and K b molecules were precipitated with the c~p8 serum. Immunoprecipitates were counted in a 3/counter. Values are the mean of triplicates. Results. Spleen cells from TAP1-BA2 mice were stained with an mAB directed against HLAoA2. They express an intermediate level of HLA-A2 on the cell surface (Fig. 1 a) compared with cells from TAPI+3A2 mice. This is in agreement with observations on TAP-deficient cell lines (8). The cell surface expression of HLA-A2 is paralleled by an approximately twofold higher percentage of CD8 § cells in the thymus (Fig. 2 a) and an approximately sixfold higher percentage of CD8 + T cells in blood (Fig. 2 b) in TAP1-3A2 mice, as compared with TAP1-mice. HLAoB27 cannot be detected either at the cell surface in TAPl-deficient animals, in contrast to HLA-A2 (Fig. 1 b). A 3-h metabolic labeling of TAP1-3B27 spleen cells followed by immunoprecipitation with the conformation-dependent antibody W6/32 and analysis of the immunoprecipitates on a 1.D-IEF gel shows that a substantial fraction of the HLA-B27 molecules fail to assemble with fl2m (Fig. I g). Parallel immunoprecipitation with an antiserum raised against denatured free heavy chains reveals that most B27 molecules are present as free heavy chains (Fig. 1 g). The few complexes that are present do not carry sialic acids, the acquisition of which is indicative of proper intracellular transport. These data underscore the reliance of HLA-B27 on TAP for a suitable source of peptides required for assembly and surface expression. Nonetheless, the TAPl-flB27 mice show a similar increase in the percentage of CD8 + T cells in the periphery, as do the TAP1-3A2 mice (Fig. 2 b). By examination of the surface expression of H-2K b and D b, this paradox may be satisfactorily explained. Surface expression of H-2K b and D b class I molecules in a murine TAP2-deficient cell line can be increased by transfection of this cell line with hfl2m (5). Indeed, K b shows a fivefold and D b a twofold increase in cell surface expression on spleen cells from both TAPI-flA2 and TAP1-3B27 mice compared with TAP1-mice (Fig. 1, c-f). Pulse-chase analysis shows that these K b and D b heavy chains preferentially associate with hfl2m and are transported to the cell surface, as judged also by acquisition of sialic acids (Fig. 1 k). Relative levels of MHC class I surface expression in the thymus were determined at steady state. 
Thymic lobes were depleted of thymocytes, and extracts of the remaining thymic capsules were resolved on 1D-IEF and analyzed by immunoblotting. In TAP1-animals, the majority of K b heavy chains remains unmodified (Fig. 1 i), but in TAPI-flA2 animals (and TAP1-fiB27 animals, data not shown) modification of K b is observed, indicating that the K b molecules have traversed the trans-Golgi network. Thus both in the periphery and in the thymus, surface expression of endogenous MHC class I molecules is increased in the presence of h32m. This increase in MHC expression may explain selection of CD8 + T cells in the TAP1-3A2 and TAP1-3B27 mice. We examined polyclonality of the CD8 + T cell population present in TAP1-3A2 mice. Cell suspensions, made from lymph nodes of TAPI-BA2 and TAPI+BA2 mice, were stained with antibodies against CD8 and different TCR V3 chains. Of all the V3 chains tested, those used by CD8 + T cells in TAPI+3A2 mice are also used by CD8 + T cells in TAP1-3A2 mice, indicating that the CD8 + T cell population in TAP1-3A2 mice is polyclonal (Fig. 2 c). These CD8 + T cells are able to respond in a primary mixed lymphocyte reaction. After 5 d of culture, strong CD8 ' § T ceU-dependent cytotoxicity against H-2 a targets is observed, similar to that in TAPI+~A2 mice (data not shown). MHC molecules from TAP-deficient cells (9-11) and mice (3) have been proposed to be devoid of peptide based on their failure to be expressed efficiently at the cell surface, their thermolability, and their increased peptide-binding capacity. Do MHC class I molecules in TAP1-3A2 mice display similar properties? Detergent lysates of cell surface-iodinated spleen cells were incubated at 4~ or 39~ in the presence or absence of the 9-mer peptide YAPGNFPAL, a variant of the Sendai virus peptide FAPGNYPAL (12,13) that contains the anchor residues for both K b and D b. H-2 b class I complexes were then immunoprecipitated with a conformation-specific anti H-2 b serum. Labeled MHC dass I complexes are absent from lysates of TAP1-cells, but they are detected in lysates from TAP1-3A2 cells (Fig. 3 a) in amounts in accordance with the data obtained by flow cytometric analysis (Fig. 1, c and e). Exposure of lysates of TAP1-3A2 cells to 39~ results in a strong decrease in reactivity with the conformationspecific anti H-2 s antiserum, but addition of YAPGNFPAL peptide prevents thermal unfolding of the H-2 s molecules (Fig. 3 a). No such loss of immunoreactive material is observed in extracts from TAP1 + and TAP1 +3A2 cells. To determine the pool size of K b molecules on the cell surface that are available for binding dass I peptides, a peptidebinding assay was performed on freshly isolated splenocytes by use of radiolabeled 12sI-YAPGNFPAL. K s molecules on splenocytes from both TAP1-and TAP1-3A2 mice bind considerably more t2sI-YAPGNFPAL than K b molecules on TAP1 + and TAP1 +3A2 cells (Fig. 3 b), despite lower levels of K b call surface expression (Fig. 1 c). An approximately twofold difference in binding capacity between TAP1-3A2 and TAP1-cells is observed, whereas TAP1-3A2 cells express approximately fivefold more K s molecules on the cell surface. A significant proportion of K s molecules on TAP1-3A2 cells may have bound peptides from a TAPindependent source (15). These peptides presumably bind with a lower affinity than TAP-dependent peptides, since they fail to stabilize H-2 s complexes in vitro (Fig. 3 a). 
Discussion In both ~zm-and TAP1 -mice, small numbers of CD8 + T cells are present that can be expanded in vivo and vitro (16)(17)(18)(19). Our data show that selection of CD8 + T cells in TAP1-mice can be augmented in vivo by increasing cell surface expression of H-2K b and D b through heterodimerization with h32m. In vivo selection of CD8 + T cells is therefore not strictly dependent on peptides of cytosolic origin and can be mediated by MHC class I molecules that are either devoid of peptide, have bound peptides from a TAPindependent source, or both. The pool of CD8 + T cells selected by these molecules is polyclonal and alloreactive. In fetal thymic organ cultures (FTOCs), an in vitro model for thymic selection, peptides contribute to the spedficity of positive selection (20)(21)(22)(23). In FTOCs derived from 32mmice, cell surface expression of MHC class I molecules is not detectable, and structural similarity between the selecting peptide and the nominal antigen is required for positive selection of a monoclonal T cell population beating a K brestricted, ovalbumin-specific TCR (22). Selection in TAP1-FTOCs of a Db-restricted lymphocytic choriomeningitis virus peptide-specific T cell can be accomplished by the nominal antigen at low peptide concentrations (30/xM) (23). However, at a 10-fold higher peptide concentration, and therefore at much higher MHC class I density (21), the same T cell can be selected by a DS-binding peptide (influenza NP366-374) structurally unrelated to the nominal antigen. At physiological densities of MHC dass I/peptide complexes, selection of a given CD8 § T cell may be less dependent on a specific peptide and may be accomplished by MHC class I molecules bearing a heterogeneous set of peptides not necessarily related to the nominal antigen other than by their ability to bind to the restriction element in question. In the early phase of this project, invaluable assistance was provided by Drs. M. van Roon and P. Laird in the generation of the HLA/h/~2m-transgenic mice. We thank Drs. A. Bandeira, M. T. Heemels, H. G. Ljunggren, and T. N. M. Schumacher for invaluable discussions; R. M. Machold for discussions and contribution to experiments shown in Fig. 1; L. Vaught and G. Paradis for help with flow cytometric analysis; and Drs. H. N. Eisen, A. Hill, and J. Lafaille for critically reading the manuscript.
Testing AJAX functionality with UniTESK 1 —AJAX (Asynchronous JavaScript and XML) is a very promising technology for building interactive web applications. At the same time, AJAX significantly complicates the development of the client side of web applications. The paper demonstrates the possibility of utilizing the UniTESK test development technology for testing the client side functionality of AJAX web applications. Using UniTESK, test systems are developed for 8 AJAX web applications. Then the fault revealing capability of the test systems is evaluated in experiments. INTRODUCTION A classic web application is built around the notion of web pages and generally consists of a set of static web pages or server side programs that generate web pages.Such a web application is sufficiently inferior in interactivity to a web application developed with AJAX.The main reason is that the user communicates with the classic web application synchronously, that is he supplies input to the browser, e.g.clicks on a submit button or a link, and then waits until the browser refreshes the page.As opposed to this, web applications developed with AJAX can retrieve data from the server asynchronously in the background without interfering with the display and behavior of the existing page. At the same time, improving interactivity with AJAX sufficiently increases the complexity of the client side development.Using the JavaScript programming language, an AJAX application developer should implement an intermediate level between the browser and web-server which is responsible for handling user actions, managing browser-server dialog, and changing the interface according to web server responses.This task is hard enough to make a couple of faults. In this paper, we consider the problem of testing the client side functionality of AJAX web applications.We show that qualitative tests can be elaborated using UniTESK [1,2], an industrial model based test development technology designed in Institute for System Programming of Russian Academy of Sciences. UniTESK was initially applicable to only systems with synchronous interfaces.After a period of time, an approach [3, 4, 5, and 6] was designed and implemented that extends this technology to asynchronous interfaces.Since then This work was supported by the RFBR (grant 09-01-00576-a) UniTESK has given a good account of oneself in testing several classes of complex applications with asynchronous interfaces such as internet protocols, components of a distributed operating system, and functions of the standard binary interface of Linux.Actually, these successful applications of UniTESK suggested that we apply this technology to AJAX web applications. UniTESK offers a test suite architecture consisting of a set of components that are used as building blocks to organize test systems.In the paper, we present a technique for developing these components so that the test system they form aims at revealing faults in the client side of the AJAX web application under test.We do not consider the problem of testing the server side of AJAX applications in this paper. 
After presenting the approach to testing systems with asynchronous interfaces proposed by UniTESK and our technique of its use, we conduct several experiments in which we practically apply them.The obtained results show the applicability of UniTESK and the technique for testing the client side functionality of AJAX web applications.At the end of the paper, we present a comparison between our approach and the existing approaches to highlight the key advantages of UniTESK and our technique.We also discuss the main limitations and drawbacks of our approach. The paper is structured as follows.Section II is devoted to the AJAX technology.We consider the architecture, the behavioral model, and the main features of a typical AJAX application.Section III outlines the UniTESK approach to testing systems with asynchronous interfaces.In section IV, we present our technique for testing the client side of AJAX web applications with UniTESK.We empirically evaluate the applicability of UniTESK and the technique in section V. Section VI compares our approach with the existing techniques.We conclude with a summary of our key contributions, and suggestions for future work in section VII. II. AJAX AJAX is an approach to web interaction that combines a set of well known technologies to achieve high interactivity of web applications.In this section, we consider the architecture, the behavior and the main features of a typical AJAX application. Let us discuss AJAX applications comparing them with web applications that we call "classic".The architectures of both the classic and AJAX applications are shown in Fig. 1. A classic web application consists of a set of web pages.Some web pages may be described in static HTML (Hypertext Markup Language) files; the others may be generated by the server side programs.A web page is displayed to the user, containing lists of links and form elements that allow the user to drill down to further web pages. Figure 1. The architectures of classic and AJAX web applications The main functionality of a classic web application is implemented at the server side.Some animation and additional functionality can be provided using client side programming languages and technologies, but it doesn't change the main behavioral model of the application.This model works as follows: the user supplies input to the browser, e.g.types a URL (Uniform Resource Locator), clicks on a hyperlink, or submits a form; the browser sends the HTTP (Hypertext Transfer Protocol) request for the URL to the web server; the web server responses with a new web page; the browser renders the page and waits for the user's next input. The key features of classic web applications are as follows: 1.The user interacts with the web application synchronously, i.e. he requests for the next web page only after the response to the previous request has been handled by the browser and the appropriate web page has been displayed.2. HTTP requests are issued for entire web pages and the entire page gets refreshed as a result of this action. HTTP requests are issued by the browser, and HTTP responses are handled by the browser.4. HTTP requests occur as a direct consequence of user actions.As contrasted with a classic web application, the user communicates with an AJAX application asynchronously.The behavioral model proposed by AJAX works as follows: 1.The user performs an action on the web interface, e.g.clicks on a hyperlink, or a button.2.An appropriate user interface event is fired.3. 
The handler of this event, a JavaScript function, is called. It builds an asynchronous HTTP request, sets a callback function that will handle the response, and issues the request to the web server. 4. The web server replies with the data. 5. The callback function is called; it reads the data and changes the client side state, which includes the DOM (Document Object Model) state, cookies, and global JavaScript variables. According to this model, the user is able to go on working with the AJAX web application right after the user interface event handler has been executed, i.e. the user does not have to wait until the client-server dialog has been completed, as happens in the case of a classic web application. Because of the small size of the transferred data, the browser responds very quickly and the user does not feel any delay. The key features of AJAX web applications are as follows: 1. The user interacts with the AJAX application asynchronously, i.e. he goes on working with the application while asynchronous HTTP requests are issued and responses are handled in the background. 2. The web server does not respond with the entire web page; it responds with data that the client side JavaScript uses to dynamically refresh a small part of the currently displayed page. 3. HTTP requests are formed, issued and handled by JavaScript functions. 4. User actions can trigger the execution of JavaScript functions that may change the client side state and perform communication between the client and server, but JavaScript functionality is also able to work independently from user actions. It is usually achieved with special JavaScript functions that use timers to call other JavaScript functions. 5. The JavaScript programming language doesn't support multithreading. The browser uses one thread to handle user actions and execute JavaScript functions, including user interface event handlers and callback functions. 6. Concurrent HTTP requests are possible in some AJAX web applications, i.e. the next HTTP request may be issued before the response to the previous one has been handled. In the paper, we consider AJAX applications, the client-server dialog of which complies with the behavioral model presented in this section. It doesn't matter which mechanism an AJAX application uses to perform asynchronous client-server communication. Let us note that the use of the XMLHttpRequest [7] object implies a sequence of HTTP responses to a single HTTP request. We take this fact into account. We also suppose that an AJAX application itself is able to perform client-server communication independently from user actions.
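To make this interaction model concrete, the following is a minimal JavaScript sketch of a single asynchronous request/response cycle; the URL, element ids, and handler names are illustrative assumptions rather than code from any particular application.

// Minimal sketch of the AJAX interaction model described above.
// The URL, element ids, and handler names are invented for illustration.

// Steps 1-2: the user clicks a button and a user interface event is fired.
document.getElementById('refreshButton').addEventListener('click', onRefreshClick);

// Step 3: the event handler builds an asynchronous request, registers a
// callback for the response, and sends the request to the web server.
function onRefreshClick() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/comments/latest', true);   // true = asynchronous
  xhr.onreadystatechange = function () {
    // Step 5: the callback reads the data and updates only a small part of
    // the page (the client side state) without reloading it.
    if (xhr.readyState === 4 && xhr.status === 200) {
      document.getElementById('comments').innerHTML = xhr.responseText;
    }
  };
  xhr.send(null);                               // step 4 happens on the server
  // The handler returns immediately; the user can keep working while the
  // request is in flight, which is exactly the asynchrony AJAX introduces.
}

Feature 6 above corresponds to the case where such a handler fires again before the previous onreadystatechange callback has run.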
III. TESTING ASYNCHRONOUS INTERFACES WITH UNITESK

UniTESK is a model-based test automation technology. It can be used for testing systems with synchronous and asynchronous interfaces. A synchronous interface implies that the subsequent action on the interface may be performed only after the interface has already responded to the previous action. The interface of a software system is considered to be asynchronous if this system can simultaneously interact with several other systems or interactions may be initiated by the system itself. The approaches and test suite architectures for testing systems with synchronous and asynchronous interfaces differ. We will discuss UniTESK implying only the asynchronous case in the remainder of the paper. Each test system developed with UniTESK consists of a set of components. UniTESK defines the number of the components, their responsibilities and relationships. Fig. 2 contains the test suite architecture proposed by UniTESK for testing systems with asynchronous interfaces. Some of the components are already implemented and are used as is, independently of the type of the application under test. Their representations have the gray background in Fig. 2. The other components should be implemented by the tester, and their implementations vary depending on the application under test. UniTESK provides formal descriptions to describe these components, extensions of some of the industrial programming languages to develop them, and software instruments to translate formalisms into code in the target industrial programming language. The following formal descriptions are provided: specifications, mediators, and scenarios. The test system developed with UniTESK supposes that the interface of the application under test consists of atomic operations of two types: stimuli and reactions. The test system supplies input to the application by means of applying stimuli to it. The application outputs through reactions that the test system evaluates. Reactions can be of two types: immediate or deferred. An immediate reaction is a reaction that is visible from outside immediately after affecting the target system. When testing an application with an asynchronous interface, reactions to some stimuli may not be observed immediately because of the internal processes in the application. A deferred reaction is a reaction that becomes visible from outside some time after a set of actions on the target system. Stimuli and reactions are the notions of UniTESK. The tester has to represent the real application interface through stimuli and reactions, and provide UniTESK with this interface. The following question could be raised: is it always possible to represent an arbitrary asynchronous interface through atomic stimuli and reactions? We haven't heard of a formal proof of it, but we also haven't heard of a contrary instance refuting it. A formal interface of the application under test consisting of stimuli and reactions is fixed in specifications. Requirements to the application behavior are also fixed in specifications in the form of pre-conditions and post-conditions of stimuli and reactions, and invariants of data types. A specification also contains data structures that model the state of the application under test, i.e. describe the model state. The model state reflects the state of the application under test during testing. The requirements in specifications are imposed on the model state changes. The pre-condition for the stimulus describes constraints on the state in which the test system is able to apply the stimulus. Violation of the pre-condition for the stimulus indicates that the test is made incorrectly. The immediate reaction does not have its own pre-condition. The post-condition for the stimulus and the post-condition for the immediate reaction are the same thing. The post-condition for the stimulus defines the requirements to the result of its application, i.e. to the state change and possibly the return value of the application operation the stimulus refers to, e.g.
when applying the stimulus leads to the call of a public application operation that returns a value.The pre-condition for the deferred reaction describes if appearance of the reaction in the given state is possible.When precondition for the deferred reaction is violated, incompliance between the behavior of the application and its specification is registered.The post-condition for the deferred reaction checks compliance of the result obtained when the reaction emerges, to the expected one. UniTESK defines the structure of specifications.The main goal of this structure is to provide the test completeness metric. Specifications are translated into the test suite architecture components that take part in the verification of stimuli and reactions: model state, action oracles and state mediators. To be able to verify requirements to stimuli and reactions, the test system should somehow link specifications to the application under test.Action Mediator component is generated from the formal description called mediator.It performs actions on the application under test, i.e. really applies stimuli.It also registers immediate reactions.The other component, implemented in the target programming language, registers the appearance of the deferred reactions.It is called catcher.The component that keeps information about the order of stimuli and reactions is called interaction register.The exact order of stimuli and reactions can not always be observed when testing a system with an asynchronous interface; therefore the UniTESK approach to testing systems with asynchronous interfaces was designed to be able to take advantage of the observable partial order of stimuli and reactions.So, interaction register usually keeps information about the detected partial order of stimuli and reactions. The component of the UniTESK test suite architecture, which is called test scenario, is generated from the formal description of the same name and is used to combine Stimuli are applied and reactions appear during the execution of the scenario function.The completion of the scenario function indicates that all the stimuli have already been applied and all the reactions have been cached.After the scenario function has been executed, hyper oracle begins evaluating the observable behavior of the application under test.Information about the detected order of stimuli and reactions is utilized during the evaluation process as follows. The test system goes over all the possible orders of stimuli and reactions that conform to the partial order detected.For each particular order, each stimulus, and reaction test oracle checks the pre-condition, state mediator synchronizes the model state with the state of the application under test, and again test oracle checks the post-condition.If this procedure discovers at least one order, for which all the constraints on stimuli and reactions are met, the test system claims that the behavior of the application under test is acceptable. 
To completely automate the execution of UniTESK tests and automatically generate sequences of test inputs, the developer has to define test scenario automata.A special component of the UniTESK test suite architecture goes over all the states of test scenario automata and calls each scenario function in each accessible state.To define test scenario automata, a function should be implemented that returns the state of test scenario automata after each scenario function call.In theory, the state of test scenario automata is constructed on the base of the model state.In practice, it may be an arbitrary function.This function allows the test system to construct test scenario automata incrementally during testing. UniTESK imposes the following restriction on the behavior of the application under test: after applying a set of stimuli to the application, it demonstrates a set of reactions during a finite period of time and goes to a state in which no reactions appear spontaneously.Such states are called stationary.Stationary states allow the test system to perform the evaluation process and call the next scenario function at the state in which the previous scenario function finished. In this section, we have only outlined the main characteristics of the approach we use for testing the client side of AJAX web applications.The details can be found in [3, 4, 5, and 6]. IV. TESTING AJAX APPLICATIONS WITH UNITESK In this section, we present a technique for developing the UniTESK test suite architecture components so that the test system they form aims at revealing faults in the client side functionality of AJAX web applications. A. The technique In practice, functional testing of web applications aims at discovering faults of two types: general faults such as dead links and incorrect markup, and business logic faults concerning the behavior of the web application under test.Business logic faults are discovered when the web application under test incorrectly reacts to a logically related set of stimuli.The technique we propose in this section aims at discovering faults concerning the behavior of the client side functionality of AJAX applications. At the first step, the requirements to the behavior of the client side of the AJAX application under test are extracted.When testing a web application, it is natural that there aren't any well-structured documents describing functional requirements.The probability of getting the requirements to the client side of the AJAX application is even lesser.We do not propose a method for the extraction of the requirements in the paper, because elaboration of such a method requires additional investigations and a separate paper is better to be written on the matter.We only assume here that the result of the requirements extraction procedure is a set of wellstructured documents describing the requirements to the client side of the AJAX application under test. At the second step, the extracted requirements are to be formally fixed in specifications in the form of pre-conditions and post-conditions of stimuli and reactions, and invariants of data types.To be able to formalize the requirements using the software contracts proposed by UniTESK, the tester must represent possible interactions of the client side functionality with its environment as a set of stimuli and reactions. We believe that an adequate model is shown in Fig. 
3. This model conforms to the behavioral model of a typical AJAX web application presented in section II, but it only concerns the client side of the application. An individual action on the application interface represents a stimulus if this action leads to the modification of the client side state or if an asynchronous HTTP request is issued. A user interface event occurs as a result of such an action. The handler of this event is called. It may change the client side state or issue an asynchronous HTTP request. The result of its execution is modeled as a reaction. The new proxy server component of the test system intercepts the request issued by the user interface event handler. It in turn issues the HTTP response. It is modeled as a stimulus. The callback function that handles this response is called. The client side state can be modified as a result of its execution, or something else can happen. It is modeled as a reaction. The client side functionality of the AJAX web application under test may change the client side state or issue an asynchronous HTTP request independently. Such an activity is modeled as a reaction. Having this model, the requirements to the stimuli and reactions can be formalized. Stimuli are specified trivially. A reaction results in a client side state change and possibly an asynchronous HTTP request. So, the post-condition for the reaction should assess the client side state change and the HTTP request in case the request is issued as a result of the reaction. In order that the test system may really verify the behavior of the AJAX application, the action mediator, catcher and proxy server test suite architecture components are implemented at the third step. Action mediator contains functions that programmatically perform actions on the application interface. Catcher must detect the reactions, and extract and save the client side state changes after them. The single-threaded nature of JavaScript helps a lot for the extraction of the client side state changes. If the extraction of the client side state change is accomplished by a JavaScript function, it is guaranteed that there is no other activity that modifies the client side state at the same time. Proxy server is not a part of the UniTESK test suite architecture. It is a new component specifically designed to support testing of AJAX applications. Proxy server has two responsibilities:
• intercept asynchronous HTTP requests;
• apply stimuli that model the responses of the target web server.
The use of the proxy server allows modeling the real situation of multiple users working with a single web server. The server side state can be changed by the users. Proxy server is able to respond taking the possibility of server side state changes into account. The client side state changes and the intercepted HTTP requests are used by the state mediator to synchronize the state of the requirements model with the state of the AJAX application under test during the verification procedure. At the fourth step, specifications are used to determine the test coverage criteria. The stronger the criteria, the more complicated the test scenarios. At the fifth step, test scenarios are developed so that the chosen test coverage criteria can be achieved during testing.
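The stimulus/reaction decomposition can be illustrated with a small browser-side sketch. It is not the paper's implementation (there the browser is driven through Selenium Remote Control and requests are intercepted by a Java proxy server); it only shows, under invented names, how a user-action stimulus, an observed request, and a captured state change line up with the model.

// Illustrative sketch of the stimulus/reaction decomposition; all ids,
// URLs and helper names are invented and the hooks are browser-side
// simplifications of the proxy-server and catcher roles.

var interactionLog = [];  // detected (partial) order of stimuli and reactions

// Stimulus: the test programmatically performs a user action.
function applyClickStimulus(buttonId) {
  interactionLog.push({ kind: 'stimulus', name: 'click:' + buttonId });
  document.getElementById(buttonId).click();   // fires the UI event handler
}

// Reaction: the handler issues an asynchronous request; wrapping
// XMLHttpRequest.prototype.send lets the harness observe it.
var originalSend = XMLHttpRequest.prototype.send;
XMLHttpRequest.prototype.send = function (body) {
  interactionLog.push({ kind: 'reaction', name: 'request-issued', body: body });
  return originalSend.call(this, body);
};

// A further stimulus would be the canned HTTP response chosen by the test
// (the proxy-server role), modeling a particular server side state.

// Reaction: after the callback runs, the catcher snapshots the client side
// state change; JavaScript's single thread guarantees no concurrent mutation.
function catchDomReaction(containerId) {
  interactionLog.push({
    kind: 'reaction',
    name: 'dom-updated',
    state: document.getElementById(containerId).innerHTML
  });
}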
Testers often do not take faults concerning multiple asynchronous HTTP requests into account, because of their low probability. A typical example of such a fault can be the following: the second asynchronous HTTP request is issued before the response to the previous one has come; due to network delay, the response to the second request comes before the response to the first one; the callback function that handles the second response removes a DOM element; the response to the first request comes; its callback function crashes trying to access the deleted DOM element (a JavaScript sketch of such a failure is given below). It is obvious that the proposed technique for modeling stimuli and reactions allows developing scenario functions aimed at testing multiple asynchronous HTTP requests.

B. Application domain

The approach to testing systems with asynchronous interfaces proposed by UniTESK has two main application conditions:
1. A formal interface consisting of atomic stimuli and reactions may be provided for the real interface of the application under test. This formal interface should adequately model the real application interface.
2. After responding to a set of stimuli, the application under test must go to a stationary state in which no reactions can appear spontaneously.
The technique we have just presented explains how to get a formal interface complying with the first condition. As concerns the second condition, we have mentioned in section II of the paper that AJAX web applications may have client side functionality that changes the client side state and communicates with the server independently from user actions and at an unpredictable time. Formally, there are no stationary states in such applications. If such functionality is out of the scope of testing, it usually may be ignored or deactivated by hand. If the test system must take such functionality into account, it has to model stationary states. For instance, the test system may artificially execute a piece of JavaScript during the evaluation process in order that the application under test does not change the client side state or issue an HTTP request. At the moment, we cannot imagine a client side functionality of an AJAX web application that cannot be modeled and tested using UniTESK and our technique.

V. EMPIRICAL EVALUATION

In order to evaluate the applicability of the UniTESK test development technology and the technique of its use presented in the paper for testing functionality of the client side of AJAX web applications, we perform a set of experiments. We collect 8 AJAX design patterns. Each pattern describes how the objects, components, and levels constituting the AJAX web application should interact in order that the application could respond to user actions in a certain way or a certain interactivity effect could be achieved. The patterns primarily describe the client sides of AJAX web applications. Implementing them allows us to get AJAX applications that are both implemented differently and behave differently. We implement each pattern in an AJAX web application. So, we have 8 AJAX applications. After that, using the UniTESK technology and our technique, we create a test system for each AJAX web application developed. In order to assess the fault-revealing capability of the test systems, we intentionally introduce faults into the source code of the AJAX web applications, perform testing and count the percentage of the faults revealed. This section presents the results of our experiments.
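One concrete example of the kind of client side fault such experiments target is the concurrent-request failure described at the end of Section IV. The sketch below deliberately contains that fault; the URLs, element ids and handler names are invented for illustration.

// Sketch of the concurrent-request fault described in Section IV.
// All ids, URLs and handler names are invented, and the code is
// intentionally faulty.

function loadDetails(itemId) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/details?item=' + itemId, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Faulty assumption: the panel still exists when this response arrives.
      document.getElementById('detailsPanel').innerHTML = xhr.responseText;
    }
  };
  xhr.send(null);
}

function closePanel() {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/close', true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // This response may overtake the earlier one and remove the panel.
      var panel = document.getElementById('detailsPanel');
      panel.parentNode.removeChild(panel);
    }
  };
  xhr.send(null);
}

// If closePanel() is triggered before the loadDetails() response has been
// handled, and its response arrives first due to network delay, the
// loadDetails callback then fails: getElementById('detailsPanel') returns
// null and the assignment to innerHTML throws. A scenario function that
// feeds the two canned responses in both orders exposes the fault.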
A. AJAX design patterns

Here we briefly introduce 8 AJAX design patterns and their implementations, for which we develop test systems. Detailed descriptions of the patterns can be found in [8, 9, and 10].

Pattern: Explicit Submission. Problem: How can information be submitted to the server? Solution: Instead of automatically submitting upon each browser event, require the user to explicitly request it, e.g. submit upon a button click. AJAX application: A simple authorization form.

Pattern: Periodic Refresh. Problem: How can the application keep users informed of changes occurring on the server? Solution: The application periodically issues asynchronous requests to gain new information, e.g. one request every five seconds. AJAX application: An application that alerts the user when a new comment has been added.

Pattern: Submission Throttling. Problem: How can information be submitted to the server? Solution: Instead of submitting upon each JavaScript event, retain data in a browser-based buffer and automatically upload it at fixed intervals. AJAX application: An application that submits a single field periodically as changes are made.

Pattern: Predictive Fetch. Problem: How can you make the AJAX application respond quickly to user activity? Solution: Have the application anticipate likely user actions and call the server in preparation. AJAX application: An application that preloads the next page of the article.

Pattern: Browser-side Cache. Problem: How can you make the AJAX application respond quickly to user activity? Solution: Retain server results in a browser-side cache. Whenever the application performs an asynchronous request, it first checks the cache. If the query is held as a key in the cache, the corresponding value is used as the result, and there is no need to access the server. AJAX application: A simple calculator that performs calculations on the server and retains the results in a client-side cache.

Pattern: Guesstimate. Problem: How can you cut down on calls to the server? Solution: Instead of requesting information from the server, use historical data and make a reasonable guess on the client. AJAX application: An approximate calculation of the number of registered users.

Pattern: Pseudo-threading. Problem: AJAX web applications are single-threaded. Some of them require complex processing on the client. If the thread of execution is busy performing such processing, users won't be able to perform input. Solution: Instead of solving the entire problem at once and returning, a processing function is called once in a while and incrementally processes a bit more of the problem before yielding. AJAX application: Sorting of a big table on the client.

Pattern: Multi-stage Download. Problem: How can you optimize downloading performance? Solution: Break content download into multiple stages, so that faster and more important content will arrive first. AJAX application: An application that downloads additional links after the main content of the article has been downloaded.
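As an illustration of one of these patterns, a minimal browser-side sketch of Periodic Refresh might look as follows; the polling URL, interval, and element id are assumptions invented for the example, not code from the applications used in the experiments.

// Illustrative sketch of the Periodic Refresh pattern (invented names).

var POLL_INTERVAL_MS = 5000;

function pollForNewComments() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/comments/new', true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200 && xhr.responseText) {
      // Refresh only a small part of the page with the freshly arrived data.
      document.getElementById('commentAlert').textContent =
        'New comment: ' + xhr.responseText;
    }
  };
  xhr.send(null);
}

// The timer makes the application communicate with the server independently
// of user actions.
setInterval(pollForNewComments, POLL_INTERVAL_MS);

This timer-driven request is exactly the kind of activity, independent of user actions, that the technique has to suspend or model when it needs stationary states for result evaluation.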
B. Experiments

To implement test systems for the AJAX applications introduced in the previous subsection, we exploit both the Java and JavaScript programming languages. The JavaTESK [11] toolkit is used to implement the UniTESK test suite architecture components and to run the test suites developed. The Selenium Remote Control [12] testing tool is used to drive the browser, programmatically perform actions on the web interface, and access the resulting DOM states. We use Mozilla Firefox as the browser in our experiments. Our technique of the use of UniTESK introduces the proxy server component into the test suite architecture. We implement this component using the Java programming language. It is universal: implemented once, it is included in all the test systems.

We perform five experiments for each AJAX web application and corresponding test system; thus forty experiments are conducted in total. Each experiment consists in introducing a single fault into the source code of the application, running the corresponding test system on the application, and analyzing the test results. Table 1 summarizes the results of the experiments performed. Here are some examples of the faults introduced: building incorrect HTTP requests in JavaScript functions, removing user interface event handlers, wrong modifications of the DOM, removing an XMLHttpRequest object from the pool of XMLHttpRequest objects, setting timers with wrong time intervals, removing identifiers of HTML elements, etc. All the faults appear at the client side of the AJAX applications.

The test systems reveal 85% of all the faults introduced, on average. We believe this is a good result that confirms the applicability of UniTESK and the technique of its use for testing the functionality of AJAX web applications. It is worth noting that the percentage of faults revealed depends on the quality of the test systems developed.

VI. COMPARISON WITH THE EXISTING APPROACHES

We did not manage to discover another approach specifically designed for testing the client side of AJAX applications. In this section, we present an overview of the existing AJAX functional testing approaches. These approaches test an AJAX application as a whole; therefore they are able to reveal faults in both the client side and the server side of AJAX applications. We compare them with the approach we propose in the paper, i.e. the UniTESK technology complemented with the technique of its use.

A. Approaches proposed by the scientific community

We succeeded in discovering three approaches specifically designed for functional testing of AJAX web applications:
• Invariant Based Testing [13];
• State Based Testing [14];
• Search Based Testing [15].
All the approaches use an FSM (Finite State Machine) model of the AJAX web application under test to produce tests; therefore we label them as FSM-based test generation approaches.

The Invariant Based Testing approach is directed rather to revealing faults in dynamic DOM states, such as dead links, incorrect markup, and the absence of widgets, DOM elements, and error messages, than to organizing complex test situations in which the test system applies a set of logically related stimuli to the application and verifies the reactions to these stimuli. Accomplishing the latter is the primary purpose of the approach we propose in the paper, i.e.
the UniTESK technology complemented with the technique of its use. So, the Invariant Based Testing approach and our approach aim at revealing faults of different types; therefore there is no point in their further comparison.

The State Based Testing approach divides test creation into two stages. At the first stage, the FSM model of the AJAX application under test is constructed on the basis of a set of preliminarily recorded real execution traces of the application. The states of the FSM are abstracted from the real DOM states. The transitions are the JavaScript method invocations triggered by user events or server responses and modifying the DOM. At the second stage, tests are generated on the basis of a traversal of the FSM extracted at the first stage. The test generation is accomplished so that the generated tests are able to automatically reveal faults leading to the modification of a correct sequence of states in the FSM model of the application.

Because the FSM model is constructed on the basis of the real behavior of the application, the approach is expected to show its best in regression testing. The authors strengthen the approach by providing the ability to express general requirements on the behavior of the application in the form of pre-conditions and post-conditions. This feature of the approach makes it possible to apply it for functional testing. An advantage of the software contracts proposed by UniTESK is that they additionally provide test coverage criteria. The State Based Testing approach deals with concurrent asynchronous HTTP requests, but it only warns that there may be a problem. As opposed to this, our approach reveals faults concerning multiple asynchronous HTTP requests. The authors of State Based Testing claim that their approach is a good complement to classic functional testing.

The Search Based Testing approach is based on the State Based Testing approach. The authors propose a technique that enhances the fault-revealing capability of the tests generated. The main features of the approach remain the same.

A common advantage of the State Based Testing and Search Based Testing approaches over our approach is that they are better automated. Because these approaches are designed for testing only AJAX applications, their authors tried to automate them as much as possible. In contrast, the UniTESK technology does not take AJAX-specific features into account, because it was developed to be applicable to general purpose software. That is why developing some of the UniTESK test suite architecture components is a fairly labor-intensive task. For instance, special functions should be implemented so that the action mediator can programmatically perform actions on web interface elements. Each particular AJAX application requires its own functions, because no two AJAX applications have the same interface. Other functions should be implemented so that the catcher can get DOM states after the reactions. A small illustration of such functions is given below.
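As a rough illustration of the hand-written glue just mentioned, the sketch below uses the Selenium Python bindings. The experiments in this paper use Java and Selenium Remote Control, so the language, the element identifiers "query", "go" and "result", and the application URL are illustrative assumptions, not the actual test-system code. One function plays the role of an action mediator performing an action on the web interface; the other plays the role of a catcher that waits for the asynchronous response to update the DOM and reads the resulting state.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def submit_query(driver, text):
    # Action-mediator-style function: performs a stimulus on the web interface.
    driver.find_element(By.ID, "query").send_keys(text)
    driver.find_element(By.ID, "go").click()

def read_result(driver, timeout=10):
    # Catcher-style function: waits until the AJAX response has modified the
    # DOM and then returns the resulting state of the element of interest.
    element = WebDriverWait(driver, timeout).until(
        EC.presence_of_element_located((By.ID, "result")))
    return element.text

driver = webdriver.Firefox()
try:
    driver.get("http://localhost:8000/calculator")  # hypothetical AJAX application
    submit_query(driver, "2+2")
    print(read_result(driver))
finally:
    driver.quit()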
B. Approaches used in industrial practice

We examined existing test automation tools that support functional testing of web applications. The tools that are positioned as AJAX test automation tools implement the Capture and Playback [16] approach. According to this approach, the tester records the user actions; saves them in a script; enhances the recorded script with verification points, where some property or data is verified against an existing baseline; plays back the script and observes the results. The Capture and Playback approach is very useful for regression testing. It is also widely used for functional testing of classic web applications.

In order to support testing of AJAX applications, Capture and Playback testing tools implement either a method for automatically detecting responses to asynchronous HTTP requests or a method for detecting DOM state changes. Such a method allows a Capture and Playback testing tool to determine whether the application has already responded to the user action during the playback stage. The Capture and Playback approach supporting AJAX is implemented in IBM Rational Functional Tester [17], SWEA [18], and many other test automation tools. The Capture and Playback approach does not aim at creating complex test sequences like the approach we propose in this paper. Using it leads to the generation of a large number of test scripts. A script usually verifies one sequence of possible user actions. Weak modularity is a common disadvantage of such scripts. As opposed to this, the test suite architecture is one of the most competitive advantages of UniTESK.

The most flexible of the existing AJAX functional testing techniques is to use a combination of a unit testing framework and a software library which makes it possible to programmatically perform actions on the application interface and then access the resulting DOM state. An example of such a technique is the JUnit [19] unit testing framework complemented with the Selenium Remote Control testing tool. By analogy with the Capture and Playback testing tools, AJAX support is limited to designing and implementing either a method for detecting responses to asynchronous requests or a method for detecting DOM state changes. Let us note that this technique is flexible because it provides minimal support for test automation. In fact, tests are handmade, but can be executed automatically.

VII. CONCLUSION

In this paper, we demonstrate the applicability of the UniTESK test development technology for testing the client side functionality of AJAX web applications. We outline the approach to testing systems with asynchronous interfaces proposed by UniTESK, present the technique for modeling and testing AJAX applications with UniTESK, practically evaluate UniTESK and our technique, and compare our approach with the existing approaches.

Though UniTESK can be used to develop test systems for AJAX web applications, UniTESK is not an AJAX-specific testing technique. Developing tests for AJAX with UniTESK is a very labor-intensive task. Future work may consist in enhancing the automation level of the approach we propose in the paper.

In this paper, we develop the AJAX applications ourselves and then apply UniTESK to them. In our future work, we should apply UniTESK to applications actually working on the Internet.
Our approach can only be used for testing the client side functionality of AJAX web applications. On the one hand, the approach is directed to the client side faults that are typical and specific for AJAX web applications. On the other hand, we do not test the server side at all. Future investigations may consist in designing an AJAX testing technique that will take both the client side and the server side faults into account.

Figure 2. The UniTESK test suite architecture for testing systems with asynchronous interfaces.
Figure 3. Interactions of the client side of an AJAX web application with its environment.
Technical Developments and Clinical Use of Telemedicine in Sleep Medicine

The use of assistive technology and telemedicine is likely to continue to shape our medical practice in the future, notably in the field of sleep medicine, especially within developed countries. Currently, the number of people suffering from obstructive sleep apnea syndrome (OSAS) is increasing. Telemedicine (TM) can be used in a variety of ways in sleep medicine: telediagnostics, teleconsultation, teletherapy and telemonitoring of patients being treated with positive pressure devices. In this review, we aim to summarize the recent scientific progress of these techniques and their potential clinical applications, and give consideration to the remaining problems related to TM application.

Introduction

During the last decade, the use of assistive technology and telemedicine (TM) has dramatically increased in developed countries. In 2005, 1090 PubMed publications were related to TM; by 2015, the number of papers published on this subject had increased to 2307. TM involves bi-directional interaction between patients and healthcare providers [1]. The type of data transmitted by the patient, the frequency of data transfer, and the frequency of interactions between patient and healthcare providers vary largely across studies, according to the purpose of the TM interventions [1]. Patient data transmission methods include telephone, Internet, video and smartphone data transfer. Feedback to the patients is given using the same methods. In sleep medicine, telehealth is applicable at each stage of patient management, from diagnosis to the monitoring of treatment. In this review, we will focus on the potential use of TM in obstructive sleep apnea syndrome (OSAS). We aim to summarize the recent scientific progress of these techniques and their potential clinical applications, and the remaining problems and barriers related to TM application. We will detail recent advances in telediagnostics, teleconsultation, teletherapy and telemonitoring in OSAS patients [2].

Telediagnostics for Obstructive Sleep Apnea Syndrome

Today, polysomnography (PSG) remains the reference method for the diagnosis of sleep disordered breathing (SDB) [3]. Patients spend one night in a sleep laboratory to enable an attended, in-lab PSG to be performed. As the number of obese people continues to rise, OSAS is becoming increasingly prevalent [4] and, as a result, the waiting time to obtain a PSG can be very long. The high cost of attended in-lab PSG (equipment, maintenance costs, staff costs and hospitalization) can be a limiting factor. To address these problems, development of simplified portable monitoring devices (PM), to perform home sleep testing, began in the 1980s and continues today. These machines generally record up to a maximum of four cardiorespiratory signals (polygraphs), and their use has led to earlier diagnosis and treatment initiation, better patient comfort and potential cost-savings [5]. However, the lack of a skilled attendant and the patient's inability to use a PM correctly can affect the quality of the recording: failed exams are about 10%-20% [6,7]. Home-PSG, a complete sleep recording device, which includes electroencephalographic channels, avoids many of these problems, but the failure rate due to the lack of a skilled attendant continues to be a problem. With home-PSG, poor quality recordings vary between 4.7% and 20%, compared to 0%-8% in the sleep lab environment [8].
TM has been studied in these circumstances in order to obtain a better quality of sleep recordings. Three studies have tested the impact of real-time telematic transmission for unattended PSG in patients with a clinical suspicion of OSAS. Gagnadoux et al. [9] included 99 patients in a prospective randomized crossover trial. Each patient underwent one home-PSG and one in-hospital unattended, but telemonitored, PSG (TM-PSG) on two consecutive nights. The TM-PSG was recorded in the medical unit of two peripheral hospitals, with remote control from the central sleep lab. The sleep technicians checked the quality of the recordings every 30 min and instructed the nursing staff at the two hospitals to replace any electrodes which gave faulty signals. The failure rate was 11% for TM-PSG vs. 23% for home-PSG. Thirteen TM-PSGs required technical intervention to replace lost sensors, but in four cases, the nurse in the medical unit was not able to correct the problem. Without telemonitoring, the failure rate would have been 19%, but it was reduced to 11% with remote supervision. A cost analysis was also performed and concluded that although telemedicine was more effective (half the number of failures) it was also very expensive ($244 vs. $153 for home-PSG) [10].

We have conducted two studies to assess the performance of the Sleepbox® device to obtain real-time remote supervision of home or unattended PSG from the sleep lab [11,12]. The first pilot study was performed in Brussels, on 21 patients with a clinical suspicion of OSAS. Sleepbox® is a wireless system able to communicate with the polysomnograph to transmit recordings to the sleep lab in real time. It also includes a microphone. The sleep technicians performed remote monitoring of the home PSG every hour. In case of sensor loss, the technician was able to call the patient, through Skype® or via the Sleepbox® microphone, to ask the patient to replace the sensors correctly (Figure 1).
Ninety percent of the recordings were of excellent quality. Among the 10% of PSG failures, one was due to the polysomnograph (battery failure), and one was related to a recording of poor quality. For sensor losses, two Skype® interventions were required, resulting in readjustment of the defective probes. On the basis of these first encouraging results, and after some technical improvements, we performed a second study in Paris [12], to monitor in real time unattended PSG performed on 27 patients hospitalized in acute coronary care and suffering from acute coronary syndrome. The purpose was to screen for OSAS in this particular population, likely to suffer frequently from SDB. The PSG was remotely controlled from the sleep lab of the same hospital, located in another building. The sleep lab nurses called their colleagues in the acute coronary care unit to replace probes in case of faulty signals. The results were much more interesting: all sleep recordings were interpretable and 89% of PSGs were graded as excellent. Eighty-two percent of the patients exhibited SDB. The Sleepbox® was efficient in 78% of the patients and all the problems related to remote surveillance were linked to the 3G network connection. However, 10 interventions were performed: eight for replacement of the nasal cannula, one for electrode repositioning, and one for the pulse oximeter, effectively increasing the global quality of the PSG recordings. We did not assess the cost of the Sleepbox® system.

A recent study from Coma-del-Corral et al. [13] investigated telemonitored polygraphy (TM-PG) in patients with clinical suspicion of OSAS. TM-PG was performed on 40 patients, in a "Virtual Sleep Unit", in another hospital some 80 km from the central sleep lab. The sleep lab nurses performed real-time continuous TM-PG checks. Continuous video monitoring (via a webcam) was also available. No PG failure was observed, but data transmission failed for 2.5% of the recordings. The cost analysis also showed that telemedicine is associated with additional costs: TM-PG cost €277 compared to €145 for a PSG.

These studies, which were conducted in different settings and on patients with various clinical conditions, have shown that real-time attended intermittent or continuous remote supervision of home/unattended PSG/PG is feasible and has the potential to reduce failure rates of sleep recordings.
Teleconsultation for Obstructive Sleep Apnea Syndrome

Teleconsultation is a system to facilitate healthcare accessibility for OSAS patients. In a recent interesting study, Isetta et al. [14] tested the feasibility of teleconsultation. Two different schemes were studied to assess whether teleconsultation could replace: (1) the continuous positive airway pressure (CPAP) follow-up consultation (50 patients); (2) the CPAP training consultation (40 patients randomized to receive face-to-face vs. teleconsultation). For CPAP follow-up, 95% of the patients were satisfied with the teleconsultation, and 66% declared that teleconsultation could replace 50%-100% of the CPAP therapy follow-up visits. Younger patients (<65 years) were more inclined to recommend teleconsultation to others. For CPAP training, patients trained via videoconference demonstrated the same knowledge about OSAS and CPAP therapy as the face-to-face group (94% of correct answers vs. 92%). Video-trained patients also showed similar performance on mask placement and mask leak avoidance.

Coma-del-Corral et al. [13] implemented teleconsultation in patients with confirmed OSAS. After the TM-PG, patients were randomized to receive either a face-to-face consultation or a teleconsultation to receive the results of their sleep study. The teleconsultation was made through videoconferencing. The 16 patients requiring CPAP were then treated at home with an auto positive airway pressure device (APAP) and the data were telematically transmitted during two nights. At six months, in this very small group of patients, adherence was not different: 85% for the face-to-face consultation group and 75% for the teleconsultation group. In these two prospective studies, teleconsultation seems to be an interesting and viable option for the purpose of CPAP education and follow-up.

Teletherapy with CPAP for Obstructive Sleep Apnea Syndrome

In order to perform remote-attended CPAP titration at home, Dellaca et al. [15] recorded 20 severe OSAS patients who were using CPAP for the first time. The CPAP device was coupled with a telemetric unit working via the General Packet Radio Service (GPRS) mobile phone network, in order to allow remote control of the CPAP parameters (flow, pressure, leaks) and CPAP pressure adaptation. One week later, patients underwent full in-lab PSG and another CPAP titration. The pressure level was similar in both settings: 9.15 at home vs. 9.2 cm H2O. Real-time remote CPAP titration is feasible and offers pressure-setting outcomes similar to in-hospital attended CPAP titration.

Telemonitoring of CPAP-Treated Obstructive Sleep Apnea Syndrome Patients

When treating OSAS patients with CPAP, the challenge is to obtain adequate adherence, defined as use during at least 4 h/night and for more than 70% of the nights [16]. Independently of this usually accepted cut-off, the CPAP effect grows with increased use. Weaver et al. [17] showed a linear relationship between CPAP use and subjective/objective sleepiness. Using the Functional Outcomes of Sleep Questionnaire, these authors showed a greater improvement in memory when CPAP was used more than 6 h/night in comparison with <2 h. Barbé et al. [18] demonstrated, in a series of 359 OSAS patients, that nightly use longer than 5.65 h achieved better blood pressure and sleepiness reduction. A recent randomized study from Bouloukaki et al. [19], in a cohort of 3100 CPAP-treated patients randomized to intensive versus standard interventions, also confirmed the positive effect of a greater CPAP use (6.9 vs.
5.2 h/night) on cardiovascular outcomes, indicating that a regular use of 5-6 h/night is required. It has also been demonstrated that early adherence (at one week or one month) is associated with better adherence at six months [20,21]. In a recent literature review, the long-term adherence rate was estimated at 66% [22]. Factors such as psychological barriers, social concerns, side effects, disease characteristics and first CPAP exposure can affect treatment acceptance and adherence [23], and as many as 5% to 50% of patients refuse treatment or reject it rapidly after initiation [24]. Improvements can be obtained through supportive, educational and behavioral therapy. A recent Cochrane Database Systematic Review [25], pooling data from 30 low-to-moderate quality studies, reported that supportive, educational and behavioral therapy increases adherence by 50, 35 and 104 min/night, respectively, and also results in a larger proportion of patients using CPAP for more than 4 h/night.

During the last decade, TM has been applied in order to improve adherence. In all the studies, CPAP devices were fitted with a wireless data transmitter to collect compliance and efficacy data. The way to deliver feedback/interventions to the patients in case of troubleshooting varies between studies. The randomized studies comparing TM follow-up versus standard care for CPAP patients are shown in Table 1. We can see that, despite the use of globally similar methods and a reasonably large number of patients, the results are disappointing. Half of the studies do not show any improvement in adherence with telemonitoring. We must emphasize that in two of these studies [28,29], despite the telemonitoring, the adherence remained very low, questioning the value of the usual care in these series.

A recent study has assessed the impact of giving patients direct access to their daily CPAP device parameters. Kuna et al. [34] randomized 138 recently diagnosed OSAS patients requiring CPAP to usual care, usual care with access to CPAP usage, or usual care with access to CPAP usage and a financial incentive. After three months, mean adherence was 4.8 and 5 h/night in the intervention groups vs. 3.8 h/night for usual care (p < 0.0001). Web access and direct daily feedback seem to act positively on adherence. Interestingly, patients frequently consulted their own data during the first week of treatment, but this then decreased rapidly. More CPAP data consultation was associated with better CPAP adherence.

Telemonitoring for CPAP-treated OSAS patients is currently widely applied. Even if the impact on adherence is limited, it offers the possibility for healthcare providers to detect "problematic" patients and to react accordingly. Moreover, multimedia approaches offer other advantages, as they can help save nursing time [31,32], which could allow more patients to be managed with the same manpower.

Discussion

Sleep medicine has always relied heavily on technology, and telemedicine now offers more possibilities. Already in 2013, a systematic review focused on teleneurology stressed, through the very pragmatic Functionality, Application, Technology, Evaluative phase (F.A.T.E.) scoring system, the emergence of a small but solid literature regarding sleep disorders [35]. The present review has demonstrated that numerous telemedicine options are available to enhance ambulatory care, healthcare accessibility and remote therapy monitoring for OSAS patients. However, some points of care are less developed than others.
Telediagnostics for OSAS is not yet widely implemented. This can be explained by the fact that, even if telediagnostics is efficacious in offering better quality, more comfort and enhanced accessibility to sleep tests, its widespread use is slowed down by the costs and the complexity of the technical aspects. It also requires a change to the current model of care delivery, as it will become home- and patient-centered rather than hospital-centered. Such large changes are going to take time to be implemented.

Teleconsultation seems to be easier to put into practice, since it does not require changes in the model of care, just a good teleconference platform. This method is associated with numerous advantages for patients, including the removal of the need to travel to and from the healthcare center. Teleconsultation is a part of sleep-integrated models of care, and some centers in the United States have longstanding experience in teleconsultation-guided care [36,37]. Baig et al. [36] demonstrated the long-term effectiveness of a five-year TM program: the delay to obtain CPAP was reduced from more than two months to less than one week.

The concept of remote-attended CPAP titration at home is very attractive, but questions remain. As home-APAP titration is nowadays performed with good results [38-41], one can wonder if it is really necessary to obtain remote real-time control of titration. With unattended home-APAP titration, the treatment can quickly be adjusted following analysis of the downloaded data from the CPAP device. Currently, patients have to visit the sleep lab shortly after home-APAP titration and, therefore, a remote-attended titration strategy would simplify this. The costs of both strategies should be assessed in future studies. The problem of performing home-APAP titration is more related to the contra-indications than to the technical aspects. This pathway is restricted to patients without comorbidities (neuropsychiatric, cardiovascular, respiratory comorbidities and comorbid sleep disorders) [42] and with a body mass index (BMI) below 40. In our local experience, the proportion of patients who can benefit from home-APAP titration is about 30%. We also know from previous studies that even in well-selected patients, APAP titration failure occurs in 6%-15% of cases [38-41]. Home-APAP titration is very interesting for selected patients, who will benefit from treatment in more familiar surroundings.

Telemonitoring for CPAP-treated OSAS patients is clearly the most popular of the TM tools in sleep medicine. The transmission system for CPAP data is universally available in recent models of machines. Telemonitoring allows better control of therapy. The sleep lab team will quickly be able to distinguish problematic patients requiring more support, education and time investment. Telemonitoring will also avoid unnecessary nurse/medical visits for correctly treated patients. The impact on adherence is uncertain, but recent data highlighting nurse time savings show another added value of TM monitoring [31,32].

The benefits of TM in OSAS patients have been demonstrated through several studies, for different stages of care. Telemonitoring is widely implemented and helps both clinicians and patients to not only monitor, but also accurately and rapidly adjust, the CPAP therapy. Teleconsultation use is also likely to grow in the future. These two aspects of OSAS care do not require large changes in care programs or strategies.
In contrast, remote continuous attendance of sleep tests and home-APAP titration require more technological and human resources, together with a change in patient care management. In my opinion, as hospital strategies turn towards offering more ambulatory care [43], TM is going to evolve, and it is likely that there will be an increasing development of tools and activities in all areas of sleep medicine, including remote continuous attendance of sleep tests and home-APAP titration.

Despite these positive aspects, there are some limitations related to TM. Research findings have not been able to show an improvement of CPAP adherence with telemonitoring, although there were marked savings in nurse time in two of the studies [31,32]. In future studies, other outcomes of telemonitoring should be assessed, such as cost savings, cost-effectiveness, long-term clinical control of comorbidities, etc. Secondly, TM is expensive, and this is related to the complex technology required to implement telehealth. Costs could be decreased by wider use of TM in the future. There are also still privacy concerns related to TM use. Privacy protection and security of medical data transmission are two key points to be strictly regulated and controlled to avoid ethical problems. Work is in progress, as regulations for TM deployment in European healthcare were published last year [44]. In the United States, efforts have also been made, in many states, to regulate, but also to bill, telehealth [45,46]. Finally, few research teams report the technical problems related to the use of TM. Medicine and automation are two different worlds that do not usually coincide. How do you perform a teleconsultation if you are unable to log in to the platform, if you experience a bug with your webcam, or if the CPAP data are not available on the secured platform? We know that informatics tools are excellent when working correctly, but real-life experience demonstrates the need for an efficient helpdesk in order for the system to work proficiently. Therefore, it is apparent that TM will never be suitable for all patients.

Conclusions

There is growing evidence to support the implementation of TM in OSAS patients. The most studied tools include telemonitoring in CPAP-treated patients and teleconsultation. The impact of telemonitoring on short-term adherence is uncertain, but savings in nursing time have been demonstrated. There are still some barriers to the implementation of remote attendance for sleep tests and home-APAP titration, but this is likely to change with the current trend to offer more outpatient care. Since TM will shape the clinical landscape of tomorrow, clinicians will have to adapt their practice to face technological progress whilst taking into account the limitations of these techniques.
Spherical models of star clusters with potential escapers
E-mail: i.claydon@surrey.ac.uk (IC); m.gieles@surrey.ac.uk (MG)

An increasing number of observations of the outer regions of globular clusters (GCs) have shown a flattening of the velocity dispersion profile and an extended surface density profile. Formation scenarios of GCs can lead to different explanations of these peculiarities, therefore the dynamics of stars in the outskirts of GCs are an important tool in tracing back the evolutionary history and formation of star clusters. One possible explanation for these features is that GCs are embedded in dark matter halos. Alternatively, these features are the result of a population of energetically unbound stars that can be spatially trapped within the cluster, known as potential escapers (PEs). We present a prescription for the contribution of these energetically unbound members to a family of self-consistent, distribution function-based models, which, for brevity, we call the Spherical Potential Escapers Stitched (SPES) models. We show that, when fitting to mock data of bound and unbound stars from an N-body model of a tidally-limited star cluster, the SPES models correctly reproduce the density and velocity dispersion profiles up to the Jacobi radius, and they are able to recover the value of the Jacobi radius itself to within 20%. We also provide a comparison to the number density and velocity dispersion profiles of the Galactic cluster 47 Tucanae. Such a case offers a proof of concept that an appropriate modelling of PEs is essential to accurately interpret Gaia data in the outskirts of GCs, and, in turn, to formulate meaningful present-day constraints for GC formation scenarios in the early universe.

Globular clusters as quasi-isothermal systems

Globular clusters (GCs) are ancient stellar systems orbiting around the centre of mass of their host galaxies. Their evolution is the result of two-body relaxation, stellar evolution, binary star evolution and the interaction with the galactic tidal field (e.g. Meylan & Heggie 1997). Despite the complex interplay of these processes, their present day properties are well captured by relatively simple dynamical models (e.g. Gunn & Griffin 1979). As progressively more accurate observational data unveils the complexities of GCs' structural and kinematic properties, advances to these simple models have been required to accurately describe them. Understanding the physical processes that generate these complexities may hold the key to understanding the formation process and evolution of GCs.

Dynamical models of GCs are usually of two types. First, evolutionary models, e.g. numerical simulations based on direct N-body (Nitadori & Aarseth 2012; Wang et al. 2015) and Monte Carlo approaches (Freitag & Benz 2001; Giersz 2006), include many of the complex aspects of GC evolution. These take into account, among other physical ingredients, the collisional nature of the systems, stellar evolution and the perturbations induced by the galactic environment, which provide a realistic description of these systems. However, million-particle N-body models tailored to describe the observational properties of individual clusters, although finally achievable (Heggie 2014; Wang et al. 2016), still require a significant investment of computational time. An alternative modeling approach is to use equilibrium models which describe the properties of clusters at a given time in their evolution.
An example of these types of models are those defined by a distribution function (DF), describing the density of points in phase space. These models are faster to solve than evolutionary models, and provide a simple but physically justified description of the bulk internal properties of GCs. We refer to Hénault-Brunet et al. (2019) for a comparison of the performance of several equilibrium models (e.g., DF-based or moments-based) in the interpretation of mock surface brightness, radial velocity and proper motion profiles derived from a reference N-body model of the cluster M4 (Heggie 2014).

The most popular class of DF-based models of GCs are the so-called 'lowered isothermal' models, which are approximately isothermal in the central regions, but have a finite escape velocity to mimic the effect of the energy truncation induced by the galactic tidal field. Anisotropy in the velocity distribution can be found in GCs as a consequence of their conditions at formation (Vesperini et al. 2014; Breen et al. 2017), or as a product of their evolution (Oh & Lin 1992; Baumgardt & Makino 2003), and recent numerical simulations of star clusters showed that anisotropy evolves during the lifetime of GCs, depending also on the initial conditions, including how compact the cluster is (Sollima et al. 2015; Tiongco et al. 2016b). For these reasons, including the presence of anisotropy in the models (e.g., in the way proposed by Eddington 1915; Michie 1963) can be important to accurately reproduce evolutionary effects (e.g., see Zocchi et al. 2016) and, most crucially, observations (e.g., see Anderson & van der Marel 2010; Watkins et al. 2015). The effects of mass segregation can be taken into account by incorporating several components in the models (Da Costa & Freeman 1976; Gunn & Griffin 1979), to describe the dynamics of stars with different masses. The possibility to have radially dependent mass-to-light ratios and anisotropy has proven important in the discussion on intermediate-mass black holes in GCs (Illingworth & King 1977; Zocchi et al. 2017; Gieles et al. 2018; Zocchi et al. 2019) and dark remnants (Sollima et al. 2015, 2016; Peuten et al. 2017; Zocchi et al. 2019).

Recently, Gieles & Zocchi (2015) developed the limepy family of models, which are isothermal at low energies and polytropic near the truncation energy. The truncation of the models is controlled by the parameter g, allowing the truncation prescription to vary smoothly between the ones proposed by the Woolley (1954), King (1966) and (non-rotating) Wilson (1975) models. The limepy models include a prescription for radial velocity anisotropy (for a test against N-body models, see Zocchi et al. 2016), and the possibility to consider multiple mass components, which accurately reproduce the phase-space distribution of multimass N-body models (Peuten et al. 2017). Additionally, the inclusion of differential rotation (e.g., by means of a prescription equivalent to the one adopted by Prendergast & Tomer 1970) is straightforward (Zocchi & Varri, in prep.), but comes with a higher computational cost because of the loss of spherical symmetry.

Old and new observables and their possible dynamical interpretation

Despite the developments summarised above, there are complexities in the observational data that cannot be reproduced by existing models. These include a flattening of the velocity dispersion profile near the Jacobi radius r_J (Drukier et al. 1998; Scarpa et al. 2007; Lane et al. 2009), extended haloes (Côté et al. 2002; Olszewski et al.
2009; Carballo-Bello et al. 2012; Kuzma et al. 2016, 2018), and high velocity stars (Meylan et al. 1991; Lützgendorf et al. 2012; Kamann et al. 2014). Traditional expectations of Newtonian dynamics in the outskirts of GCs would suggest a decreasing velocity dispersion profile with increasing radius. Earlier and more recent empirical evidence suggests that, in some Galactic clusters, the velocity dispersion may be elevated and the surface density may be raised compared to this expectation (Drukier et al. 1998; Lane et al. 2011; Carballo-Bello et al. 2018), which has led some to propose the inclusion of additional physics beyond Newtonian predictions. These explanations include modified theories of gravity (Hernandez & Jiménez 2012), where once a star passes below a threshold value in acceleration (and the cluster's orbital velocity around the galaxy is also below the same threshold) the star can enter a modified dynamics regime (Milgrom 1983), which can lead to a flat velocity dispersion at large distances from the cluster centre.

Alternatively, some formation scenarios suggest that GCs could form within their own dark matter minihaloes, similar to dwarf galaxies (Peebles 1984; Mashchenko & Sills 2005; Trenti et al. 2015). If still present, this would elevate the velocity dispersion (e.g., see Ibata et al. 2013; Peñarrubia et al. 2017). Other formation theories, where GCs formed in gas-rich discs and major mergers of galaxies, do not require the presence of a dark matter halo (Kravtsov & Gnedin 2005). In this scenario, the peculiarities could be explained by the GCs still being in the debris of a disrupted dwarf galaxy after a merger (Carballo-Bello et al. 2018).

Finally, the tidal field of the host galaxy introduces a spatial condition for escape in addition to a critical energy for escape. This leads to the presence of a population of stars with an energy above the critical energy for escape but still spatially bound to the cluster, within r_J. The effects of these so-called PEs were first investigated with N-body models after it was found that the dissolution time of simulations with large particle numbers was shorter than in scaled-up simulations with smaller particle numbers (Fukushige & Heggie 2000). The expectation was that the dissolution time of clusters scales linearly with the half-mass relaxation time, t_rh, because this is the time-scale for stars to be scattered above the critical energy for escape, E_crit. However, a dependency of t_rh^(3/4) for the dissolution time was found in N-body models of star clusters evolving in steady tidal fields (Baumgardt 2001). This deviation from a linear dependence on t_rh can be understood from the additional time-scale of the spatial criterion for escape through one of the Lagrangian points (Baumgardt 2001), which is dependent on the cluster mass. This means PEs can persist within the cluster for a long time before escaping, and in ideal circumstances some can even remain inside a cluster indefinitely (Hénon 1969).

Figure 1. Jacobi energy (E_J) normalised to the critical energy, as a function of the position of stars on the x-axis normalised to r_J, from an N-body model from C17 (ss3, as described in Section 2). Magenta points are bound stars (E_J < E_crit), green points are PEs (E_J > E_crit and r < r_J) and red points are unbound stars (r > r_J). The potential of the King model (black) and the King model plus tides (yellow) are shown.
The shaded cyan region shows the range of PEs which are included in the King model fit, but would no longer be included if the effects of the galactic potential are introduced.

PEs dominate in the outer regions of clusters, and around a radius of half of the Jacobi radius (0.5 r_J) roughly 50% of the stars are PEs (Küpper et al. 2010). Claydon et al. (2017, from here on C17) showed that the total amount of PEs depends on the assumed shape of the stellar mass function and the galactic potential. Because their energy is larger than that of the other stars in the cluster, PEs contribute to increasing the velocity dispersion profile and to extending the surface density profile beyond the extent of bound stars. The effects of PEs can only account for the behaviour of the surface density profiles inside of r_J. However, by including these effects in an equilibrium model such as the one described in this study, the resulting estimate of the Jacobi radius r_J is much more accurate than the one based on simpler, 'lowered isothermal' models (see de Boer et al. 2019). This makes them a possible explanation for the peculiarities in the observational data (Küpper et al. 2010) and for the amount of deviation from the Newtonian expectation for the velocity dispersion. In addition, their spatial properties can be used to infer properties of the dark halo of their host galaxy. Therefore, observationally determining whether these peculiarities are due to PEs or dark matter can constrain the formation scenario and the evolutionary processes that shape GC dynamics.

ESA's Gaia mission is providing a revolutionary set of data, with positions and proper motions for a billion stars in the Galaxy. This includes the previously unprobed population of stars in the outskirts of GCs. It is therefore paramount to understand the effect that PEs can have on the observations, and to propose a model that accounts for their behaviour.

Adding unbound stars to bound models

Daniel, Heggie & Varri (2017) developed a family of DF-based models of GCs that include the effects of PEs, which are described in terms of approximate integrals of motion, as inspired by a family of periodic orbits of the circular Hill problem proposed by Hénon (1969). Unfortunately, such an approach does not easily allow the formulation of a simple analytical expression for the DF, which, in turn, makes the derivation of a fully self-consistent solution of the relevant Poisson equation quite cumbersome. The family of models presented in this paper partly addresses these two limitations, although at the cost of introducing substantial simplifications in the phase space description of the PEs.

Given the importance of PEs in N-body models, one may wonder why traditional models (without PEs), such as King's model, offer a satisfactory representation of the observational properties of many GCs. This is partially because most King model fits are done to data that do not extend all the way to r_J (Trager et al. 1995). Another reason is that the truncation energy of such models does not necessarily correspond to the critical energy of the systems they describe. This allows the existing models to account for the presence of some PEs inside of the model, albeit with incorrect underlying physics. This is because these models are isolated, and the effect of the tides is mimicked by 'lowering' the energy by a truncation energy φ_t = −GM_c/r_t, with G the gravitational constant, M_c the cluster mass and r_t the truncation radius.
This energy is larger than what the critical energy for escape would be if the effects of a galactic tidal potential are included (defined here as E_crit). For a cluster on a circular orbit, in a reference frame corotating with the orbit, a star has a Jacobi energy of

E_J = (1/2) v² + φ_c(r) + φ_g − (1/2) Ω² (X² + Y²),   (1)

where the terms on the right-hand side are the kinetic energy, the potential energy due to the cluster, the contribution from the galactic tidal potential, and the centrifugal term (Fukushige & Heggie 2000); here φ_c is the cluster potential, φ_g the galactic potential evaluated at the position of the star, Ω the angular velocity of the cluster orbit, and (X, Y) the coordinates of the star in the orbital plane with respect to the galactic centre. The critical energy of the system (Heggie & Hut 2003) is

E_crit = −(3/2) GM_c/r_J.   (2)

This means that an isolated model, when fit to data from N-body simulations of GCs (orbiting in a corotating reference frame around a time-independent galactic potential), will describe some PEs, namely those with E_crit ≲ E_J ≲ φ_t (i.e. −1.5 ≲ E_J/|φ_t| ≲ −1), as bound members of the system. We illustrate this by comparing a King model to the energy of stars in a tidally limited N-body model. We fit a King model to a snapshot from an N-body model from C17, on a circular orbit around a singular isothermal galactic potential, and compare the potential from the model at any radius, φ(r), to the E_J of each star. We define bound stars as stars with E_J < E_crit, PEs as stars with r < r_J and E_J > E_crit, and unbound stars as stars with r > r_J. Figure 1 shows E_J as a function of the position of each star along the x-axis, together with the potential of the fitted King model, i.e. the energy that a star at rest would have at that radius. The potential beyond r_t is approximated as that of a point mass, φ(x) = −GM_c/x (dashed line). We also plot φ(x) + φ_T, where we have added the tidal and centrifugal contribution φ_T = −1.5 Ω² x² (yellow line, see equation 1). PEs with E_crit < E_J < φ_t (in the shaded cyan region) are included in the King model (∼73% of PEs), and the model could include more PEs by increasing r_t. Therefore the model is able to reproduce the bulk properties of the data, but it does not have the correct underlying physics to describe the dynamics.

The goal of this study is to develop a convenient, spherically symmetric family of models that include an approximate description of the phase space contribution of PEs within the Jacobi radius of GCs. Such a family is defined by a distribution function formulated as a simple analytical expression, which readily allows the derivation of a self-consistent solution of the corresponding Poisson equation. In Section 2 we describe the models and explore their properties. In Sections 3 and 4 we compare the models to N-body simulations and observational data, respectively. In Section 5 we discuss the strengths and limitations of the models and delineate how the approximations taken may be improved upon in future versions of the models.

Distribution function

One of the earliest lowered isothermal models used a purely isothermal DF, f(Ê) = A exp(Ê) for Ê > 0 and f(Ê) = 0 elsewhere (Woolley 1954), where Ê = (φ_t − E)/s², E = v²/2 + φ(r) is the specific energy, and stars with negative Ê are assumed to instantaneously escape. This family of models is characterized by two physical scales: the normalisation of the DF (i.e., the free constant A), which sets the mass of the system, and a velocity scale s. This function is discontinuous at Ê = 0 (i.e. E = φ_t). To ensure a vanishing phase space density and a continuous DF at Ê = 0, a constant can be subtracted from the exponential, such that f(Ê) = A[exp(Ê) − 1] for Ê > 0 (King 1966)¹. The resulting density profiles in projection match the surface brightness profiles of GCs exceptionally well (Trager et al. 1995), which is why this approach has been the foundation of many further developments. The models proposed by Wilson (1975), as taken in the non-rotating and isotropic limit, include an additional energy term, f(Ê) = A[exp(Ê) − 1 − Ê].
This makes the derivative of f(Ê) continuous at Ê = 0, and leads to a more extended density distribution, which has been shown to better fit the observed density profiles of some GCs (see the discussion in McLaughlin & van der Marel 2005). Davoust (1977) and Hunter (1977) showed that the Woolley, King and Wilson models are special members of a family of models with different orders of truncation of the isothermal DF; more recently, an updated formulation of the DF by Gomez-Leyton & Velazquez (2014) also allows the construction of solutions in between these models. The DF is defined as f(Ê) = A exp(Ê) γ(g, Ê)/Γ(g), where γ(a, x) and Γ(x) are the lower incomplete gamma function and the gamma function, respectively. When considering g = 0, 1 and 2, the Woolley, King and Wilson models are obtained, respectively. This was further developed in the limepy family of models by Gieles & Zocchi (2015), who added radial orbit anisotropy and multiple mass components and provided a python implementation².

The construction of DF-based models which include a contribution from a population of PEs may be conducted by adopting the following rationale. First, we rely on the simplifying assumptions of equilibrium and spherical symmetry. Second, concerning the representation of the phase space behaviour of the bound population, we choose to preserve some consistency with the class of lowered isothermal models described above, as they offer an empirically satisfactory description of the dynamics of the central regions of many Galactic globular clusters. We recognise that the assumption of dynamical equilibrium introduces a significant degree of idealisation in our description of the problem. Nonetheless, we emphasise that, as shown in Baumgardt (2001), the distribution of PEs is relatively constant with time, with the predominant evolution being the width of the energy distribution above E_crit. Therefore, we argue that a static model should be able to match the instantaneous behaviour of PEs at a given time, provided that a parameter setting the energy distribution width is included.

The rationale adopted above allows us to provide a zeroth-order description of the effects induced by the presence of a population of unbound stars. To achieve this we develop a spherically symmetric distribution function that only depends on energy, which has the additional advantage of preserving a certain mathematical simplicity and rapidity of numerical calculation. In the future, we intend to address the limitation of ignoring the anisotropy in the velocity dispersion and the deviations from spherical symmetry introduced by the effects of a galactic potential (as discussed also in Section 5), by considering the construction of DF-based models which take into account the non-spherical nature of the external tidal field and other dynamical ingredients.

Unfortunately, the limepy models are not suited to adding an unbound population. Figure 2 shows the DF as a function of the energy, for several isotropic limepy models and different values of g (dashed lines). For values of g > 0, the DF vanishes at Ê = 0. This means a discontinuity would be introduced when including the effects of PEs.

¹ The King (1966) model is in fact an approximate steady-state solution of the Fokker-Planck equation, when considering two-body relaxation and escape. A concise explanation of the physical justification of the King model can be found in King (2008).
² limepy is available from https://github.com/mgieles/limepy
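The truncated DF family just described, and the discontinuity at Ê = 0 that motivates the next step, can be made explicit in a few lines of code. The sketch below is our own illustration (the formula and the Woolley/King/Wilson correspondence are taken from the text above; the use of scipy is an implementation choice, where gammainc is the regularised lower incomplete gamma function γ(a, x)/Γ(a)).

import numpy as np
from scipy.special import gammainc   # regularised lower incomplete gamma function

def f_truncated(E_hat, g, A=1.0):
    E_hat = np.asarray(E_hat, dtype=float)
    if g == 0:
        core = np.exp(E_hat)                       # Woolley: pure exponential
    else:
        core = np.exp(E_hat) * gammainc(g, E_hat)  # King (g = 1), Wilson (g = 2), ...
    return np.where(E_hat > 0.0, A * core, 0.0)    # stars with Ê <= 0 are absent

E = np.array([1e-6, 0.5, 1.0, 2.0])
print(f_truncated(E, g=1))                   # numerically equal to exp(Ê) - 1 (King)
print(np.exp(E) - 1.0)
print(f_truncated(np.array([1e-6]), g=2))    # ~0: for g > 0 the DF vanishes at Ê = 0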
To solve this, in our definition of the DF we introduce the constants B and C, which control the value of the DF and its derivative at Ê = 0. This is an approach similar to the one used for the Woolley (B = C = 0), King (B = 1, C = 0) and Wilson models (B = C = 1), but in this case we leave the values of these parameters free, to have a non-zero density at Ê = 0 for B < 1. This approach allows us to 'stitch' the DF for PEs to the DF for bound stars, with the 'stitching' taking place at Ê = 0. The choice of the functional form adopted for the DF of the PEs is motivated by numerical results from extensive direct N-body investigations. It has been shown that the evolution of the number of stars, N(Ê), for Ê < 0 is well described by a modified Bessel function, when the effects of dynamical friction are not included (Baumgardt 2001). However, it is not possible to derive an analytic expression for f(Ê) from this N(Ê). Therefore we approximate it by an isothermal model in the regime Ê < 0. A more rigorous approach may be taken in future versions of the models, but the approximation of an exponential DF for the PEs is adequate for the fits shown in Sections 3 and 4. The DF of the spes (Spherical Potential Escapers Stitched) family of models is

f(Ê) = A [exp(Ê) − B − C Ê]     for Ê > 0,
f(Ê) = A (1 − B) exp(Ê/η²)      for Ê ≤ 0,   (3)

where η² = s_pe²/s², with s_pe the 1D velocity dispersion of the PEs. Once the stars become energetically unbound, their escape time t_e ∝ Ê⁻² (Fukushige & Heggie 2000). This means that t_e(Ê = 0) = ∞ and that stars only slightly above the critical energy have very large escape times, so the effects of the escape process are negligible. This suggests that the DF should therefore be continuous across Ê = 0 and also continuous in the derivative, to ensure that the behaviour of stars slightly above and slightly below Ê = 0 is similar. Enforcing continuity of further derivatives would overconstrain the model, as the DF only has terms to second order. Additionally, the simple isothermal model assumed for the DF for Ê < 0 is a reasonable approximation for the zeroth and first derivatives, but it is likely inaccurate for further derivatives. If we then demand smoothness, we find

C = 1 − (1 − B)/η².   (4)

A representative example of the behaviour of the DF, for the case of a model with B = 0.9, η = 0.3, is illustrated in Fig. 2. In addition to the usual degree of freedom which controls the central concentration (φ̂_0, e.g., see King 1966), the PE-specific parameters of the model are η and B, which define C via equation (4). Moreover, as in the case of the conventional 'lowered isothermal' models, two physical scales can be set by means of the free constants s and A (e.g., the velocity and mass scale of the system). Acceptable values of the η parameter are between 0 and 1, to ensure that the velocity dispersion at the tidal radius of the model assumes values between 0 and (approximately) the central value of the velocity dispersion. Also, for the other parameter, we impose 0 ≤ B ≤ 1, so that the density at Ê = 0 can vary between a non-zero maximum (for B = 0) and zero (for B = 1). It follows that −∞ < C ≤ 1, and because the derivative of the DF at Ê = 0 is proportional to 1 − C, its range is 0 ≤ f′(0) < ∞. We note that the (non-rotating, isotropic) Wilson model is found for B = 1, regardless of the value of η, and the King model is recovered for B = 1 and C = 0. As the model is no longer 'truncated' in the same way as previous models, we refer to the critical radius r_crit as the radius where the specific potential reaches the critical value φ = φ_t; this is therefore the maximum radius of bound stars.
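A minimal numerical check of the stitched DF in equation (3), with C fixed by the smoothness condition (4), can be written in a few lines. The sketch below is illustrative only (the published limepy/spes implementation should be preferred for actual modelling); it verifies that the value and the first derivative of f are continuous across Ê = 0 for B = 0.9 and η = 0.3, the example shown in Fig. 2.

import numpy as np

def spes_df(E_hat, B=0.9, eta=0.3, A=1.0):
    C = 1.0 - (1.0 - B) / eta**2                         # smoothness condition (4)
    E_hat = np.asarray(E_hat, dtype=float)
    bound = A * (np.exp(E_hat) - B - C * E_hat)          # bound stars, Ê > 0
    pes = A * (1.0 - B) * np.exp(E_hat / eta**2)         # potential escapers, Ê <= 0
    return np.where(E_hat > 0.0, bound, pes)

eps = 1e-6
print(spes_df(eps), spes_df(-eps))                       # both ~ A(1 - B): continuity
slope_above = (spes_df(2 * eps) - spes_df(eps)) / eps    # ~ A(1 - C)
slope_below = (spes_df(-eps) - spes_df(-2 * eps)) / eps  # ~ A(1 - B)/eta**2
print(slope_above, slope_below)                          # equal by construction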
This parameter is comparable to rt of limepy models and rJ of data and N -body models. By analysing the outcome of numerical simulations, Claydon et al. (2017) showed that during the lifetime of the cluster the distribution of PEs within the Jacobi radius maintains the same shape, with only the width of the energy distribution above Ecrit changing significantly. This suggests that the spes models could be able to reproduce the instantaneous properties of PEs thanks to the parameter η, which is related to the width of the energy distribution. However, care must be taken when using this model as initial conditions for evolutionary modeling. By introducing this unbound contribution from the DF the model will no longer be in virial equilibrium, and the model is unstable unless the effects of a specific galactic potential are included. The only way to include a galactic potential is by including an impermeable boundary at rcrit and the model will then be in equilibrium when considering the total kinetic energy, K, the total potential energy, W , and a pressure term 3 : ptV = s 2 pe ρ(rcrit)(4/3)πr 3 crit , such that the condition for virial equilibrium is 2K − W − 3ptV = 0 (Lynden-Bell & Wood 1968). Properties of the models To compute the models, we define the dimensionless quantitiesφ = (φt − φ(r))/s 2 ,r = r/rs, andρ = ρ/ρ0 (see also King 1966;, where ρ0 is the central density and r 2 s = 9s 2 /(4πGρ0) is the (square of the) scale radius, or King radius. The radius at whichφ = 0 is rcrit. The Poisson equation for the dimensionless potentialφ can be written as: which can be solved by assuming the following boundary conditions atr = 0:φ =φ0, dφ/dr = 0, whereφ0 is a positive constant defining the dimensionless parameter which sets the central concentration of a model (this parameter is called W0 in King 1966). The density and pressure as a function ofφ can be found from and Here the integration over all velocities is split in a regime 0 v vesc for the bound stars and vesc < v < ∞ for the PEs. Here vesc is the escape velocity required to move from bound to potential escaper regime, i.e. vesc = 2(φt − φ(r)). We introduced dimensionless density and pressure integrals which are given by Here we used the previously introduced function Eγ(a, x) = exp(x)γ(a, x)/Γ(a) (see and introduce EΓ(a, x) = exp(x)Γ(a, x)/Γ(a), where Γ(a, x) is the upper incomplete Gamma function. The normalised density is found by dividing by the central density I ρ 0 = I ρ (φ0), and the velocity dispersion is obtained aŝ whereσ = σ/s 2 . If η is very small, this gives rise to a very large argument of the exponential resulting in numerical problems. Therefore, for values of x > 700 we replace EΓ(a, x) with its limiting behaviour for large x: x a−1 /Γ(a). We can also obtain surface density profiles Σ(R) and line-of-sight velocity dispersion profiles, σLOS(R), wherê r 2 =R 2 +Ẑ 2 andẐ is along the line-of-sight: and where we limit the integral alongẐ to a multiple ofrcrit by defining the fitting parameter D, which we discuss in S3.1. Limits and mass In the regime,r →rcrit,φ → 0, the density and velocity integrals are and lim φ→0 We can also solve the model beyondrcrit, whereφ 0, to compare the model to data including unbound stars beyond rJ. In this regime there is no contribution from f (Ê > 0) and the integration boundary vesc = 0, therefore the density and velocity dispersion simplify to: and such that the mean-square velocity is a constant:σ 2 (r > rcrit) = 3η 2 . 
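A minimal numerical sketch of the dimensionless Poisson problem is given below. It assumes the King-type scaling implied by r_s² = 9s²/(4πGρ0), i.e. the dimensionless Laplacian of φ̂ equals −9ρ̂, with the stated boundary conditions φ̂(0) = φ̂0 and dφ̂/dr̂(0) = 0, and it stops at the radius where φ̂ = 0, which defines r̂_crit. The density law rho_hat used here is a generic lowered-isothermal placeholder, not the actual spes integral I_ρ.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rho_hat(phi_hat):
        # Placeholder for the normalised density integral I_rho(phi_hat)/I_rho(phi_hat0);
        # a generic lowered-isothermal form is used purely for illustration.
        return np.where(phi_hat > 0.0, np.exp(phi_hat) - 1.0 - phi_hat, 0.0)

    def rhs(r, y, rho0):
        phi, dphi = y
        lap = -9.0 * rho_hat(phi) / rho0      # dimensionless Poisson equation
        ddphi = lap / 3.0 if r == 0.0 else lap - (2.0 / r) * dphi
        return [dphi, ddphi]

    def solve_model(phi0=5.0, r_max=1.0e3):
        reach_zero = lambda r, y, rho0: y[0]  # phi_hat = 0 defines r_crit
        reach_zero.terminal = True
        reach_zero.direction = -1
        sol = solve_ivp(rhs, (1.0e-8, r_max), [phi0, 0.0], args=(rho_hat(phi0),),
                        events=reach_zero, rtol=1e-8, atol=1e-10, dense_output=True)
        r_crit = sol.t_events[0][0] if sol.t_events[0].size else np.nan
        return sol, r_crit

    sol, r_crit = solve_model(phi0=5.0)
    print("dimensionless critical radius:", r_crit)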
Exploring the parameter space We solve Poisson's equation by splitting it into two first order ordinary differential equations, and by using a Runge-Kutta integrator with an adaptive step-size, dopri5 (Hairer et al. 1993). We consider different values of the parameterŝ φ0, η and B to investigate the behaviour of the model. All figures and analysis in this section are presented in the dimensionless model units. The parameter B controls the phase space density at E = 0, which, in turn, controls the truncation of the model. When increasing B (while keeping the other parameters fixed),rcrit and the mass increase (Fig. 3). The parameter η sets the ratio of the value of the velocity dispersion at rcrit to the velocity scale s, which for high values ofφ0 approaches the (one-dimensional) central velocity dispersion. However, because C also affects the truncation of the model and is a function of B and η, changing η will also varyrcrit and changing B can also changeMpe. For a fixed B, the PE fraction increases with increasing η. This is because increasing η makes the DF wider, thus increasing the mass in PEs. The dependence of the PE fraction on B is not as trivial. Because the phase space density at the critical energy is proportional to 1 − B, we expect the PE fraction to correlate with 1 − B. This is true for B 1, Figure 5. Values of the ratio between the half-mass and truncation radiusr hm /r crit against B, forφ 0 = 7 (dashed lines),φ 0 = 5 (solid lines) and η = 0.2 (magenta) and 0.4 (green). but for smaller B the two quantities anticorrelate (at fixed η), while for η 0, the fraction of PEs is approximately independent of B. The PE fraction depends also on C in the DF, which is determined by the demand for continuity and smoothness. To illustrate how the PE fraction depends on the model parameters, we show in Fig. 4 the fraction of PEs in models withφ0 = 5 (solid lines) andφ0 = 6 (dashed lines) and different combinations of B and η. Figure 5 shows the values of the ratio between the halfmass and truncation radiusr hm /rcrit against B, for different values of η andφ0. By inspecting the figure, it appears that this quantity is a monotonically decreasing function of B, with an additional dependence on η which is more significant for low values of B. FITTING TO N -BODY SIMULATIONS To test the performance of the spes models in describing globular cluster properties we fit the spes models to snapshots from N -body simulations of tidally limited star clusters. For comparison, we also fit all N -body models with limepy models. We consider the simulations presented in C17 which describe systems with N = 16384 equal-mass stars. The model clusters are evolved on a circular orbit around the centre of mass of their host galaxy, which is spherically symmetric and characterized by a power-law mass distribution Mg(< Rg) ∝ R λ g , where Rg is the galactocentric distance. We consider the cases where λ = 1, which correspond to singular isothermal sphere. The data from the simulations are analysied in a corotating reference frame, where the x-axis is along the direction linking the centre of the cluster and the centre of the galaxy and the y-axis is in the direction of the tangential component of the orbital angular velocity vector (Heggie & Hut 2003). The simulations were run using nbody6tt, which allows a functional input for the galactic potential (Aarseth 2003;Nitadori & Aarseth 2012;Renaud & Gieles 2015). 
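For reference, the Jacobi radius used as the comparison scale for these simulations follows from the standard result for a cluster on a circular orbit in a galaxy with M_g(<R_g) ∝ R_g^λ, namely r_J = R_g [M_c/((3 − λ) M_g)]^{1/3}, which for λ = 1 (singular isothermal sphere) reduces to r_J = R_g [M_c/(2 M_g)]^{1/3}. A one-line sketch follows, with purely illustrative numbers rather than values from the C17 runs.

    def jacobi_radius(M_c, M_g, R_g, lam=1.0):
        """Jacobi radius for a circular orbit in a power-law galactic mass profile
        M_g(<R_g) ~ R_g**lam; lam = 1 is the singular isothermal sphere."""
        return R_g * (M_c / ((3.0 - lam) * M_g)) ** (1.0 / 3.0)

    # Illustrative numbers only (not taken from the simulations described above):
    print(jacobi_radius(M_c=1.0, M_g=1.0e5, R_g=100.0, lam=1.0))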
The data from the simulations are in Hénon units (Hénon 1971), where G = 1, the initial mass of the clusters Mc0 = 1 and total energy of the cluster E0 = −1/4 . The analysis presented in this section is computed in these units. Fitting technique We calculate the velocity dispersion and density profiles by binning the data of four snapshots, corresponding to the moments during the lifetime of the simulations when the remaining mass is 0.8, 0.6, 0.4 and 0.2 of the initial mass. We compute the profiles by considering bins with equal numbers of stars and by taking into account all stars within the Jacobi radius, rJ, (ss1, ss2, ss3 and ss4) and for all stars within 2rJ (ss1.2rj, ss2.2rj, ss3.2rj and ss4.2rj). We fit the models to these density and velocity dispersion profiles by using a Markov Chain Monte Carlo technique, emcee (Foreman-Mackey et al. 2013), to explore the parameter space of the DF-based models. The best-fit values of the parameters are obtained by minimizing the associated χ-squared: where Oi are the data values, i are the errors on the data values, in this case the standard error from calculating the σ and ρ profiles from the N -body data, and Mi are the model values at the same radial position as the n data values. We calculate this both for the density and for the velocity dispersion. The parameters areφ0, B and η and two scale values to convert the model unitsM to Hénon units Mc, M scale = Mc/M , and r scale = r hm /r hm and we stop the model at the radius where the potentialφ = 0 which we Table 1. Properties of the best-fit models. For each model, indicated in the first column, we provide: the central potentialφ 0 , the model parameters g, η and B, the cluster mass Mc, the half-mass radius r hm , the Jacobi radius r J , the mass of bound stars M b , the mass of PEs Mpe, and the ratio of potential escaper mass to total cluster mass Fpe. Rows are the simulations (N -body), the best-fit limepy models, and the best-fit spes models to 3D data within r J , within 2r J and projected on the xy, yz and xz axes for each snapshot ss1, ss2, ss3 and ss4 from the C17 simulation. call the critical radius rcrit. When fitting to data beyond rJ, as the model is infinite this will elevate the surface density profile when projecting the model. Therefore we require a stopping radius further out than rcrit and we definê rstop = Drcritt and redefine r scale = r lb /rstop where r lb is the radius of the last bin of data. By fitting on D we can then allowrcrit to be any value less thanrstop.For each parameter, we determine the best-fit value as the median of the correspondent marginalised posterior probability distribution, and 1σ errors as the 16 and 84 per cent percentiles. Fitting to 3D profiles In this section we describe the results we obtained when fitting the models to the snapshots by considering 3D profiles. We conduct this test to assess the ability of the limepy model and of the spes model to reproduce the properties of the snapshots when having all the possible information, i.e. 6D data (all dimensions of the configuration space and velocity space) for the considered stars. Stars within rJ We first limit our analysis to stars within rJ of the N -body model. The first part of Table 1 shows the values of the best-fit parameters and the properties of the snapshots for comparison. The limepy model fits the data well for r 0.7rJ and it closely reproduces Mc and r hm , but it cannot account for the behaviour of the velocity dispersion profiles at radii towards and beyond rJ. 
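A minimal sketch of the fitting machinery described above: a χ² built from the binned density and velocity-dispersion profiles, turned into a log-probability and explored with emcee. The routine predict_profiles is a hypothetical stand-in for evaluating the model profiles at the data radii for a given parameter vector (φ̂0, B, η, mass scale, radius scale, D); it is not part of any published code.

    import numpy as np
    import emcee

    def chi2(obs, err, mod):
        return np.sum(((obs - mod) / err) ** 2)

    def log_prob(theta, r, rho_obs, rho_err, sig_obs, sig_err):
        """theta = (phi0, B, eta, M_scale, r_scale, D); flat priors inside simple bounds."""
        phi0, B, eta, M_scale, r_scale, D = theta
        if not (phi0 > 0 and 0.0 < B <= 1.0 and 0.0 < eta < 1.0 and
                M_scale > 0 and r_scale > 0 and D >= 1.0):
            return -np.inf
        # predict_profiles is a hypothetical stand-in for the model evaluation.
        rho_mod, sig_mod = predict_profiles(theta, r)
        return -0.5 * (chi2(rho_obs, rho_err, rho_mod) +
                       chi2(sig_obs, sig_err, sig_mod))

    # Usage sketch (p0: initial walker positions around a rough guess):
    # ndim, nwalkers = 6, 64
    # sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
    #                                 args=(r, rho_obs, rho_err, sig_obs, sig_err))
    # sampler.run_mcmc(p0, 5000)
    # chain = sampler.get_chain(discard=1000, flat=True)
    # medians and 16th/84th percentiles: np.percentile(chain, [16, 50, 84], axis=0)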
Moreover, the resulting best-fit value for rcrit overestimates rJ by a factor of ∼ 1.2 − 1.8: this is a common issue when fitting these models to this kind of data, as men- . Velocity dispersion profile from ss3 (magenta points) against r, normalised to r J . The black and green lines represent the best-fit limepy and spes model, respectively. The shaded grey and green region represent models that occupy a 1σ region around the maximum likelihood, as identified by emcee. The vertical green and black dashed line indicate the best-fit truncation radii of spes and limepy models, respectively. The spes model is able to closely match r J whereas limepy overestimates it. tioned in Section 1. The spes model reproduces the innermost part of the density and velocity dispersion profiles as well as the limepy model, but, in addition, it is also able to account for the flattening near rJ, and to reproduce the correct radial extension of the data. To provide an immediate comparison of spes models to limepy models, we show an example of the results obtained with this fitting procedure. Figure 6 shows the velocity dispersion profile of the snapshot ss3 represented as a function of the radius, normalised to rJ. The best-fit limepy model is shown as a black line, and the grey shaded area represents models that occupy a 1σ region around the maximum likelihood. The best-fit spes model is shown in green, with the green shaded region again denoting models within a 1σ region around the maximum likelihood. Stars within 2rJ We fit the models to all the stars contained within 2rJ from the cluster centre in the same snapshots (ss1.2rj, ss2.2rj, ss3.2rj and ss4.2rj). This test is useful to understand whether the spes models are still able to reproduce rJ when fit to data which includes stars beyond rJ, even though the model does not include the underlying physical behaviour of spatially unbound stars. The second and third rows of each part of Table 1 show the best-fit values of the parameters of limepy and spes models compared to this data. The density and velocity dispersion profiles for each snapshot (magenta points) and the best-fit limepy (grey region, black line) and spes (green region, green line) models are shown in Fig. 7. The limepy models accurately recover the Mc and r hm , however they overestimate rJ even more (factor of ∼ 1.5 − 2.3) than when fit to data within rJ. Moreover, limepy models are unable to match the velocity dispersion and density profiles beyond rJ. The spes models perform equally well as the limepy models in reproducing the quantities Mc and r hm of the snapshots. However spes models are able to provide a better fit to the density profile and velocity dispersion profiles even beyond rJ. The spes model is not able to account for an increase in the outermost 2 or 3 bins, which are due to the motion of the stars within the tidal tails. On average, the spes model is also able to reproduce rJ (dashed green lines), although it underestimates it ∼ 20% initially and becomes more accurate for more evolved clusters. Fraction of PEs The spes model reproduces the velocity dispersion profile and density profile of the considered snapshots more accurately than models which do not include the contribution of PEs. However, the spes model underestimates the fraction of mass in PEs within rcrit, Fpe. 
Table 1 shows that when the spes model is fit to the ss.2rj snapshots, it consistently finds a Fpe that is approximately three times lower than the actual value (Table 1; displayed in the N -body rows for each snapshot). By separating the density profile into the contribution from bound stars (ρ b ) and PEs (ρpe) for both the best-fit models and the data, we can see that a large fraction of PEs are actually accounted for by ρ b (Fig. 8), even when fit to data truncated at rJ (ss3), therefore under-predicting Fpe by describing many PEs as bound stars. This underprediction of Fpe can be attributed to two limitations of the current implementation of the models. First, the approximate expression we assumed for the DF of the PEs does produce a density profile which is, by design, consistent with the density profile of the PEs resulting from direct N-body simulations. Therefore, such a choice may not offer an ideal representation of the behaviour of PEs that are only slightly above the critical energy. Second, the spes models still have a φt that is larger than Ecrit of the stars in the snapshots, as discussed in Section 1. Therefore, if rcrit = rJ then PEs with −3GMc/2rJ EJ −GMc/rJ will be represented by the bound part of the model, and consequently Fpe will be underestimated, even though the total mass is correct. This means that, if the models were to correctly reproduce Fpe without including the galactic potential, it would overestimate rJ. We conclude that an approximate upward correction of Fpe of a factor of three should be applied to spes result to get an estimate of the actual Fpe. Fitting to projected profiles In order to test the impact of projection effects on the ability of the models to reproduce the properties of the profiles derived from the N -body simulations, we calculate the projections of both the SPES models and the N -body data along the line of sight to generate surface density profiles Σ(R) and line-of-sight velocity dispersion profiles, σLOS(R), where r 2 = R 2 + Z 2 and Z is along the line-of-sight. We consider different directions for the line of sight, and, in particular, we consider the principal axes of our corotating reference frame (see Section 3.2) to obtain the profiles in the (x, y), (x, z) and (y, z) planes. . Surface density profiles (top) and projected velocity dispersion profiles (bottom) for the snapshot ss3, projected onto the (x, y) (magenta points), (x, z) (green points) and (y, z) (red points) planes. The best-fit spes models are shown as solid lines, and the shaded regions correspond to models within 1σ of the maximum likelihood; the dashed vertical lines denote the r crit of the best-fit models for each projection axis. The colors of the models match the respective data. Figure 9 shows the surface density profile (top panel) and line-of-sight velocity dispersion profile (bottom panel) for the N -body simulation data (points) and best-fit models (shaded region) on each projection plane. The observed differences in the profiles are due to the fact that the Nbody model shows deviations from the spherical symmetry and the density drops more sharply in the (y, z) plane. Indeed, when looking along the x-axis, the Jacobi surface only extends up to (2/3)rJ along the y-axis and ∼ 0.6rJ along z (Renaud & Gieles 2015). The non-spherical density distribution of the N -body model and the corresponding projection effects also produce a larger velocity dispersion in the outer parts, because the bins outside the Jacobi surface are dominated by PEs. 
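The projection integrals used to build Σ(R) and σLOS(R) can be sketched numerically as below, with a toy Plummer-like profile standing in for the spes density and dispersion, and with the line-of-sight integral truncated at z_max = D·r_crit as in the fitting procedure. The numbers are illustrative only.

    import numpy as np
    from scipy.integrate import quad

    def surface_density(R, rho_3d, z_max):
        """Sigma(R) = 2 * integral_0^z_max of rho(sqrt(R^2 + Z^2)) dZ."""
        val, _ = quad(lambda Z: rho_3d(np.hypot(R, Z)), 0.0, z_max)
        return 2.0 * val

    def sigma_los(R, rho_3d, sig2_3d, z_max):
        """Density-weighted line-of-sight dispersion for an isotropic model."""
        num, _ = quad(lambda Z: rho_3d(np.hypot(R, Z)) * sig2_3d(np.hypot(R, Z)),
                      0.0, z_max)
        return np.sqrt(2.0 * num / surface_density(R, rho_3d, z_max))

    # Toy stand-in: Plummer sphere (G = M = a = 1), truncated at z_max = D * r_crit
    rho = lambda r: (1.0 + r**2) ** -2.5
    sig2 = lambda r: 1.0 / (6.0 * np.sqrt(1.0 + r**2))
    D, r_crit = 3.0, 10.0
    print(surface_density(1.0, rho, D * r_crit), sigma_los(1.0, rho, sig2, D * r_crit))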
As variation in the truncation of the density profiles for different projection angles is predominantly seen beyond rJ, the best-fit models therefore show little variation finding similar Mc and rcrit (Table 1). Also in this case, Mc and r hm are well reproduced and there is minimal variation in the rcrit and Fpe therefore the ability of the models to reproduce the global properties of the clusters is not severely affected by projection effects. Clusters on eccentric orbits To test how well our equilibrium models capture the effects induced by an external time-dependent tidal field in the distribution of PEs, we fit them to N -body data of a cluster on an eccentric orbit. Here we take a simulation from C17, with λ = 1 and eccentricity of the clusters orbit of = 0.5 where = (Rapo − Rperi)/(Rapo + Rperi), where Rperi and Rapo are the perigalactic and apogalactic distance, respectively. We consider three snapshots when the mass first reached approximately 0.4. The snapshots are at pericentre, apocentre and at the position in the orbit equidistant from these two. Figure 10 shows the velocity dispersion and density profiles for the data and the best-fit model for each snapshot. The recovered rcrit of the model is similar for all snapshots, showing that although the model will underpredict and overpredict rJ at apocentre and pericentre respectively, it is a good fit to the time-averaged behaviour over one orbit. This confirms a finding by Küpper et al. (2010), who used parametric fits to the density profiles of N -body models on an elliptical orbit. In this way they recovered an edge radius, and found that it was nearly constant along the orbit. In turn this may help to explain a result of Cai et al. (2016), namely that the evolution of a cluster on an eccentric orbit can be approximated by that of a cluster on a circular orbit with the same dissolution time, if the radius of orbit is chosen suitably (for modest eccentricities, roughly midway) between the apo-and pericentric distances of the elliptical orbit. OBSERVATIONAL DATA To provide a test of how the spes models perform when compared to observational data, we also conduct a preliminary comparison to the number density and the velocity dispersion profiles of the globular cluster 47 Tucanae (47 Tuc). The choice of 47 Tuc is motivated by the fact that it is a r Apocentre Figure 10. Surface density profiles (top) and projected velocity dispersion profiles (bottom) for three snapshots along one orbit of a simulation of a cluster on an eccentric orbit: pericentre (left-hand panel), apocentre (right-hand panel) and between the two (central panel). The best-fit spes models are shown as solid lines, and the shaded regions correspond to models within 1σ of the maximum likelihood; the dashed vertical lines denote the r crit of the best-fit models (green) and r J of the N -body data (magenta). well-known example of a cluster with kinematics that is inconsistent with existing dynamical models considered (Lane et al. 2012). In this comparison, we construct a number density profile by combining the surface brightness profile from Trager et al. (1995) and number density profile from the second data release of the Gaia mission, which are presented in de Boer et al. (2019). We also fit on the the line-of-sight velocity dispersion profiles (σLOS, in km s −1 ) from Baumgardt (2017) and Kamann et al. (2018). 
Figure 11 shows the best-fit spes model (green shaded region) compared to the surface brightness (top panel) and line-of-sight velocity dispersion (bottom panel) of 47 Tuc. The best-fit model parameters of the fit areφ0 = 9.3, η = 0.30, B = 0.88 with scale values Mc = 7.0 × 10 5 M and rcrit = 11.5 arcmin. The best-fit spes model reproduces the velocity dispersion profile well, but underestimates the last three bins of data. This is also seen in the number density profile, where the model overestimates the central value, and is not able to reproduce the exact shape in the outskirts. This leads to the model underestimating the rJ and overestimating the mass when compared to estimates from the Harris catalogue (Harris 1996). The best-fit Fpe = 0.038, but as shown in Section 3.2.3, this value can underestimate the fraction of PEs in the cluster by at least ∼ 70%. DISCUSSION & SUMMARY We have presented a novel way of including energetically unbound stars in dynamical models of GCs. Our prescription, although based on a simple phase space description of a population of PEs, allows a rapid and convenient self-consistent construction of spherically-symmetric equilibria. This modelling effort was motivated by the peculiarities in observational data in the outer regions of GCs, and in particular by the flattening observed in line-of-sight velocity dispersion profiles and in extended surface density profiles. With Gaia providing proper motions of stars in the outskirts of GCs, and allowing us to calculate membership likelihoods for these stars, it is paramount that models which include a description of these behaviours are developed. By including the effects of PEs in a self-consistent, distribution function-based model it is then possible to test if PEs are able to explain the observational data or if some alternative theory is needed, such as the presence of dark matter or of deviations from Newtonian gravity. By characterising the dynamics of stars in the outskirts of GCs, it may be possible to discriminate between these scenarios, and to find clues on the formation and evolution of GCs, which in turn can illuminate the formation and evolutionary processes that shape galaxies. Even though almost the totality of the 'lowered isothermal' models currently available in the literature are not designed to incorporate the presence of PEs, nonetheless the behaviour of some of these unbound stars is sufficiently well reproduced, albeit with incorrect underlying physics. This happens because these models describe isolated systems and therefore predict a higher critical energy with respect to the case in which the effects of a tidal potential are included. Therefore, when fitting on data from clusters embedded in external tidal fields, the models can include some PEs between Ecrit < EJ < φt. This means that a non-spherical model of GCs which includes the tidal potential of the host galaxy could have the correct Ecrit but as it would have no prescription for dealing with PEs, it would need to increase its rcrit more than the spherical model to include PEs in the fit by increasing φt. Here we developed a physically motivated DF-based model which includes a prescription for stars above the critical energy. This was achieved by including two constants in the bound part of the distribution function, which allow the model to have a non-zero density at the critical energy. 
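As a back-of-the-envelope check (illustrative only), the two ways of stating the correction quoted above give similar corrected PE fractions for 47 Tuc:

    # Best-fit PE fraction for 47 Tuc, corrected for the systematic underestimate
    # discussed in Section 3.2.3 (roughly a factor of three, i.e. ~70 per cent low).
    F_pe_fit = 0.038
    print(F_pe_fit * 3.0)            # ~0.11
    print(F_pe_fit / (1.0 - 0.70))   # ~0.13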
The constants in the bound DF allow the enforcement of continuity and also smoothness across the critical energy and avoid a discontinuity in the mass distribution. We showed that the model accurately reproduces the properties of tidally-perturbed N -body models of star clusters, which naturally include PEs (we have conducted a comparison with selected direct N -body models with N =16384 equal-mass particles, originally presented in C17). The bestfit spes model is able to reproduce the mass and half-mass radius of the N -body cluster model, and matches the density and velocity dispersion profiles well, including the flattening near rJ, although is not able to account for increasing velocity dispersion profiles. The spes model closely reproduces rJ of the N -body model, even when data out to 2rJ is included in the fitting process. The spes models presented here have some limitations. Primarily the under-prediction of the fraction of cluster mass in PEs, Fpe. This is due to some assumptions that were required in the construction of the model. These include the assumption of sphericity and the absence of the Galactic tidal potential. This causes a large fraction of the unbound stars to actually be accounted for by the bound part of the chosen DF. We also adopted a simple exponential for the functional form of the unbound part of the DF, which has some implications on the requirement of continuity of the 'stitched' DF. Currently, the spes models are defined as continuous and smooth by construction; to force higher order derivatives to be continuous would either over-constrain the model or it would require a different functional form for the bound part, requiring a larger number of parameters. Adding the galactic tidal potential to the model may allow for a more accurate recovery of Fpe, because then Ecrit can be recovered more accurately. As part of the unbound population stars would no longer be accounted for by the bound part of the model this will motivate the need for an alternative, more accurate functional form of the DF that better fits the behaviour of these unbound stars. This could then allow for the continuity of higher order derivatives acrossÊ = 0. The current definition of the model assumes isotropy and does not account for the possibility of bulk motions of the PEs, which have recently been explored by means of N -body simulations (C17; Tiongco et al. 2016a). To include this additional layer of kinematic complexity would be an important further step towards a fully realistic description of the phase space behaviour of PEs in star clusters and improve the models ability to discriminate between bound stars and PEs. However, a model which does not include the effects of a galactic potential will not be able to recover both rJ and Fpe. Despite these limitations, the spes models are an improvement over existing DF-based models which are unable to account for the presence of a population of energetically unbound stars. As a proof of concept, we presented a preliminary application of the spes models to the number density and velocity dispersion profiles of the Galactic globular cluster 47 Tuc. By using the velocity dispersion data from Baumgardt (2017) and Kamann et al. (2018) and surface brightness profile from Trager et al. (1995) combined with recent Gaia DR2 data. We showed that the model recovers a Mc, r hm and M/L close to current estimates, although underestimates rJ. 
The cornerstone ESA mission Gaia finally enables us to access the phase space structure of the outskirts of several Galactic globular clusters. It will therefore be paramount to have physically accurate models that describe the expected behaviour of the outer parts of GCs in more detail and more realistically, so that the properties of the clusters can be correctly inferred. de Boer et al. (2019) recently fitted spes models to Gaia number density profiles of 81 globular clusters and found that they provide a better prescription near rJ than limepy models. This is a required first step towards determining whether any further physics, such as modified gravity theories or dark matter haloes, needs to be invoked to explain the observations. This will in turn provide a method for investigating, and possibly discriminating between, the formation scenarios and evolutionary behaviour of GCs.
E-cadherin-mediated Cell-Cell Attachment Activates Cdc42* E-cadherin is a transmembrane protein that mediates Ca2+-dependent cell-cell adhesion. Cdc42, a member of the Rho family of small GTPases, participates in cytoskeletal rearrangement and cell cycle progression. Recent evidence reveals that members of the Rho family modulate E-cadherin function. To further examine the role of Cdc42 in E-cadherin-mediated cell-cell adhesion, we developed an assay for active Cdc42 using the GTPase-binding domain of the Wiskott-Aldrich syndrome protein. Initiation of E-cadherin-mediated cell-cell attachment significantly increased in a time-dependent manner the amount of active Cdc42 in MCF-7 epithelial cell lysates. By contrast, Cdc42 activity was not increased under identical conditions in MCF-7 cells incubated with anti-E-cadherin antibodies nor in MDA-MB-231 (E-cadherin negative) epithelial cells. By fusing the Wiskott-Aldrich syndrome protein/GTPase-binding domain to a green fluorescent protein, activation of endogenous Cdc42 by E-cadherin was demonstrated in live cells. These data indicate that E-cadherin activates Cdc42, demonstrating bi-directional interactions between the Rho- and E-cadherin signaling pathways. teins. Briefly, MBP-WASP-GBD was produced by excising WASP-GBD from the GST plasmid with EcoRI; blunt ends were generated with Klenow. The pMAL-c2X vector (New England Biolabs), which encodes MBP, was digested with PstI and blunt-ended with T4 polymerase. Both the WASP-GBD fragment and pMAL-c2X vector were cut with BamHI. The 214-base pair WASP-GBD fragment was purified from a low melting agarose gel and was inserted into the pMAL-c2X vector. The MBP-WASP-GBD fusion protein was expressed in E. coli and purified over an amylose column. To generate GFP-WASP-GBD, the WASP-GBD was excised from the GST plasmid with BamHI and EcoRI. The resultant 214-base pair fragment was purified from a low-melting agarose gel and inserted into pEGFP-C1 (CLONTECH) at BglII and EcoRI sites. The sequence of the WASP-GBD in all of the fusion constructs was confirmed by restriction mapping and DNA sequencing. Assay for the Detection of Activated Cdc42-MCF-7 and MDA-MB-231 human breast epithelial cells were grown in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% (v/v) fetal bovine serum in a 37°C humidified incubator. L and EL cells were grown in the same medium supplemented with 400 g/ml hygromycin B. Cells were washed three times in phosphate-buffered saline (PBS) (145 mM NaCl, 12 mM Na 2 HPO 4 , and 4 mM NaH 2 PO4, pH 7.2) and quick-frozen in 500 l of lysis buffer (20 mM Hepes, pH 7.4, 150 mM NaCl, 1% (v/v) Nonidet P-40, 20 mM NaF, 20 M GTP, and protease inhibitors). Lysates were thawed, clarified by centrifugation at 15 000 ϫ g for 5 min at 4°C, and precleared by incubating 1.25 mg of lysate for 1 h at 4°C on a rotator with 40 l of glutathione-Sepharose. In selected experiments, 1 mM ATP, 20 M GDP, or 100 M GTP␥S was added to the lysates for 15 min. Protein concentrations were measured and samples analyzed directly as whole cell lysates (ϳ75 g), or equal amounts of protein were incubated with 40 g of GST-WASP-GBD for 2 h at 4°C. GST alone was processed in parallel as a control. Complexes were collected with glutathione-Sepharose, the beads were washed six times with PBS, and proteins were resolved by SDS-PAGE and transferred to PVDF. Blots were probed with anti-Cdc42 antibody followed by horseradish peroxidase-conjugated sheep anti-mouse antibody and developed by ECL. 
E-cadherin-mediated cell adhesion was performed essentially as described previously (20). MCF-7 cells were grown to confluence in 60-mm culture dishes. 6 to 8 h later, cells were serum-starved for 16 -18 h and incubated with DMEM containing 4 mM EGTA and 1 mM MgCl 2 for 30 min at 37°C. The EGTA chelates Ca 2ϩ , thereby disrupting E-cadherin homophilic interactions, and the Mg 2ϩ maintains integrin function. The Ca 2ϩ -free medium was removed, and DMEM containing 1.8 mM CaCl 2 was added to induce E-cadherin-mediated cell-cell interactions. In selected experiments, anti-E-cadherin (DECMA-1 clone) antibodies (1:100 dilution), 50 M LY294002, 500 nM wortmannin, or the appropriate vehicle were included in the medium. Cells were lysed at different time intervals after the addition of Ca 2ϩ . In all experiments, the time of addition of Ca 2ϩ was considered 0 min. Immunoprecipitation-MCF-7 cells were transfected with GFP or GFP-WASP-GBD using FuGENE 6 transfection reagent (Roche Molecular Biochemicals) according to the manufacturer's instructions. The following day, cells were incubated with vehicle or 200 nM bradykinin for 5 min and processed as described above. In selected samples, 100 M GTP␥S was added after lysis. Immunoprecipitation was performed essentially as described previously (21). Briefly, after preclearing, equal amounts of protein were incubated with anti-GFP or anti-myoglobin (an isotype-identical, irrelevant) monoclonal antibody for 3 h at 4°C. Immune complexes were collected for 2 h with 40 l of Gamma Bind G-Sepharose (Amersham Pharmacia Biotech) and washed five times with lysis buffer, and Western blotting was performed as outlined above. Immunofluorescence Staining and Confocal Microscopy-MCF-7 cells were removed from culture dishes with trypsin, washed twice with PBS, and allowed to attach to Permanox plastic slides overnight at 37°C in DMEM. Cells were serum-starved for 24 h. Following a 30-min incubation with DMEM containing 4 mM EGTA and 1 mM MgCl 2 , cells were incubated with DMEM containing 1.8 mM CaCl 2 . For immunofluorescence staining, cells were washed with PBS and fixed with 3.7% (v/v) formaldehyde for 20 min at 22°C. Permeabilization was performed by adding 0.5% (v/v) Triton X-100 in PBS for 10 min followed by blocking with 2% (w/v) bovine serum albumin in PBS for 1 h at 4°C. After two washes with PBS, cells were incubated for 1 h at 4°C with rhodamine phalloidin (Molecular Probes), or E4.6 anti-E-cadherin monoclonal antibody, followed by fluorescein isothiocyanate-labeled goat anti-mouse antibody. Slides were washed four times with PBS and mounted with Aqua Polymount (Polysciences, Inc.). The specificity of staining was verified by omitting the primary antibody. Confocal laser scanning microscopy was performed with the MRC-1024 Confocal Imaging System (Bio-Rad) as described previously (21). Live Cell Imaging-For live cell imaging, MCF-7 cells were plated on Lab-Tek II chambered coverglass (Nalge Nunc Int.). The following day, the cells were transiently transfected with GFP or GFP-WASP-GBD. Transfection efficiency was usually 40 -50%. E-cadherin-mediated cell adhesion was performed as described above. Cells were imaged at 1-min intervals for 70 min by time lapse video microscopy using a Zeiss Axiovert S100 microscope. Fluorescence images were collected with the MRC-1024 confocal imaging system and processed as described (21). Miscellaneous-Protein concentrations were determined with the DC protein assay (Bio-Rad). 
Densitometry of ECL signals, performed in triplicate, was analyzed with NIH Image. Statistical significance was evaluated by Student's t test, using InStat software (GraphPad Software, Inc.). RESULTS Assay of Active Cdc42-The specificity of the WASP-GBD constructs was evaluated with recombinant GTPases in vitro and with endogenous GTPases in cell lysates. Initial characterization was directed toward confirming the specificity of the WASP-GBD for the active (GTP-bound) conformation of the GTPases. A MBP fusion protein of WASP-GBD was incubated with GDP-and GTP␥S-bound forms of GST fusion proteins of small GTPases. The WASP-GBD bound specifically to active GST-Cdc42-GTP␥S (data not shown). Active Rac bound to WASP-GBD but with lower affinity. No specific binding was detected to the active forms of Rho or Ras nor to the inactive (GDP-bound) GTPases (data not shown). Analysis of endogenous GTPases revealed the presence of constitutively active Cdc42 in MCF-7 cell lysates (Fig. 1A). As anticipated, the addition of GTP␥S dramatically increased the 4) or GFP-WASP-GBD (lanes 1, 3, 5, and 6), followed by incubation with (lanes 4 and 5) or without (lanes 1, 2, 3, and 6) bradykinin (BK) for 5 min. Where indicated, GTP␥S was added after cell lysis. Equal amounts of protein lysate were immunoprecipitated (IP) with anti-GFP (␣GFP) or an isotype-identical irrelevant (␣myo) monoclonal antibody. Immunocomplexes were resolved by SDS-PAGE and transferred to PVDF membrane, and blots were probed with anti-Cdc42 or anti-Rac antibodies. amount of active Cdc42 in the lysate, whereas ATP and GDP had no significant effect (Fig. 1A). GTP␥S did not change the total amount of Cdc42 in the lysate (data not shown). No Cdc42 was detected in samples incubated with GST alone, even in the presence of GTP␥S, verifying that binding was specific for the WASP-GBD. Active Rac bound to the WASP-GBD only in samples spiked with GTP␥S (Fig. 1A). Incubation of MCF-7 cells with bradykinin, a known activator of Cdc42 (22), increased by 3-fold the amount of Cdc42 in GST-WASP-GBD pulldowns (Fig. 1A). By contrast, bradykinin had no effect on the amount of Rac that bound to WASP-GBD. A comparison of the relative amounts of active Rac and active Cdc42 with the total amounts of the GTPases (Fig. 1A) is consistent with WASP preferentially binding Cdc42 over Rac (12,23). Together, these analyses validate the proposition that the WASP-GBD, like the fulllength protein, interacts specifically with the active forms of Cdc42 and, to a substantially lesser extent, Rac. A construct of the WASP-GBD fused to GFP was created to examine Cdc42 activation in live cells (see below). The validity of the construct was assessed in cells transfected with GFP-WASP-GBD. Immunoprecipitation with anti-GFP antibody revealed that the construct bound Cdc42 (Fig. 1B, lane 3). Activation of Cdc42 with GTP␥S or bradykinin substantially increased the amount of Cdc42 that co-immunoprecipitated with GFP-WASP-GBD (Fig. 1B), demonstrating that the GFP did not interfere with the ability of WASP-GBD to bind active Cdc42. No Cdc42 was detected in anti-GFP immunoprecipitates from samples transfected with GFP alone (Fig. 1B, lanes 2 and 4), indicating that Cdc42 bound specifically to the WASP-GBD. No Rac was detected in the immunoprecipitates, even in the presence of GTP␥S (data not shown). 
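The quantification workflow described in the Methods (triplicate densitometry of ECL signals followed by Student's t test) can be illustrated with a minimal sketch; the values below are made up for illustration and merely mimic the roughly 3-fold bradykinin-induced increase reported above.

    import numpy as np
    from scipy import stats

    # Hypothetical densitometry readings (arbitrary units), triplicate measurements
    control    = np.array([1.00, 1.10, 0.95])
    bradykinin = np.array([2.90, 3.20, 3.05])

    t_stat, p_value = stats.ttest_ind(control, bradykinin)
    print(f"fold change ~ {bradykinin.mean() / control.mean():.1f}, p = {p_value:.4f}")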
E-cadherin-mediated Cell-Cell Adhesion Activates Cdc42-There is evidence that cell adhesion regulates the cytoskeleton (24) and that members of the Rho family modulate E-cadherin function (6,25). Therefore, we addressed the important question of whether E-cadherin-mediated adhesion would alter activation of Cdc42. E-cadherin-mediated cell-cell adhesion was abrogated by chelating Ca 2ϩ and then re-initiated by re-introducing Ca 2ϩ (26). This approach induced a time-dependent accumulation of E-cadherin at cell-cell junctions ( Fig. 2A). Adherens junctions were detected in MCF-7 cells 15 min after the addition of Ca 2ϩ , and junction formation was essentially complete by 90 min. This assay was used to explore whether Ecadherin modulates Cdc42 activation. Induction of E-cadherin homophilic adhesion by Ca 2ϩ significantly increased the amount of active Cdc42 in MCF-7 cell lysates (Fig. 2B). Activation of Cdc42 was time-dependent. Increased active Cdc42 was detected at 30 min, peaked at 2.2-fold at 60 min, and returned to basal values by 90 min. Total cellular Cdc42 did not change significantly during this time interval (Fig. 2B). Incubation with EGTA for 30 min altered neither the amount of total or active Cdc42 in the lysates (data not shown). Initiation of E-cadherin-mediated cell-cell adhesion in MDCK cells resulted in a time-dependent activation of PI3kinase and Akt kinase (20). Akt, a major downstream target of PI3-kinase, is activated by phosphorylation on Ser-473 and Thr-308 by phosphoinositide-dependent protein kinase(s) (27). Engagement of E-cadherin in MCF-7 cells activated endogenous Akt, as revealed by a significant increase in Akt phosphorylation (Fig. 2C). Interestingly, the time course and magnitude (peak increase of 2.0-fold at 60 min) of Akt activation are generally similar to those of Cdc42, except Akt remains increased at 90 min. E-cadherin-mediated Cell-Cell Adhesion Is Necessary for Cdc42 Activation-Although the above results implicate Ecadherin as the source of Cdc42 activation, other factors could be responsible. Several strategies were adopted to address this question. Analogous experiments were performed in MDA-MB-231 cells, a human breast epithelial cell line that does not contain E-cadherin (21). Incubation of MDA-MB-231 cells with Ca 2ϩ failed to increase active Cdc42 (Fig. 3A). In fact, the amount of active Cdc42 decreased with time. An increased in active Cdc42 was evident in MDA-MB-231 lysates spiked with GTP␥S that were processed in parallel (data not shown). Similarly, the absence of E-cadherin prevented the Ca 2ϩ -dependent cell-cell contact activation of Akt in MDA-MB-231 cells, whereas incubation with epidermal growth factor, a known activator of Akt (28), enhanced Akt phosphorylation (data not shown). A second control was the comparison of L fibroblasts (which lack endogenous E-cadherin) with EL cells, which are L fibroblasts stably transfected with E-cadherin (21). Ca 2ϩ induced a time-dependent activation of Cdc42 in EL cells, whereas, analogous to MDA-MB-231 cells, active Cdc42 decreased in L cells (Fig. 3B). Third, MCF-7 cells were incubated with anti-E-cadherin (DECMA-1) antibodies that block E-cadherin-mediated junction formation (29). As we observed in cells lacking E-cadherin, the antibodies abrogated the E-cadherinmediated increase in active Cdc42 (Fig. 3C). The antibody alone altered neither the amount of total nor active Cdc42. The reason for the decrease in active Cdc42 in the absence of E- cadherin function is unknown. 
Nevertheless, together these data strongly support the notion that E-cadherin homophilic association is responsible for the activation of Cdc42. E-cadherin Induces the Formation of Filopodia-Activation of Cdc42 in fibroblasts leads to the formation of filopodia or microspikes (5). Therefore, we examined the effect of E-cadherin on the actin cytoskeleton. E-cadherin-mediated cell-cell adhesion induced the formation of filopodia in MCF-7 cells (Fig. 4A). Multiple thin actin fibers are present at the cell membrane, both in the intercellular region at areas of cell-cell attachment (arrowheads in Fig. 4A; note also the higher magnification in panels 30b and 60b) and at the free edges of the cell (arrows in Fig. 4A). These filopodia failed to develop when analysis was performed in parallel in the presence of anti-Ecadherin antibody (Fig. 4B). Note that the changes in the actin cystoskeleton generated by E-cadherin are virtually identical to those produced by activation of Cdc42 in MCF-7 cells by bradykinin (Fig. 4C). Role of PI3-kinase in E-cadherin-mediated Activation of Cdc42-E-cadherin-mediated cell adhesion activates PI3-kinase (20). This observation, coupled with evidence implicating PI3-kinase in the activation of Cdc42 (30), prompted us to evaluate the participation of PI3-kinase in E-cadherin-mediated activation of Cdc42. Inhibition of PI3-kinase with LY294002 (Fig. 5A) or wortmannin (Fig. 5B) abolished the Ca 2ϩ -dependent cell-cell contact activation of Cdc42 in MCF-7 cells. These observations imply the involvement of a PI3-kinasedependent pathway in the activation mechanism. E-cadherin Induces Activation of Cdc42 in MCF-7 Cells-To visualize active Cdc42 in living cells, the WASP-GBD was fused to GFP and transfected into MCF-7 cells. E-cadherin homophilic interaction was induced with Ca 2ϩ , and the location of GFP-WASP-GBD was studied in real time with laser-scanning confocal microscopy. Both GFP-WASP-GBD and GFP were distributed throughout the cell initially (Fig. 6). Sixty-five minutes after the addition of Ca 2ϩ , a fraction of the GFP-WASP-GBD localized to the plasma membrane (Fig. 6A, arrows). This effect was not seen in isolated cells. Note that the peak of active Cdc42 in cell lysates was detected at 60 min (see Fig. 2B). The translocation required the WASP-GBD, as GFP alone did not accumulate at the plasma membrane (Fig. 6B). Importantly, the presence of anti-E-cadherin (DECMA-1) antibodies abrogated specific localization of GFP-WASP-GBD at the plasma membrane (data not shown). Injection of the Cdc42-binding domain of WASP blocked actin filament assembly by the Cdc42 guanine nucleotide exchange factor, FGD1 (31). To demonstrate that the low level of GFP-WASP-GBD expressed in MCF-7 cells did not inhibit Cdc42 function, we activated Cdc42 with bradykinin. There was no difference in the development of filopodia among cells transfected with GFP-WASP-GBD (Fig. 6C), GFP (data not shown), or nontransfected cells. DISCUSSION Rho proteins, which control the organization of the actin cystoskeleton in all eukaryotic cells (5), were recently identified as important regulators of cadherin-dependent contacts (13)(14)(15)(16)25). The important question of whether cadherin adhesiveness can trigger the activation of the GTPases (25) has been addressed in this paper. Using the knowledge that only active Cdc42, and to a lesser extent Rac, binds WASP with high affinity (12, 23), we developed an assay for GTP-bound Cdc42. 
Fusion proteins of the WASP-GBD bound specifically to active Cdc42, both in vitro and in cell lysates. Factors known to increase the amount of active Cdc42, namely GTP␥S and bradykinin, significantly augmented the amount of Cdc42 bound to the WASP-GBD. Recently, other investigators have used fusion constructs of the GTPase-binding domains of Ras family targets, namely Raf-1, Rhotekin, and p21-activated kinase, to assay activated Ras, Rho, and Rac/Cdc42, respectively (32)(33)(34). These reagents have proved to be valuable in enhancing our understanding of the regulation of small GTPases in cells. However, p21-activated kinase discriminates little between Cdc42 and Rac (35) and cannot be used in live cells. We therefore used WASP-GBD, which has a substantially higher affinity for active endogenous Cdc42 than for active endogenous Rac (see Fig. 1). Initiation of E-cadherin-mediated cell-cell attachment increased the amount of active Cdc42 in cell lysates. Activation peaked at approximately 60 min, which is consistent with the time course for establishment of adherens junctions in EL cells (36). E-cadherin homophilic interactions were required for activation of Cdc42 as demonstrated by the absence of activation under identical assay conditions in cells lacking E-cadherin. Moreover, preventing the formation of adherens junctions with anti-E-cadherin antibodies (29) abrogated activation of Cdc42. The extent of the increase in active Cdc42 that we observed is similar to that seen for Rac, Cdc42 (34), and Rho (33) in response to other stimuli. The amount of active Cdc42 reflects the entire cellular pool of Cdc42, and it is conceivable, perhaps even likely, that higher concentrations of active Cdc42 may be localized in discrete subcellular pools, particularly those associated with the plasma membrane. A recent study with MDCK cells revealed that E-cadherinmediated attachment enhanced endogenous Akt activity by Ͼ10-fold (20). The magnitude of the increase was greater than and observed slightly earlier than our results, which may be due to differences in the cell lines and assay methodology. We used an antibody specific for the Ser-473-phosphorylated form of Akt, a prerequisite for Akt activation (27). Although accepted as an assay of Akt activity (37), the phospho-specific antibody may underestimate Akt activity compared with direct measurements of kinase activity. This conjecture is supported by the detection of a 4-fold increase in Akt activity by epidermal growth factor under our conditions (data not shown) compared with Ͼ15-fold demonstrated by Pece et al. (20). It is also possible that differences between the cell lines may account for the variability in the extent of Akt activation in response to both E-cadherin and epidermal growth factor. The GFP-WASP-GBD construct we developed permitted for the first time the visualization of Cdc42 activation in live cells in real time. Beginning approximately 1 h after initiation of E-cadherin attachment, GFP specifically accumulated in discrete puncta at the plasma membrane. Controls revealed that both the WASP-GBD construct and E-cadherin homophilic attachment were necessary to detect this translocation. This observation validates in live cells the previous findings obtained by fractionation that activation induces the translocation of Rho family GTPases to the plasma membrane (38). 
Because of the diffuse distribution of the GFP construct throughout the cell, it was not possible to establish whether active Cdc42 increased at the region of cell-cell attachment. (Attempts to ascertain this using purified GFP-WASP-GBD protein as an indicator in fixed, permeabilized cells were not successful.) A conceptually analogous strategy, employing GFP fused to the pleckstrin homology domain of selected proteins, has been used to demonstrate that 3Ј-phosphoinositide prod-ucts of PI3-kinase are localized at the plasma membrane (39,40). Although it is not possible to verify unequivocally the target of the GFP-WASP-GBD, our data are consistent with specific identification of active Cdc42 in living cells. WASP is reported to be Cdc42-specific (12,41) and is inhibited in intact cells by dominant negative Cdc42 but not dominant negative Rac or Rho (12). Immunoprecipitation revealed that Cdc42, but not Rac, associated with GFP-WASP-GBD in the cell milieu. The solution structure of Cdc42 in complex with WASP-GBD provides insight into the ability of WASP to distinguish Cdc42 from Rac and Rho (42). Hydrogen bonding between side chains in activated Rac and Rho closes off the pocket contacted by Ile-233 of WASP and would disrupt the hydrophobic contacts to Lys-235 (42). The mechanism by which E-cadherin activates Cdc42 is unknown. There is no evidence for a direct interaction between Cdc42 and E-cadherin. A potential candidate that may provide a molecular link is IQGAP1, which binds directly to Cdc42 and E-cadherin (16,21,43). IQGAP1 competes with ␣-catenin for binding to E-cadherin/␤-catenin, leading to disruption of cellcell adhesion (43). Based on their observation that active Cdc42 inhibits this effect of IQGAP1, Kaibuchi et al. (16) propose that Cdc42 and IQGAP1 can serve as positive and negative molecular switches of cadherin activity, and they speculate that cell-cell contact would induce the activation of Rho family GTPases. Our results validate this supposition. However, we have no direct evidence that IQGAP1 participates in the Ecadherin-mediated activation of Cdc42, and there are no specific inhibitors nor dominant negative forms of IQGAP1 available to directly test this hypothesis. A second potential intermediary is PI3-kinase, which is activated by E-cadherin (20) and is reported to be both upstream and downstream of Cdc42 (44, 45). We did not detect E-cadherin-induced activation of Cdc42 in the presence of the PI3-kinase inhibitors wort- mannin or LY294002, a structurally distinct inhibitor, implicating the participation of a PI3-kinase-dependent pathway. The process by which PI3-kinase activates Rho family proteins is unknown. The pleckstrin homology domains, which bind phosphatidylinositol lipids and are found in GEFs, have been postulated as the link (46). A third possibility is modulation of Cdc42 cycling. Activation of a GEF or inhibition of a GAP would result in an increase in the amount of active Cdc42 in the cell. The proposed mechanisms are not mutually exclusive, and more than one pathway may be involved. For example, the ␣ isoform of p21-activated kinase-interacting exchange factor, PIX, a GEF for Rac and Cdc42, is activated by PI3-kinase (47). Regardless of the mechanism, our data indicate that E-cadherin activates Cdc42, yielding further evidence for outside-in signaling by E-cadherin. 
These results also provide a mechanism for the observation that keratinocytes develop filopodia that are integral to the establishment of E-cadherin-mediated intercellular adhesion (48), and they confirm the authors' postulate that Cdc42 is involved. Our findings establish bi-directional communication between Cdc42 and E-cadherin and identify an additional route by which intercellular interactions can influence intracellular signaling pathways.
Monitoring of wine aging process by electrospray ionization mass spectrometry Received 26/11/2009 Accepted 24/3/2010 (004532) 1 ThoMSon Mass Spectrometry Laboratory, Institute of Chemistry, State University of Campinas UNICAMP, CEP 13084-971, Campinas SP, Brazil, e-mail: franksawaya@terra.com.br 2 Department of Clinical Pathology, Faculty of Medical Sciences, State University of Campinas – UNICAMP, CEP 13083-970, Campinas, SP, Brazil 3 Department of Plant Biology, Institute of Biology, State University of Campinas UNICAMP, CEP 13084-971, Campinas, SP, Brazil 4 Food Science and Technology Department, Universidade Federal de Santa Maria UFSM, CEP 97105900, Santa Maria, RS, Brazil 5 Food Analysis Laboratory, Faculty of Food Science, State University of Campinas, UNICAMP, CEP 13083 862, Campinas, SP, Brazil *Corresponding author Monitoring of wine aging process by electrospray ionization mass spectrometry Introduction In order to improve the quality of wines, research in viticulture and enology is carried out in several fronts.Improved grape quality and ripening, the selection of yeast inoculums and enzymes, control of the conditions during the malolactic fermentation as well as the aging process, are all important aspects of the production process.A sound knowledge of wine chemistry is also necessary.Polyphenols that are released in the must, during the fermentation and pressing processes from different parts of the berry, undergo condensation and polymerization during winemaking and wine aging (FLAMINI, 2003). These compounds not only play an important part in the organoleptic characteristics of wine, but have also been related to the benefits of dietary wine consumption.Wine polyphenolics interact with reactive oxygen species and increase post-prandial total antioxidant capacity.These protective effects are especially displayed in people most likely to be under oxidative stress conditions, such as smokers and coronary heart disease patients (COVAS et al., 2010).The phenolic compounds related to these effects can be divided into two main groups.Non-flavonoid phenols, such as gallic acid and caffeic acid, and stilbenes, like resveratrol, compose one group; whereas flavonoids, such as quercetin, cathechins and anthocyanidins, are the second group (GERMAN;WALTZEN, 2000). Gas chromatography mass spectrometry (GC-MS) has been successfully used to study wine aroma.This technique, however, has not been successfully applied to the study of polyphenols, due to their low volatility.When derivatized to increase their volatility, their high molecular weight exceeds the mass range of most GC-MS systems (FLAMINI, 2003).Therefore, some of the first successful studies used hydrolysis and subsequent liquid chromatography (LC) methods to study wine phenolics.(WULF;NAGEL, 2000;HEBRERO et al., 1989) Later, the availability of effective interfaces for liquid chromatography mass spectrometric equipment (LC-MS) -some of which permitted multiple mass spectrometry experiments (MS/ obtained was used to group the samples and pinpoint the main changes in their composition. 
Samples of wine Samples of wine were obtained from 'Vinicola Velho Amâncio' winery near Santa Maria, in the State of Rio Grande do Sul, Brazil.Grape maturity was controlled by the winery (sugar content, acidity and pH) and the following varieties were used: Pinot noir, Cabernet Sauvignon and Merlot.The alcoholic fermentation was considered terminated when sugar content was equal to zero.After the malolactic fermentation, the wine was cooled, filtered and five replicate samples of each wine variety were collected (CS, ME, PN) in 2004.After one year's aging (CS1, ME1, PN1), five replicate samples of each wine variety were again taken.For Pinot noir, five replicate samples were taken after two years of aging (PN2).Aging was carried out in bottles at 12 o C. Aliquots of equal volumes of the PN1 and ME1 samples were mixed in the laboratory and analyzed (MIX). General experimental procedure Samples of wine were analyzed by direct infusion into the ESI source by means of a syringe pump (Harvard Apparatus) at a flow rate of 10 µL/minute.Negative mode electrospray ionization mass spectrometry [ESI(-)-MS] fingerprints and negative mode ESI-MS/MS (low energy CID) spectra were acquired using a hybrid high-resolution and high-accuracy (5 ppm) Micromass Q-TOF mass spectrometer; capillary and cone voltages were set to -3,000 V and -50 V, respectively, with a de-solvation temperature of 100 º C. Aliquots of 10 µL of each sample of wine were diluted in one mL of a solution containing 70 % (v/v) chromatographic grade methanol (Tedia, Fairfield, OH, USA) and 30% (v/v), deionized water, and 0.5% of ammonium hydroxide (Merck, Darmstadt, Germany).The negative ion mode was used because it is considered to be more adequate for the analysis of phenolic compounds in wine (COOPER;MARSHALL, 2001;CATHARINO et al., 2006). Statistical analysis of data Principal Component Analysis (PCA) was performed using the 2.60 version of Pirouette software, Infometrix, Woodinville, WA, USA.The mass spectra were expressed as the intensities of individual [M -H] -ions (i.e.variables).Ions with relative intensities of less than 10% were not included.The data was preprocessed using auto scale and the PCA method was run. Results and discussion Figure 1 shows the representative ESI-MS fingerprints of wine samples PN, ME and CS originated from the three varieties of grapes directly after the malolactic fermentation.The two diagnostic ions (m/z 439 -C 33 H 12 O 2 and m/z 559 -a trimeric sugar C 19 H 28 O 19 ) observed in all the wine samples at the end of the malolactic fermentation, mentioned in a previous study (CATHARINO et al., 2006), are also present in these samples. MS or MS n ) -supplied powerful tools for the analysis of wine and its components (GAMOH; NAKASHIMA, 1999;LA TORRE, et al., 2006). 
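As a concrete illustration of the chemometric step described in the statistical analysis subsection above (auto-scaling followed by PCA), the short sketch below applies the same two operations to a hypothetical samples × ions matrix of relative intensities. The study itself used Pirouette 2.60, so this is only an illustration of the preprocessing and decomposition; the ion columns and intensity values are made up.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical intensity matrix: rows = wine samples, columns = [M - H]- ions
# (relative intensities; ions below 10% already excluded, as in the text).
ion_mz = [115, 133, 149, 191, 193, 317, 439, 445, 559]
X = np.array([
    [55, 60, 100, 40, 12, 10, 30, 11, 25],   # young wine replicate
    [52, 58, 100, 38, 14, 11, 28, 12, 24],
    [20, 22, 100, 15, 35, 22, 27, 30, 14],   # one-year-old replicate
    [18, 20,  95, 14, 37, 24, 26, 32, 13],
    [10, 11,  80,  9, 45, 28, 25, 40,  0],   # two-year-old replicate
])

# "Auto scale" = mean-center each ion column and divide by its standard deviation.
X_auto = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
scores = pca.fit_transform(X_auto)   # sample coordinates (scores plot)
loadings = pca.components_.T         # ion contributions (loadings plot)

print("explained variance:", pca.explained_variance_ratio_.round(2))
for mz, load in zip(ion_mz, loadings):
    print(f"m/z {mz}: PC1 {load[0]:+.2f}, PC2 {load[1]:+.2f}")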
The characterization of whole samples by direct insertion electrospray ionization mass spectrometry (ESI-MS), without chromatographic separation, in a process denominated fingerprinting, is being applied to a constantly increasing array of analytes: natural products (MAURI; PIETTA, 2000;SAWAYA et al., 2004;ABREU et al., 2007); food and beverages (ARAÚJO et al., 2005;MOLLER;CATHARINO;EBERLIN, 2005EBERLIN, , 2007;;CATHARINO et al., 2005;DE SOUZA et al., 2007a,b); and petroleum and biodiesel (PORTER; MAYER; FINGAS, 2004;CATHARINO et al., 2007).It is a fast and reliable process especially applicable to the analysis of numerous samples and it is indicated for the qualitative distinction between samples.In the case of wine production, false information as to the type of grape, age and purity of wines could be used to mislead consumers.Therefore, a fast and robust method for the characterization of wine by grape variety and/or age could be applied to wine quality control. Few studies of wine by direct insertion ESI-MS were carried out.Five samples of bottled wine were analyzed by direct insertion electrospray ionization Fourier transform mass spectrometry (ESI-FT-MS) in order to obtain the elemental composition of specific components (COOPER; MARSHALL, 2001).The authors noted that the negative ion mode fingerprint showed greater variety in the composition and abundance of components in the analyzed wines and a lesser amount of adducts, as well as higher resolution.Negative ion mode electrospray ionization mass spectrometry [ESI (-)-MS] is selective for compounds with acidic or phenolic sites, and therefore, adequate for studying wine phenols and polyphenols.Direct infusion ESI (-)-MS was used to analyze samples of must of five varieties of grapes and follow the transformations that occur during the malolactic fermentation process (CATHARINO et al., 2006).The ESI (-)-MS fingerprints clearly showed which samples were must and which came from the wine at the end of the malolactic fermentation, despite the variety of grape used.Furthermore, the addition of sugar or must to the wine could be easily detected in the fingerprints.In a more recent study, direct infusion ESI (-)-MS was used to characterize the oligosaccharides in two varieties of red wine (DUCASSE et al., 2010). During aging, more subtle modifications occur, which are related to the final acceptability of mature wine.Changes in wine during maturation in new and used oak barrels or in tanks were followed using HPLC-UV and HPLC-MS, focusing on the anthocyanins (CANO-LOPEZ et al., 2010).Other authors have studied the effect of oxygenation on wine maturation using HPLC-UV and HPLC-MS (ATANASOVA et al., 2002).The question was whether direct infusion ESI (-)-MS (without chromatographic separation) would be capable of detecting differences in wine of different ages and different grape varieties.Therefore, in the present study, ESI (-)-MS is used to follow the transformations during the aging process of wine samples of three different varieties of grape, over a period of up to two years.Chemometric analysis of the negative mode fingerprints very intense in the fingerprints of wine before aging, they were identified as organic acids by the loss of 44 Da in their MS/MS. 
Comparing their fragmentation pattern to previous studies; m/z 133 was identified as malic acid, m/z 149 as tartaric acid, m/z 191 as quinic acid, and m/z 193 as ferrulic acid (ROESLER et al., These ions were also observed in negative ion fingerprints of wine by Cooper and Marshal (2001) and had the following structures assigned: m/z 439 -CHO and m/z 559 -a trimeric sugar CHO.Another oligosaccharide found in several samples of wine was the ion at m/z 605, found commonly in Merlot wine (DUCASSE et al., 2010).The ions of m/z 115, 133 and 149 are also (CS1, ME1 and PN1), and the two-year-old sample (PN2) is on the right.The loadings plot (Figure 2b) indicates that the ions responsible for grouping the young wines are m/z 115, 133 and 191, which are also evident in the fingerprints.As the wines mature, the decrease in the ions above, plus the increase in the intensity of the ions of m/z 193, 445 and (to a lesser extent) m/z 317, which are also observed in the fingerprints, place these samples closer to the center.The two-year-old sample is placed on the right due to the relatively lower intensities of m/z 149 and 115 and to the greater intensity of other ions of higher mass.The marker ions that denote the end of the malolactic fermentation (m/z 439 and 559) are present in all fingerprints.These results characterize the transformation which occurs during the aging process, that is: the reduction of low mass organic acids and phenolic compounds and the formation of high mass polyphenols.Most studies of wine aging have focused on the changes that occur in oak barrels and under micro-oxygenation conditions (ALCALDE-EON et al., 2006;CANO-LOPEZ et al., 2010), which may not reflect the conditions of bottle-aged wines.Furthermore, in fingerprinting studies, the separation of different types of sample, due to characteristic fingerprints, is of greater importance than the identification of individual compounds.Nevertheless, the possibility of identifying compounds by MS/MS fragmentation can add further information about the samples. The analysis of the fingerprints also indicates differences for each variety of grapes.For example, the fingerprints of Pinot noir wine contained the ions of: m/z 289 (catechin or epicatechin), m/z 577 (dimeric procianidin) and m/z 865 (trimeric procianidin), identified by comparison with literature (PÉREZ- MAGARIÑO et al., 1999;MONAGAS et al., 2003), as well as the ion of m/z 245, observed in the previous study (CATHARINO et al., 2006).Mixtures of one-year-old samples of Pinot noir and Merlot presented fingerprints as shown in Figure 1 MIX, contained ions observed in both varieties of wine.PCA placed these samples among those of the individual wines in the scores plot (Figure 2a), indicating that the fingerprint came from a wine of intermediate composition. Other compounds identified in several samples comparing their MS/MS spectra to those found in the literature were: m/z 227 -resveratrol, m/z 301-quercetin, and m/z 317 -myrcetin (LA TORRE et al., 2006).Flavonoids and other phenolic compounds, such as resveratrol, are important components of wine due to their beneficial antioxidant effects (COVAS et al., 2010). 
Conclusions Direct insertion ESI-MS fingerprints in the negative ion mode were able to detect transformations which occur during the aging process, as well as to detect variations in composition between wine made from different grape varieties and a mixture of two varieties prepared in laboratory.The analysis of large amounts of samples, in order to confirm the used grape variety and the age, is feasible through this fast and reliable process.Further studies with larger number of wine samples and other grape varieties of diverse geographic origin could be used to compile a library of fingerprints for the certification of origin of the individual samples. In the ESI-MS fingerprints of samples of one-year-old wines of the same three grape varieties (CS1, ME1, PN1), the diagnostic ion of m/z 439 can still be observed in all three varieties, but the ion of m/z 559 can be observed only in the CS1 fingerprint The importance of these modifications during the aging process can be confirmed by the PCA.In Figure 2a, the scores plot, the samples were clearly grouped according to the variety of grape and age.The samples of young wine (CS, ME and PN) are all placed on the top left side of the plot, whereas the one-year-old samples are placed in the center and at the bottom Figure 1 . Figure 1.ESI-MS fingerprints in the negative ion mode of the wine samples directly after the malolactic fermentation (PN, Pinot Noir; ME, Merlot; CS, Cabernet Sauvignon), after one year's aging (PN1, Pinot Noir one year; ME1, Merlot one year; CS1, Cabernet Sauvignon one year) and after two year's aging (PN2, Pinot Noir two years).MIX is a mixture of ME1 and PN1. . The ions of m/z 115 and 133 are much less intense and m/z 149 (although still the most intense in the fingerprints of PN1and ME1) is relatively less intense in relation to other ions in the fingerprint.The ion of m/z 191 (quinic acid) is less intense and the ions of m/z 193 (ferrulic acid) and m/z 195 are relatively more intense.The ions of m/z 445 and 317 are now clearly present in the fingerprints of the three varieties.In the ESI-MS fingerprint of the two-year-old Pinot noir wine (PN2), the diagnostic ions of m/z 439 can still be observed, but not the ions of m/z 133 and 191.Although the ions of m/z 115 and 149 are still the most intense in the fingerprints, they are now relatively less intense in relation to other ions in the fingerprint, such as m/z 193, 195, 439 and 445. Figure 2 . Figure2. A. scores plot and B. loadings plot of the PCA analysis of the ESI-MS data for the wine samples (for abbreviations see Figure1).The first two components (PC1 and PC2) retained 86% of the variation.
2018-07-02T17:41:28.599Z
2011-09-01T00:00:00.000
{ "year": 2011, "sha1": "8ff36c42eea826ef72486975f6cfd0827e68b708", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/cta/a/fW3Ym8czCLtPty3nHrXDbms/?format=pdf&lang=en", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "dfd5d6f6e5920970c394d8b0679837e72719a49d", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
269532287
pes2o/s2orc
v3-fos-license
Analyzing Trends in Mental and Behavioral Health Support for Children: A Comprehensive Study Using National Survey of Children’s Health Database Objective This study aimed to explore mental and behavioral health support trends for children aged 3-17, analyzing treatment and counseling using United States data from the 2016-2020 National Survey of Children's Health (NSCH) database. Methods Employing a retrospective observational design, we systematically retrieved and analyzed NSCH Database data from 2016 to 2020. The focus was on understanding mental and behavioral health treatment percentages over time, specifically targeting demographic variations such as age groups, gender, race/ethnicity, and the federal poverty level percentage. Graphical representation utilized Excel, summarizing results based on aggregated data for distinct time intervals, highlighting the importance of mental and behavioral health support for children aged 3-17. Results The study identified significant temporal trends in mental and behavioral health treatment, revealing notable fluctuations across demographic and socio-economic variables. Of the 22,812 participants, 51.7% (CI: 50.2-53.1%, n=12,686) received treatment, exposing disparities. Gender differences were evident, with higher treatment rates in females (53.7%, CI: 51.6-55.9%, n=6,166) than males (50.1%, CI: 48.2-52.0%, n=6,520). Age-specific patterns indicated lower intervention rates in younger children (33.5%, CI: 28.6-38.8%, n=447, ages 3-5) compared to adolescents (58.1%, CI: 56.2-59.9%, n=8, 222 ages 12-17). Conclusion The conclusion highlights significant temporal fluctuations and pronounced demographic disparities. Findings underscore varying prevalence rates among age groups, genders, racial/ethnic backgrounds, and socio-economic status categories. This study provides valuable insights for policymakers, healthcare professionals, and researchers, informing targeted interventions to enhance mental and behavioral health support for United States children. Introduction Mental and behavioral health issues in children aged 3-17 encompass a spectrum of emotional, psychological, and social challenges that impact their overall well-being [1].These concerns include anxiety, depression, attention-deficit/hyperactivity disorder (ADHD), conduct disorders, and developmental disorders.Manifesting in diverse forms, such as behavioral difficulties, mood disturbances, or impaired social interactions, these issues significantly hinder a child's normal functioning and development [2].Early identification and intervention are crucial for providing appropriate support, counseling, and interventions, ensuring optimal mental health outcomes as children navigate the critical stages of their formative years [3]. Behavioral Health Support is crucial for addressing mental health challenges in children, offering therapeutic interventions, counseling, and personalized strategies to enhance resilience and emotional stability [4].Annually, approximately 20% of children experience mental disorders, with an expenditure of $247 billion for treatment.The substantial impact designates these issues as a significant public health concern, highlighting the need for effective, comprehensive strategies to address children's mental wellbeing [5].The study shows that 21.8% of United States children aged 3 to 17 face common mental, emotional, and behavioral health conditions, escalating with social and relational risks [6]. 
The 2021 National Health Interview Survey explores mental health treatment in children aged 5-17, covering medication use, counseling, or both within the past 12 months [7].Children's mental and behavioral health pathophysiology involves a complex interplay of genetic, environmental, and neurobiological factors [8]. Neurotransmitter imbalances, altered brain structures, and genetic predispositions contribute to conditions like ADHD and anxiety [9].Early-life stressors impact neural development, leading to persistent mental health challenges.Understanding this pathophysiology is crucial for developing targeted interventions and promoting healthy neurodevelopment, emphasizing the importance of individualized approaches to mental and behavioral health care [10][11]. Study design and participants This research employed a retrospective observational design, analyzing treatment and counseling trends for mental and behavioral health support in children aged 3-17.The study utilized data from the 2016-2020 NSCH Database, a comprehensive and nationally representative dataset focusing on the health and wellbeing of children in the United States.The data for this study were collected from the publicly available Data Resource Center for Child & Adolescent Health.The NSCH conducted the original survey, utilizing a combination of online and mail administration methods for sampling.Randomly selected addresses across the United States were mailed instructions to access the survey online, while some households also received a paper version of the screening questionnaire. The selected timeframe provided a recent and relevant snapshot of mental and behavioral health support trends among children aged 3-17.Datasets were systematically retrieved from the NSCH database following standardized protocols, with meticulous attention to data quality and consistency to ensure analytical accuracy.The total number of participants in the analysis was derived from the database records, including a sample count of 22,812. Study variables Key variables encompassed utilizing mental and behavioral health treatment and counseling services.This included types of mental and behavioral health interventions, frequency of counseling, and demographic information. Demographic Characteristics Variables such as age, gender, race/ethnicity, and family structure were included to examine potential disparities in mental and behavioral health support across different demographic groups. Socioeconomic Status Indicators such as household income were considered to elucidate the influence of socioeconomic factors on access to mental health services.The study used Federal Poverty Level (FPL) set income criteria for assistance eligibility, with 0-99% FPL indicating extremely low income, 100-199% FPL qualifying as lowincome, 200-399% FPL as moderate-income, and 400% FPL or more signifying higher income, making some ineligible for need-based programs but still economically relevant.Information on the child's overall health status, data based on adverse childhood experiences (ACE), and data based on received treatment or not provided context for understanding the intersection between physical and mental well-being. 
Data analysis Descriptive statistics were employed to characterize the study population and provide an overview of mental and behavioral health support utilization.Frequencies and percentages were computed using Excel to describe the prevalence of counseling trends.Subgroup analyses based on demographic, socioeconomic, and child health indicators were performed to identify variations. Ethical considerations The Institutional Review Board (IRB) is known to view the examination of anonymized, publicly accessible data that does not contain any personal identification information as not falling under the category of human subjects research as per the standards outlined in 45 CFR 46.102.Consequently, such an analysis does not necessitate a review by the IRB. Results The study included 22,812 patients during the specified time frame.Overall, the data revealed that 51.7% (CI: 50.2 -53.1%, n=12,686) of children in the sampled population received some form of mental or behavioral health treatment or counseling. Based on gender In our study, which examined gender-based disparities in the treatment of mental and behavioral conditions among children, a concerning percentage did not receive adequate care.Alarmingly, the results indicated that a higher proportion of females, 53.7% (CI: 51.6-55.9%,n=6,166), received treatment or counseling compared to males, 50.1% (CI: 48.2-52.0%,n=6,520).The results also indicated that a higher proportion of males, 49.9% (CI: 48-51.8%,n=5,841), did not receive treatment or counseling compared to females, 46.3% (CI: 44.1-48.4%,n=4,285) (Table 1).This disparity underscores potential gender-specific barriers to accessing mental health support for children.In our study, we observed varying rates of treatment or counseling receipt among children with mental and behavioral conditions stratified by family income.Notably, 48.6% (CI: 45.0-52.2%,n=1,702) of children from families with incomes below 100% of the FPL received such services.The percentage increased marginally to 50.4% (CI: 47.3-53.5%,n=2,126) for those in the 100-199% FPL range and slightly decreased to 49.2% (CI: 46.6-51.8%,n=3,690) for families in the 200-399% FPL category.Surprisingly, the highest rate was found among children in families with incomes at or above 400% FPL, with 57.9% (CI: 55.8-59.9%,n=5,078) receiving treatment or counseling (Figure 4).These findings underscore the complex interplay between socioeconomic factors and access to mental health services for children. 
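As an illustration of the descriptive statistics described in the data-analysis subsection, the short sketch below computes a proportion and a 95% confidence interval for a treatment count. It is illustrative only: the published NSCH percentages are survey-weighted estimates, so a raw unweighted proportion computed this way will not reproduce the figures reported above, and the counts are used purely as example inputs.

from math import sqrt

def prop_ci(successes: int, total: int, z: float = 1.96):
    """Proportion with a Wilson 95% confidence interval."""
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))) / denom
    return p, center - half, center + half

# Unweighted counts used only for illustration (not the weighted NSCH estimates).
p, lo, hi = prop_ci(12_686, 22_812)
print(f"treated: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")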
FIGURE 4: Children with a mental and behavioral condition who receive treatment or counseling based on FPL FPL-Federal poverty level, 0-99% FPL indicating extremely low income, 100-199% FPL qualifying as lowincome, 200-399% FPL as moderate-income, and 400% FPL or more signifying higher income Based on the received treatment In our study, 52.4% (CI: 50.9-53.8%,n=12,314) of children with identified mental and behavioral conditions reported receiving treatment or counseling.This finding suggests a noteworthy proportion benefited from therapeutic interventions.However, a concerning 39.8% (CI: 32.2-47.9%,n=347) of children did not receive any form of treatment, indicating a substantial treatment gap (Table 2).These results underscore disparities in access to mental health care for children with such conditions.Further exploration of contributing factors, such as socio-economic status and geographical location, is warranted to inform targeted interventions.Addressing this treatment gap is crucial to ensuring comprehensive and equitable mental health support for all affected children. Discussion This comprehensive analysis of treatment and counseling trends from the NSCH provides critical insights into the mental and behavioral health support landscape for children in the United States.The findings illuminate disparities across various demographic factors such as age groups, gender, race/ethnicity, socioeconomic factors (FPL), and adverse childhood experiences, shedding light on the multifaceted challenges faced by children and their families. Our study revealed gender-based disparities in the treatment of mental and behavioral conditions among children.Alarmingly, a higher proportion of females received treatment or counseling compared to males, indicating potential barriers that disproportionately affect boys.This finding raises questions about the factors contributing to this gender-specific gap in accessing mental health support [7,[13][14].Future research should investigate the underlying causes, considering societal expectations, stigma, or differential symptom recognition.Culturally sensitive interventions tailored to the unique needs of both genders are imperative to ensure equitable access to mental health care. The variation in treatment or counseling rates across age groups emphasizes the importance of age-specific approaches in addressing children's mental health needs.While adolescents demonstrated a higher treatment rate, younger children (ages 3-5) exhibited a lower percentage of receiving intervention.This underscores the need for targeted interventions tailored to the developmental stages of children, acknowledging the evolving nature of mental health challenges as they grow.Initiatives focusing on early intervention for the youngest age group are crucial to prevent long-term consequences and promote overall well-being [2,7,14]. 
Significant disparities were observed in the percentages of children receiving treatment or counseling across different racial and ethnic groups.Native Hawaiian/Other Pacific Islander, non-Hispanic children had the lowest rates, emphasizing the need for culturally sensitive interventions in these communities.Conversely, American Indian/Alaska Native and non-Hispanic children exhibited higher rates, suggesting potential protective factors or increased awareness.The varying percentages among different racial and ethnic groups underscore the need for targeted strategies that consider cultural nuances to ensure equitable access to mental health services [14,15]. Our study revealed compelling patterns concerning treatment and counseling rates in relation to ACEs.Notably, children with two or more ACEs demonstrated a significantly higher treatment rate, highlighting the critical role of targeted mental health support for those exposed to multiple adversities.Early intervention strategies are paramount in mitigating the impact of childhood adversities, emphasizing the need for comprehensive and trauma-informed approaches in mental health care [14]. The complex interplay between family income and access to mental health services is evident in our findings.Surprisingly, the highest rate of treatment or counseling was observed among children in families with incomes at or above 400% of the FPL.This result prompts further investigation into potential factors, such as healthcare disparities or variations in help-seeking behaviors.Addressing the disparities in mental health support across different income brackets requires a nuanced understanding of the socio-economic factors influencing access to care. While our study indicates that a significant proportion of children with identified mental and behavioral conditions benefited from treatment or counseling, a concerning treatment gap exists, with 39.8% of children not receiving any form of intervention.Exploring contributing factors, such as socio-economic status and geographical location, is essential to inform targeted interventions addressing this gap. Comprehensive and equitable mental health support for all affected children necessitates addressing the root causes of the treatment disparities identified in our study. While the analysis of the NCHS Database provides valuable insights, it is essential to acknowledge certain limitations that may impact the generalizability and depth of the findings.Firstly, the data rely on selfreported information from parents or caregivers, introducing response bias and relying on the accuracy of their perceptions and memories.Moreover, the survey's cross-sectional nature limits the ability to establish causal relationships or assess the longitudinal impact of treatments over time.The database predominantly captures information broadly, potentially overlooking specific nuances in treatment modalities and counseling approaches.Additionally, the study does not account for cultural and socio-economic factors that could influence access to and utilization of mental health services, limiting the generalizability of the findings across diverse populations.The limited time frame of 2016-2020 might not fully capture the dynamic changes in mental health support systems, which could have significant implications for children's mental health. 
Furthermore, the survey may not encompass emerging trends or innovative interventions that have evolved in the field of child mental health after 2020. While the database offers a comprehensive overview, the absence of certain variables, such as the quality of therapeutic relationships or the presence of comorbidities, hinders a holistic understanding of the complexities involved in children's mental health treatment. Therefore, researchers and practitioners should interpret the findings with caution and consider these limitations when applying the results to inform policy, practice, and future research in the realm of children's mental and behavioral health. Conclusions This analysis of mental and behavioral health support for children reveals crucial insights into treatment and counseling trends. Dynamic temporal fluctuations underscore the influence of demographic and socioeconomic factors on children's mental health experiences. Disparities are evident across genders, with higher treatment rates in females, warranting exploration of potential barriers affecting boys. Age-specific trends emphasize the need for tailored early interventions, especially for younger children. Pronounced disparities across racial/ethnic backgrounds and socio-economic categories highlight the necessity for targeted strategies. Unexpectedly higher treatment rates in families with incomes above 400% of the federal poverty level challenge assumptions about socio-economic influences on mental health support. These findings advocate for nuanced, targeted interventions to address gaps in mental and behavioral health support, contributing valuable insights for policymakers, healthcare professionals, and researchers. The study promotes a more equitable approach to children's mental well-being in the United States.
FIGURE 1: Children with a mental and behavioral condition who receive treatment or counseling based on age.
FIGURE 2: Children with a mental and behavioral condition who receive treatment or counseling based on race.
FIGURE 3: Children with a mental and behavioral condition who receive treatment or counseling based on adverse childhood experiences.
TABLE 1: Treatment and counseling rates for children (ages 3-17) with mental/behavioral conditions by demographic variables. These findings underscore the importance of age-specific approaches in addressing children's mental health needs, emphasizing the need for targeted interventions tailored to the developmental stages of this vulnerable population.
TABLE 2: Treatment and counseling rates for children (ages 3-17) with mental/behavioral conditions by socioeconomic and child health indicator variables. Note: Data presented as N (%): sample size (percentage); CI: confidence interval; FPL: Federal Poverty Level.
2024-05-04T15:11:27.743Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "3353d9c0f356e1c52c7e5ff880bf154c37626a27", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/original_article/pdf/225612/20240502-5904-1g862wj.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bd9a14671cbf23e58a331c722e700ec4b757c586", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
212424833
pes2o/s2orc
v3-fos-license
Option Pricing for Path-Dependent Options with Assets Exposed to Multiple Defaults Risk In the present paper, we derive analytical formulas for barrier and lookback options on underlying assets exposed to multiple default risks, which include exogenous counterparty default risk and endogenous default risk. The endogenous default risk drives the asset price to zero, while the exogenous counterparty default risk induces a drop in the asset price, although the asset can still be traded after this default time. An original technique is developed to value the barrier and lookback options by first conditioning on the pre-default and after-default times and then obtaining the unconditional analytic formulas for their prices. We also compare the pricing results of our model with the default-free option model and the exogenous counterparty default risk option model. Introduction Barrier and lookback options are among the most popular path-dependent derivatives traded in exchanges and over-the-counter markets worldwide. The barrier option is a financial derivative contract that is activated or deactivated when the price of the underlying asset crosses a certain level (the barrier) from above or below. The standard floating lookback call (put) option confers on the holder the right to buy (sell) an asset at its lowest (highest) price during the life of the contract. For a complete description of these and other related contracts, refer to Hull [1]. There has been extensive research on option pricing for path-dependent options. For example, Merton [2] and Goldman et al. [3] derive closed-form solutions for barrier and lookback options in the standard Black-Scholes model. Davydov and Linetsky [4] derive analytical formulas for the prices of path-dependent options, such as barrier and lookback options, with the asset price following a Constant Elasticity of Variance (CEV) diffusion model. Kou et al. [5] present analytical solutions for two-dimensional Laplace transforms of barrier option prices under a double exponential jump diffusion model. Multiple default risks include exogenous counterparty default risk and endogenous default risk, where the exogenous counterparty default risk may not be unique. However, for simplicity, we assume that there is a single exogenous counterparty default in what follows. In the financial market, an exogenous counterparty default usually has important consequences in various contexts. In terms of credit spreads, one observes, in general, a positive jump of the default intensity, which is called the contagious jump (see [6]). In terms of a firm's asset value, the default of a counterparty will in general induce a drop of its value process (refer to [7]). Jiao et al. analyze the impact of single and multiple exogenous counterparty risks on the optimal investment problem; we refer to [8,9] for more detail. In this paper, we study the impact of multiple default risks on the option pricing problem. In particular, we focus on the pricing of path-dependent options on an underlying asset subject to multiple default risks, such that the asset suffers an instantaneous loss at the exogenous counterparty default time and the asset price instantaneously becomes zero at the endogenous default time. Ma et al. [10] obtain the explicit valuation of European options with the asset exposed to exogenous counterparty default risk.
Yan derives analytical formulas for lookback and barrier options on underlying assets that are subject to an exogenous counterparty risk in [11]. The explicit pricing formulas for European options with the asset exposed to multiple default risks are given by He [12]. However, to the best of our knowledge, the derivation of analytic formulas for pricing barrier and lookback options under the multiple defaults risk model has not been performed in the previous literature. The main difficulty lies in the fact that the distribution of the first passage time is hard to derive, owing to the multiple defaults and the continuous trading of the underlying asset after the exogenous counterparty default time. We use the conditional density approach of default, which is particularly suitable for studying what goes on after the default and was adopted by Jiao and Pham [8] for the optimal investment problem, to derive the explicit distribution of the first passage time and then obtain the analytic formulas for the valuation of the barrier and lookback options. We also compare the pricing results of the multiple defaults risk model with Merton's [2] default-free option model and Yan's [11] exogenous counterparty default risk option model. The organization of the rest of this paper is as follows. First, a financial model is introduced in Section 2. Next, an analytical formula for a barrier option on an underlying asset exposed to multiple default risks is derived in Section 3. Then, in Section 4, we derive the formula for pricing lookback options under this model. Finally, we conclude the paper in the final section. Financial Model In this section, we consider a financial market model with a risky asset (stock) subject to multiple default risks. We denote the stock by (S_t), t ∈ [0, T]; the dynamics of the stock are affected not only by the possibility of the exogenous counterparty default but also by the possibility of the endogenous default. However, this stock still exists and can be traded after the exogenous counterparty default. Assume (Ω, G, P) is a complete probability space satisfying the usual conditions. Let (W_t), t ∈ [0, T], be a Brownian motion with horizon T < ∞ on the probability space (Ω, G, P) and denote by F = (F_t) the natural filtration of W. Let τ_1 and τ_2 be almost surely nonnegative random variables on (Ω, G, P), representing the exogenous counterparty default time and the endogenous default time of the stock, respectively. Then, writing H^1 and H^2 for the filtrations generated by the default indicator processes of τ_1 and τ_2, the progressively enlarged filtration G = (G_t) = F ∨ H^1 ∨ H^2 represents the structure of information available to the investors over [0, T]. The market model is given by the following stochastic differential equation (SDE), where μ_t, σ_t, and c_t are G-predictable processes; μ_t and σ_t are the drift rate and volatility rate of the stock S, respectively, and c_t is the (percentage) loss on the stock price induced by the default of the counterparty. At default time τ_1, the stock price S is reduced by a percentage c_t. However, the stock price S falls to zero at default time τ_2.
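In the notation just introduced, one way to write dynamics consistent with this verbal description is the jump SDE below, with H^i_t the default indicators; this is a hedged sketch of what equation (1) describes, not necessarily its exact typeset form, and the closing comments record the martingale condition behind the drifts used later in the text.

% Sketch of dynamics consistent with the description of model (1); not necessarily its exact form.
\[
  dS_t \;=\; S_{t^-}\bigl(\mu_t\,dt + \sigma_t\,dW_t - c_t\,dH^1_t - dH^2_t\bigr),
  \qquad H^i_t \;=\; \mathbf{1}_{\{\tau_i \le t\}},\ i = 1,2,
\]
% so that the jump at tau_1 is -c_{tau_1} S_{tau_1^-} (a fractional loss) and the jump at tau_2 is
% -S_{tau_2^-} (the price drops to zero and stays there). With exponential default times of
% intensities lambda_1, lambda_2 and n = E(c), requiring e^{-rt} S_t to be a local martingale under
% the pricing measure forces the pre-default drift r + lambda_1 n + lambda_2 and the post-tau_1
% drift r + lambda_2, which matches the coefficients alpha_1 and a(t) appearing in Section 3 and
% Appendix B below.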
Let us define the following (mutually exclusive and exhaustive) events ordering the default times: (2) Then, according to Pham [13], the dynamics of the stock price process (1) can be decomposed into the following four situations under the physical measure: Situation 1: if the stock is free of any default during the life of the option, i.e., the default times satisfy A ∪ B, then we have Situation 2: if the default times satisfy C ∪ D, then we have When the counterparty defaults, the drift and volatility coefficients (μ, σ) of the stock price switch from (μ^F_t, σ^F_t) to after-default coefficients, which may depend on the default time τ_1. However, when the stock itself defaults, the drift and diffusion coefficients (μ, σ) of the stock price switch from (μ^F_t, σ^F_t) to (0, 0), since the stock price vanishes identically. Here, for simplicity, we assume that μ_1, σ_1, μ_2, and σ_2 are nonnegative constants and that the distribution of c (c < 1) is fixed. Moreover, c, τ_1, τ_2, and W_t are independent, and τ_1 and τ_2 are exponential random variables with parameters λ_1 and λ_2, respectively. For more details, refer to Jiao and Pham [8]. Assume that r is the risk-free interest rate and denote n = E(c). Let us define the G-adapted process: By assuming E[∫_0^T (1/2)|θ_t|^2 dt] < ∞, we define a probability measure Q which is equivalent to P on (Ω, G) with Radon-Nikodym density: under which, by Girsanov's theorem, W̃_t = W_t + ∫_0^t θ_u du is a (Q, G)-Brownian motion. Thus we can rewrite (1) as follows: that is, by changing the measure, the four situations in the decomposition of the stock price S_t under the physical measure P can be transformed into the corresponding four forms under the equivalent martingale measure Q: Situation I: if the stock is free of any default during the life of the option, i.e., the default times satisfy A ∪ B, then we have Situation II: if the default times satisfy C ∪ D, then we obtain Situation III: if the stock has only the exogenous counterparty default during the life of the option, i.e., the default times satisfy E, then we have Situation IV: if the stock has both the endogenous default and the exogenous counterparty default during the life of the option and the exogenous default time is earlier than the endogenous default time, i.e., the default times satisfy F, then we obtain In practice, we may assume c is a discrete random variable to simplify the computation; in what follows, we further assume that c takes value c_i with probability p_i for i = 1, 2, 3, where 0 < c_1 < 1 (loss), c_2 = 0 (no change), and c_3 < 0 (gain).
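Before turning to the analytic formulas, a rough numerical illustration may help fix ideas. The sketch below simulates the stock under the simplified constant-coefficient assumptions just stated and estimates the up-and-out barrier call price of the next section by Monte Carlo with discrete monitoring. It is not the paper's method (which is analytic) and only approximates the continuous barrier; the risk-neutral drifts r + λ_1 n + λ_2 before any default and r + λ_2 after the counterparty default are read off from the coefficients α_1 and a(t) appearing later in the text, and all parameter values below are made up.

import numpy as np

def barrier_call_mc(S0=100.0, K=100.0, B=130.0, T=1.0, r=0.05,
                    sigma1=0.2, sigma2=0.3, lam1=0.1, lam2=0.05,
                    c_vals=(0.3, 0.0, -0.1), c_probs=(0.5, 0.3, 0.2),
                    n_paths=100_000, n_steps=252, seed=0):
    """Crude Monte Carlo for an up-and-out barrier call when the asset is exposed
    to a counterparty default (price drops by a random fraction c at tau_1) and an
    endogenous default (price drops to zero at tau_2). Illustrative only."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    n_bar = np.dot(c_probs, c_vals)              # n = E(c), expected fractional loss
    tau1 = rng.exponential(1.0 / lam1, n_paths)  # counterparty default times
    tau2 = rng.exponential(1.0 / lam2, n_paths)  # endogenous default times
    c = rng.choice(c_vals, size=n_paths, p=c_probs)

    S = np.full(n_paths, S0)
    running_max = np.full(n_paths, S0)
    for k in range(1, n_steps + 1):
        t_prev, t_now = (k - 1) * dt, k * dt
        # regime before any default: drift r + lam1*n + lam2, volatility sigma1
        # regime after the counterparty default: drift r + lam2, volatility sigma2
        after_cpty = tau1 <= t_prev
        mu = np.where(after_cpty, r + lam2, r + lam1 * n_bar + lam2)
        sig = np.where(after_cpty, sigma2, sigma1)
        z = rng.standard_normal(n_paths)
        S = S * np.exp((mu - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z)
        # apply the counterparty jump once, on the step where tau_1 falls
        jumped = (tau1 > t_prev) & (tau1 <= t_now)
        S = np.where(jumped, S * (1.0 - c), S)
        # the endogenous default sends the price to zero for good
        S = np.where(tau2 <= t_now, 0.0, S)
        running_max = np.maximum(running_max, S)

    payoff = np.where(running_max <= B, np.maximum(S - K, 0.0), 0.0)
    return np.exp(-r * T) * payoff.mean()

print(round(barrier_call_mc(), 4))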
Analytic Formula for Pricing Barrier Options In this section, we derive an analytic formula for pricing barrier options under model (1). The barrier options include up-and-out, up-and-in, down-and-out, and down-and-in puts and calls. Since the approaches for deriving the pricing formulas for these kinds of barrier options are similar, we only study the up-and-out barrier call in this section. Consider an up-and-out barrier call option expiring at time T, with strike price K and barrier level B. We assume that K < B and denote the maximum of the stock price up to time T by Y_T = max_{0≤t≤T} S_t. Then, the option knocks out (i.e., the payoff equals 0) if and only if Y_T > B; on the other hand, if Y_T ≤ B the option pays off (S_T − K)^+. In other words, the payoff of the option is (S_T − K)^+ 1_{Y_T ≤ B}. Thus, the risk-neutral price of the up-and-out barrier call option at initial time is given by (17). By computing (17), we obtain the risk-neutral price of the up-and-out barrier call option at time 0 under the multiple defaults risk model as follows. Theorem 1. With the notation above, the risk-neutral price of an up-and-out barrier call option at time 0 under model (1) is given by (18), where Φ is the standard normal distribution function. Proof. See Appendix A. Remark 1. (1) If λ_2 = 0, i.e., there is no endogenous default risk in model (1), then the risk-neutral price of the up-and-out barrier call option at time 0 under this model reduces accordingly, and it is obvious that the value of the up-and-out barrier call option at time 0 under model (1) is then the same as the one at time 0 with the stock exposed only to counterparty risk (see Yan [11]). (2) If λ_1 = 0 and λ_2 = 0, i.e., there is no default in model (1), then the risk-neutral price of the up-and-out barrier call option at time 0 under this model reduces to the corresponding default-free expression. Analytic Formula for Pricing Lookback Options In this section, we price a floating strike lookback option, whose payoff is the difference between the maximum asset price over the lifetime and the asset price at expiration. By formula (15), the risk-neutral price of the lookback option at initial time can be written as (25). According to the calculation of (25), we obtain the following theorem. Theorem 2. The risk-neutral price of the lookback option at initial time under model (1) is given by the following expression. Proof. See Appendix B. Remark 2. (1) If λ_2 = 0, i.e., there is no endogenous default risk in model (1), then the risk-neutral price of the lookback option at time 0 under this model reduces accordingly, and it is obvious that the value of the lookback option at time 0 under model (1) is then the same as the one at time 0 with the stock exposed only to counterparty risk (see Yan [11]). (2) If λ_1 = 0 and λ_2 = 0, i.e., there is no default in model (1), then the risk-neutral price of the lookback option at time 0 under this model becomes (32) with α_1 = (1/σ_1)(r − σ_1^2/2). It is obvious that (32) is the standard Black-Scholes formula for the lookback option. Conclusions The explicit analytical formulas for European call and put options with the asset exposed to multiple default risks have been derived. However, it is still very challenging to obtain the explicit analytical formulas for path-dependent options under this model. This is because the multiple default risks cause difficulty in deriving the density of the first passage time for the maximum asset price. In this paper, the conditional density approach, which was developed by Jiao and Pham [8] for optimal investment, is utilized to overcome the difficulty and derive the formulas for lookback and barrier options when the underlying asset is subject to multiple default risks. Future research lies in deriving analytic formulas for path-dependent options with two underlying assets exposed to loop contagion risks. A. Proof of Theorem 1 We can rewrite (17) as follows: If the default times satisfy Situation I, then the dynamics of the stock price process take the form (11).
By Itô's lemma and (11), we can obtain (A.2) and (A.3), where W^F_t = α_1 t + W_t with α_1 = (1/σ_1)(r + λ_1 n + λ_2 − σ_1^2/2). We define M^F_T = max_{0≤t≤T} W^F_t, so by (15) we derive the corresponding representation. The first term on the right-hand side of (A.1) can then be calculated as follows. Notice that V^{uo}_0 corresponds to the case when there is no default. Then, using the following identity and a similar technique as in [14], V^{uo}_0 can be calculated as (A.6), where δ^±_1(s) is defined in Theorem 1. If the stock has an endogenous default during the life of the option, i.e., the default times satisfy Situation II or Situation IV, then the price of the stock at expiration T is zero; therefore, the second term on the right-hand side of (A.1) is equal to 0. If the default times satisfy Situation III, then the stock price process follows (13). Using Itô's lemma, we can write the solution to SDE (13) for the stock price explicitly; in what follows, k = (1/σ_2) ln(K/x) and b = (1/σ_2) ln(B/x). Notice that this expectation corresponds to the case when there is no default. Therefore, using the same techniques as in the calculation of (A.6), we have the corresponding expression, where δ^±_2(s) are defined in Theorem 1. Then, the third term on the right-hand side of (A.1) can be calculated as follows. According to Shreve [14], the joint density function under P of (W^F_T, M^F_T) is given by (A.13). Substituting (A.11) and x = S^{d_1}_t into (A.9) and using (A.13), we can continue to calculate (A.12) and obtain (A.14), where φ(c_i, w) is defined in Theorem 1. Combining (A.6) and (A.14), we obtain formula (18). Thus, the proof of Theorem 1 is complete. B. Proof of Theorem 2 The risk-neutral price of the lookback option (25) can be rewritten as (B.1). According to [12], the distribution function of the stock price at expiry time T is given by (B.2), with a(t) = (r + λ_1 n + λ_2 − σ_1^2/2)t + (r + λ_2 − σ_2^2/2)(T − t) and b(t) = √(σ_1^2 t + σ_2^2 (T − t)). Combining the distribution function F in (B.2) and the following identity, the second term on the right-hand side of (B.1) can be calculated as (B.4), where we use Σ_{i=1}^{3} p_i = 1 and Σ_{i=1}^{3} p_i c_i = n to obtain the last equality in formula (B.4). Next, we aim to calculate the first term on the right-hand side of (B.1). If the default times satisfy Situation I, then by (11) and Itô's lemma, we can obtain (A.2) and (A.3). Thus, the first term on the right-hand side of (B.5) can be calculated as follows. According to Shreve [14], the density function of M^F_T under P is given by
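the standard density of the running maximum of a Brownian motion with drift (the general result from Shreve [14], quoted here for reference with a generic drift α; in the notation above the relevant drift is α_1, and this is a restatement of the textbook formula rather than the paper's own display):

\[
f_{M_T}(m) \;=\; \frac{2}{\sqrt{2\pi T}}\,\exp\!\Bigl(-\frac{(m-\alpha T)^2}{2T}\Bigr)
\;-\; 2\alpha\, e^{2\alpha m}\,\Phi\!\Bigl(\frac{-m-\alpha T}{\sqrt{T}}\Bigr),
\qquad m \ge 0,\quad M_T=\max_{0\le t\le T}\bigl(\alpha t + W_t\bigr).
\]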
2020-02-20T09:18:30.057Z
2020-02-12T00:00:00.000
{ "year": 2020, "sha1": "5e7f9687204307c87e85d45b962c72b45473c7d9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2020/2418620", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1889e5c03fe348e01108abc9554823ddec53eafd", "s2fieldsofstudy": [ "Mathematics", "Economics", "Business" ], "extfieldsofstudy": [ "Economics" ] }
8138138
pes2o/s2orc
v3-fos-license
The Effect of Complementary Therapy for Hospital Nurses with High Stress Objective: This study was to examine the effect of complementary therapy (CT) for nurses with high stress levels. It was taken before we employ this technique for cancer survivors because cancer patients are a heterogeneous group that requires substantial resources to investigate. Methods: A quasi-experimental design with five groups was employed for this study. The groups were examined whether there were effects for reducing the stress and the differences in effectiveness among four intervention groups and a nonintervention group. Stress relief was measured using pulse rate and blood pressure measurements and the short form of the profile of mood states (POMS-SF). The participants practiced the therapy for 20 min twice per week for 3 weeks. A two-way factorial analysis of variance was used to analyze the data. Results: The study enrolled 98 nurses (92 female and 6 male) with a mean age of 37.3 ± 10.5 years (range: 22–60 years). Fifty-nine nurses had 10 or more years of nursing experience. There were significant differences in pulse rate and the POMS-SF scores. All groups were effective for reducing the stress level of high-stress nurses, whereas four intervention CT groups were not more effective than nonintervention group. Conclusions: The complementary therapies were useful for nurses with high stress levels. Thus, they can be used as a self-management tool for such nurses. Afterward, we will use the CT for cancer survivors to determine whether it can improve the quality of life of cancer patients. Introduction Complementary therapy (CT) is a part of complementary and alternative medicine (CAM) and can be a useful practice facilitated by nurses as part of a holistic care approach for maintaining a high quality of life (QoL) for patients. [1] This research aims to determine the effects of CT for cancer patients to help them maintain high QoL. However, we examined its psychological and physiological effects in nurses with high stress levels before we employed this technique to cancer survivors because cancer patients are a heterogeneous group that requires long study duration and substantial resources for investigation. Thus, this study refers to cancer survivors. Nurses working at hospitals experience physical and psychological stress because they frequently encounter stressful situations, such as advanced, complicated medical treatments, high number of elderly patients, and potentially fatal cases. Therefore, reducing stress is important for nurses to maintain their QoL and work performance. The number of long-term cancer survivors in Japan increases with the advancements in medical treatments. Limited research has focused on educating patients beyond the treatment phase. Many patients are anxious about cancer recurrence, the uncertainty of death, and the side effects of cancer treatment, which lower their QoL. [2][3][4][5] On the other hand, 44.6% of cancer patients use CAM. Most patients use natural products, such as vitamins, minerals, traditional Chinese medicine, and probiotics. The second most frequently utilized CT techniques are mind and body practices and other body-based practices. [6] Thus, CT could be used to help cancer patients maintain their QoL. The mind and body CT practices can be useful for nurses to offer as part of a holistic care approach. The oncology nursing researchers Lengacher et al. and Wyatt et al. 
have demonstrated the effectiveness of mindfulness-based stress reduction and reflexology. Interventions of Lengachers et al., which involve meditation, yoga, body scanning, and walking meditation, are effective for stress reduction and symptom management among 350 breast cancer survivors, [7] whereas Wyatt et al. found that reflexology improved physical functioning and reduced cancer-related symptoms among 451 patients. [8] Some articles written by oncology nurses described the importance and the effectiveness of CAM for cancer patients as cancer care. [9][10][11][12] Yokoi demonstrated [13] that acupressure, music therapy, aromatherapy, deep breathing, and massage, including lymphoid massage, were effective for Japanese patients with motor nerve dysfunction caused by chronic diseases and the final stage of cancer. Case studies presented in the study report of stress nursing [14] presented that progressive muscle relaxation (PMR), relaxation music therapy, exercise therapy, and aromatherapy effectively reduce stress and manage chronic symptoms. Moreover, some studies mentioned that bed rest is beneficial for those experiencing fatigue, which is related to QoL; some physicians recommended bed rest/relaxation for cancer-related fatigue. [15,16] However, psychological reaction to CT was readily revealed by measurement instruments, such as psychological scales, whereas a physiological reaction was not apparent. Thus, the psychological measures provided statistically significant evidence of the effectiveness of these CTs. [17,18] Thus, explaining CTs using physiological data is difficult. Long-term studies with large sample sizes are needed to determine the physiological effectiveness of CTs. Moreover, CT can be used in nursing practice because it does not have any detrimental side effects. CT is a concept of stress reduction. Hence, CT is applied to relieve psychological distress, physical tension, and fatigue by stimulating the hypothalamus, cerebral cortex, and limbic system. [19] Pulse rate and blood pressure measurements and the short form of the profile of mood states (POMS-SF) questionnaire were used to assess the effectiveness of CT for stress reduction. The present study had methods such as relaxing music (RM), electrical heat stimuli (EHS), aroma foot bathing (AFB), and PMR by CT intervention groups for stress reduction because they were easy and harmless to use. Resting on Bed (RB) was used by control group to compare with CTs. Thus, this study aims to examine the effects of CTs for high-stress nurses. The following research questions were explored: • Are there differences in effectiveness before and after practicing CTs of RM, EHS, AFB, PMR, and RB for stress reduction? • Are there differences in effectiveness between the four CTs of RM, EHS, AFB, and PMR as intervention groups and RB as control group? Term explanation including the techniques and methods is as follows: • RM: Participants listen to RM with earphones for 20 min on a bed • EHS: Special EHS machine stimulates six meridian points (Chinese medicine term) on the arms and legs with 40-42°C. Participants apply six electrical stimulating buttons on the six points of arms and legs for 20 min on a bed • AFB: Hot water with 40-42°C is prepared with a few drops of participants' preferred aroma oil in a special foot bathing bucket. 
Participants put their feet into the bucket for 20 min on a bed • PMR: Participants listen and practice the guided PMR exercise that is played on a compact disc on a bed for 20 min • RB: Participants rest on a bed for 20 min. Study design A quasi-experimental design utilized four intervention CT groups, and one nonintervention group was employed for this study. RM, EHS, AFB, PMR, and RB were first examined whether they were effective for stress reduction. The four intervention CTs of RM, EHS, AFB, and PMR and nonintervention RB for the control group were employed to compare the reduction of the stress levels of high-stress nurses. The participants practiced for 20 min twice a week for 3 weeks, which is a total of six practice sessions. Pulse rate and blood pressure were used to determine the physiological effects, and the Japanese version of the POMS-SF was used to study the psychological effects. The measures were conducted before and after performing each practice for 20 min. Ethics This study was approved by the Ethical Review Board of Mie University. The investigator explained the purpose and methods of this study to each participant and the participants provided written informed consent. Participants The eligible participants were nurses with high stress levels caused by busy university hospital work. Nurses who had been taking some medication, such as a tranquilizer, a painkiller, a hypotensive drug, or a drug for mental disorder, and who could not join the study for 3 weeks continuously, were excluded. The study included 98 nurses working at a University Hospital in Central Japan. The study was conducted between September 2011 and July 2012. Procedure and setting The 110 nurses were recruited through advertisements. The research investigators included four faculty members at the nursing school, Mie University, and one research assistant. On their first meeting, the study procedures were explained, and each nurse drew lots to select one of the four CTs and RB to practice. Twenty-two nurses were randomly assigned to each of the five groups: RM, EHS, AFB, PMR, and RB. The participants came to a room at the university after their hospital shift twice per week for 3 weeks to practice one of the interventions and nonintervention with the support of the investigators. Each participant practiced on the prepared bed for 20 min. Data points are in Table 1. Instruments Pulse rate and blood pressure measurements and the POMS-SF subscales were used to examine the effects of RM, EHS, AFB, PMR, and RB. The POMS-SF has been translated into Japanese version [20,21] and is commonly used as a measure of psychological distress. This self-report instrument has achieved wide acceptance as a measure for assessing psychological distress in a variety of healthy and physically and mentally ill populations in Japan. The POMS-SF consists of 30 items grouped into six subscales, including tensionanxiety, depression-dejection, anger-hostility, vigor, fatigue, and confusion. The standardized scores for each item range from 20 to 85 using a 5-point Likert scale. The reliability of the POMS-SF and its subscales was estimated by Cronbach's alpha values, which range from 0.57 to 0.88 (P < 0.01) in a study by Yokoyama, [21] which indicates that POMS-SF has fairly high reliability. The internal consistency of the estimates for this study was quite high across all samples and subscales. Cronbach's alpha was 0.94 for the total mood disturbance score and ranged from 0.84 to 0.95 for each of the six subscales. 
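For readers unfamiliar with the reliability statistic quoted above, the following is a minimal sketch of how Cronbach's alpha is computed from item-level responses; the Likert data below are hypothetical and are not the study's own POMS-SF scores.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items of one scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses for one subscale (5 items, 6 respondents).
subscale = np.array([
    [2, 3, 2, 3, 2],
    [4, 4, 5, 4, 4],
    [1, 2, 1, 1, 2],
    [3, 3, 4, 3, 3],
    [5, 4, 5, 5, 4],
    [2, 2, 3, 2, 2],
])
print(round(cronbach_alpha(subscale), 2))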
Thus, the POMS-SF was reasonable to use for this study.
Data collection and analysis
Each participant performed his or her practice for 20 min twice a week for 3 weeks. Pulse rate and blood pressure were measured by a research assistant, and the POMS-SF data were obtained from the nurses' self-recordings. The data were collected before and after each practice, for a total of 12 data entries for each nurse. A two-way factorial analysis of variance was used to compare pulse rate and blood pressure measurements and the POMS-SF subscales to determine whether differences were found between measurements taken before and after the practices at the six time points. Moreover, a two-way factorial analysis of variance was used to compare the mean pre-post differences of the POMS-SF to investigate the differences in effectiveness among the four CTs and RB. A one-way factorial analysis of variance was used to test the baseline conditions of the participants in each group. The data were analyzed using IBM SPSS Statistics Desktop, Version 22.
Demographic data
The demographic data of the subjects are shown in Table 2. A total of 12 of the 110 nurses enrolled in the study withdrew their participation because they could not continue the practices after their hospital shifts for 3 weeks: some were too ill to attend, some were too late to join the practice time, and some forgot to practice. The remaining 98 nurses consisted of 92 females and 6 males with a mean age of 37.2 ± 10.5 years (range 22-60). About 59 nurses had 10 or more years of nursing experience, 32 nurses had <5 years, and 10 nurses had 3-5 years [Table 2]. After participant drop-out, 19, 20, 20, 20, and 19 nurses practiced RM, EHS, AFB, PMR, and RB, respectively. Pulse rate and blood pressure measurements and the POMS-SF subscales before the practice were compared among the five groups to assess the baseline conditions of the nurses in each group. No significant differences were found among them (P = 0.213-0.899).
Effects of relaxing music, electrical heat stimuli, aroma foot bathing, progressive muscle relaxation, and resting on bed
The effects of the five groups on pulse rate and blood pressure measurements and the POMS-SF subscales are shown in Tables 3 and 4. A significant difference was found in the participants' pulse rate before and after practicing in each of the five groups in the main effect with practices (P = 0.001-0.017). Significant differences in blood pressure before and after the practices were seen only for AFB (P < 0.001 for systolic, P = 0.036 for diastolic) in the main effect with practices, although the blood pressure measurements of the other groups tended to be lower after practicing [Table 3]. Significant differences were observed in all six POMS-SF subscales (P = 0.001-0.008), except for vigor (P = 0.297) of AFB, in the main effect with practices when values were compared before and after the practice in each of the five groups. Moreover, significant differences were observed in some of the six POMS-SF subscales in the main effect with times and in the interaction [Table 4].
Relationships between four complementary therapies and resting on bed
The changes in the six POMS-SF subscales after the practices at the six time points are shown in Table 5. A significant difference in the POMS-SF subscales was not seen between the four CTs of RM, EHS, AFB, and PMR as intervention groups and RB as the control group in the main effect with practices, except for vigor (P = 0.006) between AFB and RB.
However, a trend of difference was identified in confusion (P = 0.077) between AFB and RB in main effect with practices. Moreover, significant differences were found in some of the six POMS-SF subscales in the main effect with time. Demographic characteristics The majority of nurses in this study are females (93.9%) and had 10 or more years of nursing experience (60.2%). The sample represents what is common in Japan. About 94.4% of all Japanese nurses are females. [22,23] Nurses at university hospitals tend to remain in hospitals and gain many years of experience in their career [24] like our study nurses having more than 10 years of experience. The nurses at the university hospital who participated in this study may be interested in practicing this type of therapy to reduce stress or for other reasons. Moreover, the baseline conditions of the nurses in each group were almost the same to participate in the study. Effects of relaxing music, electrical heat stimuli, aroma foot bathing, progressive muscle relaxation, and resting on bed Before and after the 20 min practice for RM, EHS, AFB, PMR, and RB, the significant differences in pulse rate and the POMS-SF subscales in 6 time points indicated that the four CTs and RB were effective for decreasing pulse rate and reducing tension-anxiety, depression-dejection, anger-hostility, fatigue, and confusion, as well as vigor excluding AFB. The significant difference in blood pressure showed that only AFB was effective for lowering both systolic and diastolic blood pressures. However, four CTs tended to have low blood pressures after practice. Therefore, AFB affects both physiological and psychological relaxation. Moreover, AFB did not decrease vigor significantly. Vigor is an inverse concept, which means that a high score is beneficial. However, the decrease in vigor after practice may reflect a state of deep relaxation brought about by the practice. The nurses were tired from the hard physical work performed during their hospital shift. Thus, AFB may minimally calm down vigor and keep it moderate and not falling into deep relaxation. Thus, AFB may be most effective for the nurses to relieve themselves after stressful work. All four CTs and RB were effective for stress reduction. The results are consistent with the following studies. Miki found that the stress levels of 19 third year nursing students who practiced PMR for 3 weeks were effectively reduced; the effectiveness of PMR was measured using the POMS-SF. [25] The present study showed that the six subscales' scores of POMS-SF decreased after practicing PMR. The Complementary and Alternative Medicine Guide Book of Cancer in Japan shows that the POMS-SF scale was useful to measure psychological states in describing the case studies of aroma therapy and music therapy. [6] A study of music therapy showed that nurses working in stressful situations at a hospital achieved a reduction in stress after listening to classical music during their day shift break time; their blood pressure lowered after the intervention. [26] Hence, music therapy was effective physiologically by lowering blood pressure; this finding differs slightly from our results. This difference could be because of the differences between RM and classical music or because of other reasons. Further study on music therapy is needed because it involves various influencing factors. EHS was used for 12 breast cancer patients with peripheral neurological numbness caused by chemotherapy treatment. 
[27] The study found that EHS was not significantly effective for improving physiological and psychological parameters although the patients were satisfied with EHS therapy. The study shows the same physiological result as the present study, but a slightly different psychological result in the POMS-SF. Further study of EHS with a large sample is needed to investigate psychological and physiological effects. Relationships between four complementary therapies and resting on bed No significant differences were observed in the changes of the POMS-SF subscales in 6 time points between the four CTs of RM, EHS, AFB, and PMR as intervention groups and RB as a control group. The four CTs of RM, EHS, AFB, and PMR were not more effective than RB, whereas RM, EHS, AFB, PMR, and RB were effective for stress reduction. However, differences were found in vigor and confusion of AFB unlike with RB. Therefore, AFB was a more effective CT than RB in vigor and confusion for stress reduction. Thus, AFB is most useful for stress reduction. Conclusion RM, EHS, AFB, PMR, and RB were effective for reducing the stress level of highly stressed nurses based on both physiological (i.e. pulse rate and blood pressure) and psychological (i.e. POMS-SF subscales) measures, whereas RM, EHS, AFB, and PMR were not more effective than RB. The four CTs were found to relieve tension-anxiety, depression-dejection, anger-hostility, fatigue, confusion, and vigor in high-stress nurses. AFB was most effective when considering all the psychological and physical measurements included in this study. CTs could be used for highly stressed nurses as a form of self-management and as a nursing skill to reduce stress in distressed patients. Given the results of this study, hospitals may provide CT for nurses with high stress levels to improve their psychological and physiological states. This study has some limitations. The 3-week duration of the CT practice was a long time for busy nurses to practice twice per week after their hospital shift. Thus, more sensitive ways of providing CT to patients will be considered because they are in a more delicate psychological and physical state. Other limitations include the small number of participants in each CT group. Therefore, a large sample will be taken in future studies to improve precision. In the future, the CTs will be studied for their effectiveness in improving the QoL of cancer survivors. Financial support and sponsorship A Grant-in-Aid for Scientific Research (A) from Japan Society for the Promotion of Science 21249095. Conflicts of interest There are no conflicts of interest.
Azimuth Full-Aperture Processing of Spaceborne Squint SAR Data with Block Varying PRF
The block varying pulse repetition frequency (BV-PRF) scheme applied to spaceborne squint sliding-spotlight synthetic aperture radar (SAR) can resolve large range cell migration (RCM) and reduce azimuth signal non-uniformity. However, in the BV-PRF scheme, different raw data blocks have different PRFs, and the raw data in each block are insufficiently sampled. To resolve these two problems, a novel azimuth full-aperture pre-processing method is proposed to handle the SAR raw data formed by the BV-PRF scheme. The key point of the approach is the resampling of block data with different PRFs and the continuous splicing of azimuth data. The method mainly consists of four parts: de-skewing, resampling, azimuth continuous combination, and Doppler history recovery. After de-skewing, the raw data with different PRFs can be resampled individually to obtain a uniform azimuth sampling interval, and an appropriate azimuth time shift is introduced to ensure the continuous combination of the azimuth signal. Consequently, the resulting raw data are sufficiently and uniformly sampled in azimuth and can be well handled by classical SAR-focusing algorithms. Simulation results on point targets validate the proposed azimuth pre-processing approach. Furthermore, compared with methods that process SAR data with continuous PRF, the proposed method is more effective.
Introduction
Spaceborne synthetic aperture radar (SAR) is an indispensable imaging technology for acquiring two-dimensional (2-D) high-resolution images of the Earth's surface [1]. High-resolution spaceborne SAR has been widely applied to ship detection [2][3][4][5] in both civilian and military marine monitoring tasks [6][7][8] such as detecting illegal stowaways, maritime management and coastal defense reconnaissance. The geometric resolution is one of the most important aspects of spaceborne SAR. The sliding-spotlight mode [9], which is achieved by steering the azimuth beam from fore toward aft so that the illuminated area moves with a speed less than that of the radar platform, can considerably extend the synthetic aperture time to improve the azimuth resolution. However, in addition to the desired higher azimuth resolution, some aspects still need to be clarified for future spaceborne SAR missions [10]. The squint sliding-spotlight mode [11,12] can obtain multiple images of a desired area with a fine azimuth resolution and different observation angles [13,14], and it is expected to be widely adopted in future spaceborne SAR missions.
Usually, the sliding-spotlight SAR system working with a large squint angle will induce large-range cell migration (RCM) [15].Many researchers have carried out a series of studies on the mentioned problem, but some issues still exist.Firstly, echo data with fixed pulse-repetition frequency (PRF) cannot be fully obtained, and the effective range swath width would be obviously reduced [16][17][18][19].To resolve this problem, in [20], a continuous PRF variation scheme is proposed to achieve high-resolution wide-swath imaging.The continuously varying PRF (CV-PRF) technology applied to the squint SAR is proposed in [21] to resolve the effect of the large RCM on the reduced swath width.However, there are too many azimuth sampling intervals in the CV-PRF transmission scheme, which makes the subsequent azimuth signal uniform reconstruction difficult and require a lot of computing resources.Furthermore, the block varying PRF (BV-PRF) transmission scheme is proposed in [22] to avoid the above mentioned two problems simultaneously.The existing BV-PRF scheme research applied to squint SAR only involves the analysis of the basic principle; there is no complete theoretical research on its specific design and application.Specifically, in the squint sliding-spotlight mode, when the squint angle increases/decreases, the whole echo data of the imaged swath moves forward/backward in the echo-receiving window within a single pulse repetition interval (PRI).If the scattered echoes of the whole swath are received completely in the echo-receiving window within a single PRI, its corresponding PRF value remains unchanged, and the pulse transmission with the same sampling interval forms a burst block.If some of the scattered echoes move out of the front edge or trailing edge of the available echo-receiving window, its corresponding PRF value will be changed to obtain all the scattered echoes of the whole imaged swath.Since the instantaneous echoreceiving duration of the whole imaged swath in the spaceborne squint sliding-spotlight mode is relatively small and the RCM caused by the steering squint angle during the whole acquisition interval is much larger [23], the CV-PRF transmission scheme is no longer appropriate.Therefore, compared with the fixed PRF scheme, the BV-PRF transmission scheme makes the position of the blind area change in blocks along the azimuth direction; thus, it can solve the obviously reduced swath width in the fixed PRF scheme.Furthermore, compared with the CV-PRF scheme [24,25], the BV-PRF transmission scheme makes full use of the advantages of the relatively small instantaneous echo duration and the relatively large pulse repletion interval in spaceborne squint sliding-spotlight SAR, which can also greatly reduce the non-uniformity of azimuth signal in the CV-PRF scheme [26]. 
In this paper, the concrete design of the BV-PRF scheme applied to the spaceborne squint sliding-spotlight mode is proposed.The design of the BV-PRF scheme is mainly divided into four steps: (1) initial PRF setting, (2) the calculation of the instantaneous echoreceiving window position, (3) the determination of the sampling frequency range during azimuth beam steering and (4) judgment criterion and sampling frequency increment design.However, the azimuth processing for echo data formed by BV-PRF scheme brings two problems.Firstly, the BV-PRF scheme will cause different azimuth data blocks to arise with different PRFs.Secondly, the Doppler spectrum aliasing caused by the squint angle and azimuth beam steering in each data block will be introduced in the 2-D frequency domain [27][28][29].To resolve the above two problems, a novel azimuth pre-processing method is proposed, and the key point of the method is the resampling of block data with different PRFs and the continuous splicing of azimuth data.However, the azimuth sampling frequency of each data block is insufficient due to the squint angle and azimuth beam steering.Firstly, de-skewing and de-ramping in the range frequency domain are performed to eliminate 2-D spectrum skewing caused by the squint angle and the extended bandwidth caused by azimuth beam steering, respectively.Afterwards, the total Doppler bandwidth of the raw data in each block is completely limited within the designed azimuth sampling frequency.Consequently, azimuth data in each block are resampled to obtain the same uniform azimuth sampling interval and facilitate the following azimuth data combination.Furthermore, an appropriate azimuth time shift should be introduced to ensure the continuity of the whole azimuth signal.Finally, re-skewing is introduced to recover the skewed 2-D spectrum.Compared with azimuth reconstruction methods of NUDFT [30], BLU [31], sinc interpolation [32] and multi-channel reconstruction method [33] with the CV-PRF scheme, the proposed azimuth pre-processing method for the squint sliding-spotlight SAR raw data with BV-PRF scheme is more efficient, since the proposed method uses only complex multiplication and fast Fourier transform (FFT) operations, without any matrix inversion and interpolation operations.Therefore, the proposed azimuth reconstruction method has the advantages of low computation, flexible processing and avoiding obvious system performances reduction. This article is organized as follows.In Section 2, three pulse transmission sequences including fixed PRF, CV-PRF and BV-PRF in squint sliding-spotlight SAR are compared and discussed.The BV-PRF design is introduced in detail, and echo signal properties in the squint sliding-spotlight SAR are analyzed in Section 3. The proposed azimuth fullaperture pre-processing method for azimuth sufficient and uniform sampling is presented in Section 4. In Section 5, a simulation experiment on point targets is carried out to validate the proposed pre-processing method.Finally, this paper is discussed and concluded in Sections 6 and 7. 
Range Cell Migration Analysis for Squint SAR
The geometric model of squint sliding-spotlight SAR is shown in Figure 1. Assuming that the azimuth beam scanning direction from aft to fore is positive, the instantaneous squint angle θ_sq(η) changes from large to small during the whole acquisition interval T, as shown in Figure 1a. In Figure 1b, H is the track height, R_e is the Earth radius, η is azimuth time, θ is the looking angle, γ is the incident angle, α is the geocentric angle, and R is the distance from the radar to the target.
As shown in Figure 1, the range history from radar to target can be calculated from the spherical geometry of Figure 1b, with the equivalent looking angle
γ_eq(γ, η, Δθ_a) = arccos[ cos γ · cos(θ_sq + ω_r η + Δθ_a) ],
where γ ∈ [γ_near, γ_far] is the looking angle, γ_near and γ_far are the near and far looking angles, respectively, θ_sq is the squint angle in the middle of the acquisition interval, ω_r is the azimuth beam rotation rate, Δθ_a ∈ [−θ_a/2, θ_a/2] indicates the relative position in the illuminated azimuth beam, and θ_a is the exploited azimuth beam interval.
The echo duration τ_echo of the whole desired range swath at any azimuth time η is determined by the near- and far-range delays together with the transmitted pulse duration τ_r, and the echo duration τ_whole during the whole azimuth acquisition interval can be computed in the same way over all azimuth times.
To demonstrate the large RCM caused by azimuth beam steering during the whole acquisition interval, two proportion factors, Γ_1 = τ_echo·PRF and Γ_2 = τ_whole·PRF, are shown in Figure 2, where the swath width is 20 km and γ_near is 32°. The factor Γ_1 is the ratio between the instantaneous echo duration and the pulse repetition interval (PRI), while Γ_2 is the ratio between the whole echo duration and the PRI. As shown in Figure 2, the instantaneous echo duration is much smaller than the PRI, whereas the whole echo duration of the swath can exceed the PRI, especially for a large squint angle.
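Both factors are simple products of an echo duration and the PRF. The sketch below evaluates them for assumed durations, only to illustrate why Γ_1 stays well below 1 while Γ_2 can exceed 1 at a large squint angle; the numerical values are placeholders, not the simulation parameters behind Figure 2.

```python
def proportion_factors(tau_echo: float, tau_whole: float, prf: float):
    """Gamma_1 = tau_echo * PRF and Gamma_2 = tau_whole * PRF (ratios to the PRI)."""
    return tau_echo * prf, tau_whole * prf

# Assumed values: instantaneous echo of a ~20 km swath ~0.15 ms,
# whole-aperture echo spread ~0.6 ms, PRF ~2500 Hz.
g1, g2 = proportion_factors(0.15e-3, 0.6e-3, 2500.0)
print(f"Gamma_1 = {g1:.2f}, Gamma_2 = {g2:.2f}")  # ~0.38 and ~1.50
```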
Using the fixed PRF sequence to obtain the squint sliding-spotlight mode data, the echo can easily slide out of the receiving window, as shown in Figure 3a. The CV-PRF transmission sequence was proposed to resolve the large RCM caused by a large squint angle in the squint sliding-spotlight mode; its operated PRF varies continuously with the squint angle to keep the instantaneous echo in almost the same position, as shown in Figure 3b. However, because of the energy constraint in a spaceborne mission, the swath width in the sliding-spotlight mode is usually a little more than 10 km, and the CV-PRF transmission scheme is not necessary in the squint sliding-spotlight mode. In addition, CV-PRF produces various sampling frequencies in azimuth, which requires complex computation in the subsequent reconstruction to obtain a uniformly sampled azimuth signal. The BV-PRF transmission scheme, which is a compromise between the fixed PRF and CV-PRF schemes, as shown in Figure 3c, can extend the imaged swath and at the same time reduce the non-uniformity of the azimuth signal present in the CV-PRF scheme. Assuming that the central squint angle in the sliding-spotlight mode is positive, the RCM value gradually decreases and the echo data move forward along the receiving window while the azimuth beam is steered from fore toward aft. When the front edge of the echo data does not reach the trailing edge of the transmitted pulse, the same PRF value is adopted; otherwise, the operated PRF is changed in order to prevent the desired echo data from moving out of the receiving window. In terms of non-uniformity, the BV-PRF scheme is milder than the CV-PRF scheme, and the echo data of the whole imaged swath can be acquired with a limited number of PRFs. Consequently, the azimuth processing flexibility of the echo data is greatly improved with the BV-PRF transmission scheme.
Design of the BV-PRF Scheme
The flowchart of the design of the BV-PRF sequence in the spaceborne squint sliding-spotlight mode is shown in Figure 4. As mentioned, the BV-PRF design adds a criterion equation, testing whether the front edge of the scattered echo exceeds the receiving window, to control the variation of the PRF with the squint angle. At first, according to the requirements of the range swath width and azimuth resolution, the initial pulse repetition frequency PRF_ini of the system should be determined; PRF_ini is set from the Doppler bandwidth B_a, where α_s is the azimuth oversampling factor, v_s is the platform speed, and L_a is the azimuth antenna length. Afterwards, the minimum value of R(γ_near, −θ_a/2; η) can be determined. Considering the case in which the echo data are received after n_s pulses have been transmitted, the number n_s is obtained with a floor operation ⌊·⌋, where c is the speed of light. Afterwards, the admissible range of the PRI, [PRI_min, PRI_max], needs to be determined so that the echo of the swath falls inside the receiving window, taking the guard interval τ_p into account.
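In the usual formulation, the initial PRF and the pulses-in-flight count follow PRF_ini = α_s·B_a with B_a ≈ 2 v_s/L_a, and n_s = ⌊2 R·PRF/c⌋ evaluated at the minimum of R(γ_near, −θ_a/2; η); these standard forms may differ in detail from the authors' exact expressions. The sketch below evaluates them with assumed placeholder values (platform speed, antenna length, near range), chosen only to land in the same PRF range as the designed sequence.

```python
import math

def initial_prf(v_s: float, L_a: float, alpha_s: float = 1.4) -> float:
    """PRF_ini = alpha_s * B_a with the usual approximation B_a ~ 2*v_s/L_a."""
    B_a = 2.0 * v_s / L_a
    return alpha_s * B_a

def pulses_in_flight(R_min: float, prf: float, c: float = 3.0e8) -> int:
    """Number of pulses transmitted before the nearest echo returns: floor(2*R*PRF/c)."""
    return math.floor(2.0 * R_min * prf / c)

prf_ini = initial_prf(v_s=7600.0, L_a=8.5)            # assumed placeholder parameters
print(f"PRF_ini ~ {prf_ini:.0f} Hz")                   # ~2504 Hz with these values
print("n_s =", pulses_in_flight(R_min=850e3, prf=prf_ini))
```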
As the squint angle gradually changes from large to small, the pulse repetition interval of the system changes from PRI_max to PRI_min, and the range of the pulse repetition frequency is [PRF_min, PRF_max]. Assuming that the initial scanning angle is 28.2° and the terminal scanning angle is 21.8°, the position of the echo data in the receiving window gradually moves forward during azimuth beam scanning. The pulse interval PRI remains unchanged if the pulse signal transmitted by the radar meets the judgment condition (10). When the front edge of the reflected echo arrives at the front edge of the receiving window, the judgment condition (10) is no longer applicable. Therefore, the value of PRI needs to be decreased by ΔPRI so that the judgment condition corresponding to the pulse repetition interval PRI − ΔPRI is met; it should be noted that ΔPRI cannot exceed PRI_max − PRI_min.
The PRF design results of BV-PRF and CV-PRF are given in Figure 5. As shown in Figure 5a, the number of PRFs in the BV-PRF transmitting scheme is 3, and the scanning angle ranges corresponding to 2462 Hz, 2511 Hz and 2562 Hz are 26.3°~28.2°, 24.2°~26.2° and 21.8°~24.1°, respectively. The operated PRF of the CV-PRF scheme, by contrast, changes continuously from the initial scanning angle of 28.2° to the terminal scanning angle of 21.8°, as shown in Figure 5b; when the squint angle is about 26.3°, the PRF is 2462 Hz, and when the squint angle is about 24.2°, the PRF is 2511 Hz.
In order to validate the raw-data acquisition capabilities of both the BV-PRF and CV-PRF transmitting schemes, SAR raw data simulation experiments were carried out; the designed scene is shown in Figure 6a. SAR raw data simulation results with fixed PRF, CV-PRF and BV-PRF are shown in Figure 6b-d, respectively. As the RCM increases rapidly with the squint angle, the raw data of the whole imaged scene cannot be fully obtained with the fixed PRF for targets located at the edge of the swath, as shown in Figure 6b. The distortion of the raw data with CV-PRF is removed, and the resulting raw data are shown in Figure 6c. However, the continuously varying PRF makes the subsequent uniform reconstruction of the azimuth signal difficult. The raw data of the whole scene with the BV-PRF scheme can be obtained by changing the PRF three times, as shown in Figure 6d. The BV-PRF scheme takes advantage of the short instantaneous echo duration to reduce the number of PRF changes, which makes the following azimuth data reconstruction easy.
Properties of Echo Signal with BV-PRF
The imaging geometry of spaceborne squint sliding-spotlight SAR data with BV-PRF is illustrated in Figure 7. During the whole raw data acquisition interval, the azimuth beam scanning from front to back makes the beam footprint move with a speed less than that of the radar platform. θ_sq,m is the central squint angle of the m-th data block, P is a point target located at position (X, R_0) in the imaged swath, R_0 and R_rot are the slant ranges from the sensor path to the imaged target and to the virtual rotation center, respectively, and T_m is the whole acquisition interval of the data block corresponding to PRF_m. The azimuth beam scanning at a constant rotation rate ω_r leads to a steering factor A, defined in terms of the ground velocity v_g and the moving speed v_f of the azimuth antenna beam center in the imaged scene.
The instantaneous slant range R_m(η_b,m) is expanded as a third-order Taylor series, where R_c = R_0/cos θ_sq,m is the slant range from the satellite platform to the center of the imaged scene, η_b,m = (−N_a,m/2, ..., N_a,m/2 − 1)/PRF_m + t_m, N_a,m is the number of azimuth samples of the m-th azimuth block, t_m is the time shift of the azimuth time of the m-th block data relative to the entire azimuth signal, η_x = X/v_g is the azimuth position of the target, and p_cubic,m is the coefficient of the cubic term of the slant range expansion.
The azimuth signal of the point target P(X, R_0) in the squint sliding-spotlight mode corresponding to the m-th data block is expressed as in [34]. The cubic-order term in (13) is neglected for simplicity, without losing the rationale of the discussion, and it is still compensated in the following 1-D azimuth signal analysis. Using the principle of stationary phase (POSP), the azimuth signal spectrum can then be expressed in terms of a complex constant A_1, the Doppler frequency f_η,m of the m-th data block, and the azimuth beam bandwidth B_f exploited for azimuth focusing.
In squint sliding-spotlight SAR, the total Doppler bandwidth is increased owing to the azimuth beam scanning, and the instantaneous azimuth beam Doppler center varying rate k_rot,m of the m-th data block is given by (17). According to (17), variation curves of the instantaneous Doppler center varying rate within an appropriate azimuth beam scanning range are shown in Figure 8 for the side-looking and squint cases, respectively. It can be seen in Figure 8 that when the squint angle is 0°, the change in the instantaneous Doppler center varying rate k_rot can be ignored, whereas when the squint angle is 25°, the change in k_rot reaches 160 Hz/s. Therefore, as the squint angle changes, the change in k_rot needs to be considered, and it varies nonlinearly within an appropriate azimuth beam scanning range.
The Doppler frequency f_η,m in the m-th data block can be computed as in (18), where f_c is the carrier frequency and f_τ ∈ [−B_r/2, B_r/2] is the range frequency. According to (17) and (18), the azimuth time-frequency diagram of spaceborne squint sliding-spotlight SAR data with BV-PRF is shown in Figure 9.
The total Doppler bandwidth of the squint sliding-spotlight SAR data with BV-PRF can be computed as in (19), where B_r is the transmitted pulse bandwidth. From (19), it can be seen that the total Doppler bandwidth of each data block is composed of three main parts: the azimuth beam bandwidth B_f,m, the extended Doppler bandwidth B_rot,m caused by azimuth beam steering, and the additional bandwidth B_sq,m caused by the squint angle. In order to analyze the influence of B_rot,m and B_sq,m on the total Doppler bandwidth and on the azimuth spectrum aliasing in each raw data block, the ratio of the additional Doppler bandwidth to the azimuth beam bandwidth, varying with the instantaneous squint angle under different pulse bandwidths, is shown in Figure 10.
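Under the usual narrow-beam approximations the three contributions can be budgeted explicitly: the Doppler centroid varies with range frequency roughly as 2 v_s sin θ_sq (f_c + f_τ)/c, so the squint term spreads by about B_sq ≈ 2 v_s sin θ_sq B_r/c, while beam steering adds roughly B_rot ≈ |k_rot|·T_m over a block of duration T_m. The sketch below sums the three parts; the 160 Hz/s rate is taken from the text, but the platform speed, beam bandwidth and block duration are assumed placeholders rather than the system parameters used in the paper.

```python
import math

def doppler_budget(v_s, theta_sq_deg, B_r, B_f, k_rot, T_m, c=3.0e8):
    """Rough total Doppler bandwidth: beam + steering + squint-induced terms."""
    theta = math.radians(theta_sq_deg)
    B_sq = 2.0 * v_s * math.sin(theta) * B_r / c   # spread of the centroid over the range band
    B_rot = abs(k_rot) * T_m                        # Doppler-centroid drift over the block
    return B_f + B_rot + B_sq, B_sq, B_rot

total, B_sq, B_rot = doppler_budget(v_s=7600.0, theta_sq_deg=27.3, B_r=150e6,
                                    B_f=1800.0, k_rot=160.0, T_m=2.0)
print(f"B_sq ~ {B_sq:.0f} Hz, B_rot ~ {B_rot:.0f} Hz, total ~ {total:.0f} Hz")
# With a block PRF of ~2500 Hz such a total is aliased, hence the de-skew/de-ramp steps.
```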
Generally, the azimuth over-sampling ratio between the PRF and the azimuth beam bandwidth in spaceborne sliding-spotlight SAR is set to 1.3~1.5. When the transmitted pulse bandwidth is greater than 150 MHz, the sum of the squint additional bandwidth and the azimuth beam bandwidth exceeds the azimuth sampling frequency of the system, as shown in Figure 10. For the prior data block with a central squint angle of 27.3°, the total Doppler bandwidth at a pulse bandwidth of 150 MHz is 7176 Hz, which is greater than the operated PRF. For the latter data block with a central squint angle of 25.3°, the total Doppler bandwidth at 150 MHz is 7225 Hz, also greater than the PRF. Therefore, the Doppler spectrum aliasing of each data block caused by the squint angle and azimuth beam steering must be eliminated before azimuth combination.
Because the cubic term of azimuth time in the expansion of the range history influences azimuth focusing, the azimuth signal of the m-th data block is rewritten as (20), with the cubic term taken into account (neglecting constants and azimuth amplitude weighting) [16].
The block diagram of 1-D azimuth signal processing for spaceborne squint sliding-spotlight SAR echo data generated by the designed BV-PRF scheme is shown in Figure 11, which mainly includes three parts: phase compensation, azimuth resampling and azimuth data combination. Firstly, the first- and third-order term phase compensation should be performed in the azimuth time domain. Afterwards, an azimuth resampling operation is required to obtain an azimuth signal with uniform sampling. Furthermore, phase-shift compensation is executed in the azimuth frequency domain to guarantee the continuous azimuth combination.
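One simple way to realise the "complex multiplications and FFTs only" philosophy for the resampling and time-shift steps described above is Fourier-domain resampling of each block to a common rate PRF_uni, followed by a linear-phase multiplication in the Doppler domain that applies the block's time shift. The sketch below is a generic illustration of that idea and not necessarily the authors' exact implementation; the block PRFs are the designed values, while the common grid rate and the shift are arbitrary placeholders.

```python
import numpy as np
from scipy.signal import resample

def resample_block(block: np.ndarray, prf_m: float, prf_uni: float) -> np.ndarray:
    """Fourier-domain resampling of one azimuth block from prf_m to prf_uni."""
    n_old = block.shape[-1]
    n_new = int(round(n_old * prf_uni / prf_m))   # same time span, new sample count
    return resample(block, n_new, axis=-1)

def apply_time_shift(block: np.ndarray, t_shift: float, prf_uni: float) -> np.ndarray:
    """Delay the block by t_shift seconds (circularly) via a linear Doppler-domain phase."""
    f_eta = np.fft.fftfreq(block.shape[-1], d=1.0 / prf_uni)
    spec = np.fft.fft(block, axis=-1) * np.exp(-2j * np.pi * f_eta * t_shift)
    return np.fft.ifft(spec, axis=-1)

# Hypothetical use: bring a 2462 Hz block onto an assumed common 2562 Hz grid, then shift it.
block = np.random.randn(1024) + 1j * np.random.randn(1024)
uniform = apply_time_shift(resample_block(block, 2462.0, 2562.0),
                           t_shift=0.05, prf_uni=2562.0)
```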
To eliminate the range walk term caused by the first-order term of azimuth time in (20), the phase compensation function g_m(η_b,m) is multiplied in the m-th data block. Afterwards, to carry out matched filtering successfully, the cubic term of azimuth time in (20) must be compensated with the phase compensation function h_m(η_b,m) for the m-th data block. After the linear and cubic term phase compensation, the data blocks with different PRFs need to be resampled to obtain a uniform sampling rate. For the continuous azimuth combination of the different azimuth data blocks, an azimuth time shift should be introduced in the azimuth time domain after the azimuth resampling. The azimuth time shift is applied in the Doppler domain by multiplication with a phase-shift function, where t_uni,m is the time shift of the time center of the m-th block data relative to the signal center of the whole bandwidth after the azimuth resampling, and f_uni,m is the azimuth frequency after the azimuth resampling.
Figure 12 shows the results of 1-D azimuth compression without and with the azimuth resampling of the proposed method. Without removing the azimuth non-uniform sampling before the azimuth combination, the corresponding amplitude spectrum is discontinuous, as shown in Figure 12a, and the azimuth compression result shows pairs of false targets, as shown in Figure 12b. However, after the azimuth resampling of the data blocks with different PRFs, the spectrum is well reconstructed and the false targets are suppressed, as shown in Figure 12c,d.
Azimuth Pre-Processing in the 2-D Domain
The squint additional bandwidth and the beam rotation bandwidth of the raw data in the BV-PRF spaceborne squint sliding-spotlight mode gradually increase with the squint angle, which makes the total Doppler bandwidth of each sub-block span several designed azimuth sampling frequencies. In addition, the BV-PRF scheme results in azimuth non-uniform sampling. As a result, azimuth up-sampling, the traditional two-step pre-processing algorithm and the full-aperture focusing method [14] become inapplicable. To solve these two problems, an azimuth pre-processing approach combining the BV-PRF scheme and full-aperture processing is proposed in this section. The block diagram of the proposed pre-processing approach is shown in Figure 13. The proposed azimuth pre-processing method is mainly divided into four processing steps: range-frequency-dependent de-skewing and de-ramping, azimuth signal resampling, azimuth data combination and Doppler history recovery. The 2-D echo signal of the m-th data block is then written in terms of the frequency modulation rate K_r of the transmitted pulse.
Since the Doppler center of each data block changes with the range frequency, the spectrum in the 2-D frequency domain is skewed. This means that a de-skewing operation must be implemented in the range frequency domain: a transfer function is multiplied after the range Fourier transform in each sub-block to remove the Doppler bandwidth caused by the spaceborne squint angle. After the de-skewing processing, the distorted 2-D spectrum becomes flat, and the total Doppler bandwidth of squint sliding-spotlight SAR becomes the sum of the additional bandwidth introduced by the Doppler center variation and the azimuth beam bandwidth. When the duration of an arbitrary data block in the BV-PRF scheme is too long, the remaining Doppler bandwidth can still be greater than the sampling frequency of each block after de-skewing. Therefore, a range-frequency-dependent de-ramping operation must be performed. After the range-frequency-dependent de-ramping processing, the total Doppler bandwidth of the spaceborne squint sliding-spotlight raw data with BV-PRF is limited within the sampling frequency PRF, as shown in Figure 14. Afterwards, the resampling operation, which transforms the azimuth non-uniform data blocks corresponding to different PRFs into uniformly sampled data, must be performed in order to smoothly combine the blocks in azimuth.
Because the range-frequency-dependent de-skewing and de-ramping processing of the signal introduces additional phase terms, it is necessary to eliminate the redundant phase terms in the subsequent processing to restore the original Doppler history of the signal. A re-ramping operation is therefore performed by multiplying a re-ramping function with the signal after the azimuth resampling of each sub-block, where η_u,m = η_uni,m + t_uni,m, η_uni,m = (−n_a,m/2 : n_a,m/2 − 1)/PRF_uni, n_a,m is the number of signal samples of the m-th block after the azimuth resampling, and PRF_uni is the uniform sampling frequency after the azimuth resampling.
Since the azimuth resampling introduces an azimuth time shift in the azimuth time domain, a phase shift in the azimuth frequency domain is necessary to continuously combine the blocks in azimuth. After the azimuth combination processing, full-aperture data with uniform azimuth sampling are obtained. Afterwards, the Doppler histories are recovered by multiplying a re-skewing function, where η = (−N_a/2 : N_a/2 − 1)/PRF_uni and N_a is the total number of signal samples after the azimuth combination.
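For a squinted geometry, the de-skewing transfer function is commonly a range-frequency-dependent linear phase of the form below (sign conventions differ between authors), which removes the coupling between the range frequency and the Doppler centroid; this is a plausible standard form, not necessarily the paper's exact expression, and the re-skewing function would then be its complex conjugate evaluated on the combined azimuth time axis η.

```latex
H_{\mathrm{deskew},m}(f_\tau,\eta_{b,m})
  = \exp\!\left( j\,\frac{4\pi\, f_\tau\, v_s \sin\theta_{sq,m}}{c}\,\eta_{b,m} \right),
  \qquad f_\tau \in \left[-\tfrac{B_r}{2},\,\tfrac{B_r}{2}\right]
```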
Different from the conventional sliding-spotlight mode, the total Doppler bandwidth of the complete raw data in the spaceborne squint sliding-spotlight mode after the re-skewing operation is still back-folded. Therefore, an azimuth data mosaic operation is introduced to resolve the residual Doppler spectrum back-folding.
At first, multiple replications of the azimuth data are arranged together in the Doppler domain to resolve the aliased Doppler spectrum; the number of replications is determined by the azimuth beam bandwidth B_f and the squint additional bandwidth B_sq after azimuth combination. After the azimuth mosaic operation in the 2-D frequency domain [34], a range-frequency-variant Doppler filter is applied to remove the redundant spectrum and to obtain the 2-D spectrum of the desired raw data. After Doppler filtering, the Doppler spectrum of the original echo data without aliasing is obtained, but the number of refreshed azimuth samples is obviously increased. In order to improve the efficiency of the proposed algorithm, the redundant spectrum at both ends of the azimuth frequency domain needs to be deleted, and the azimuth sampling frequency is updated accordingly. Finally, the raw 2-D spectrum with a sufficient sampling frequency is obtained.
Simulation Experiments
In this section, a designed simulation experiment on three point targets is carried out to validate the proposed pre-processing method, and the simulation parameters are shown in Table 1. The designed scene is shown in Figure 15, and the squint observation angle at the azimuth middle time is 25°.
The real parts of the echo data of the three point targets with different PRFs are shown in Figure 16a. Their corresponding 2-D spectra are shown in Figure 16b, and the 2-D spectrum of the raw data in each block is aliased in the Doppler domain. Afterwards, the non-aliased 2-D spectrum of the raw data in each block is obtained after the de-skewing, range-frequency-dependent de-ramping and azimuth resampling, as shown in Figure 16c. In Figure 16a-c, the first block contains the echo data and spectrum of P_1 and P_2; the middle block contains the echo data and spectrum of P_1, P_2 and P_3; and the third block contains the echo and spectrum of P_2 and P_3. Consequently, the reconstructed signal of the whole scene in the 2-D time domain has a uniform sampling frequency after azimuth combination, as shown in Figure 16d, and its corresponding spectrum is shown in Figure 16e. Finally, the original 2-D spectrum with a sufficient sampling frequency is well recovered by azimuth re-skewing and range-frequency-dependent filtering, as shown in Figure 16f.
The imaging result of the proposed method is shown in Figure 17, and the interpolated contour plots of the three points are shown in Figure 17b-d, respectively. It can be seen that each target is well focused with the proposed approach; the corresponding performance indicators for measuring imaging quality, including resolution (res.), peak side-lobe ratio (PSLR) and integrated side-lobe ratio (ISLR), are computed and listed in Table 2.
PSLR represents the ratio of the main-lobe peak intensity to the maximum side-lobe peak intensity, and ISLR represents the ratio of side-lobe energy to main-lobe energy. All simulation results in Figures 16 and 17 and Table 2 validate the capacity of the proposed azimuth pre-processing method to handle the raw data of squint sliding-spotlight SAR with BV-PRF and to uniformly resample the azimuth data. The simulation results of the fixed PRF and CV-PRF schemes are shown in Figure 18. As shown in Figure 18a, in the fixed PRF scheme, the echo data of the P_1 and P_3 targets located at the edge of the scene cannot be completely obtained. Therefore, the resolution of the P_1 and P_2 targets in the imaging results decreases, as shown in Figure 18c. The raw data of the whole scene can be successfully obtained by the CV-PRF scheme, as shown in Figure 18b, and the three targets are also well focused in Figure 18d. However, the computational complexity of the CV-PRF scheme is approximately dozens of times greater than that of the proposed method. Therefore, the proposed approach is more effective.
Discussion
For a large range swath and azimuth scanning angle, targets located at the edge of the swath cannot be fully obtained with a fixed PRF because of the large RCM. Therefore, the fixed PRF scheme is more appropriate for a small scene and scanning angle.
The CV-PRF scheme can disperse the positions of the blind areas caused by the transmitted pulses along the azimuth, so the skewed SAR raw data are completely rectified. Therefore, the CV-PRF scheme is suitable for a large imaging swath and azimuth scanning angle. However, the duration of the instantaneous echo-receiving window is long in many imaging modes. The BV-PRF scheme can not only handle large RCM but also greatly reduce the non-uniformity of azimuth sampling. Therefore, for a large imaging scene and azimuth scanning angle, the echo data of the whole imaging scene can be successfully obtained with the BV-PRF scheme, and the subsequent azimuth data reconstruction also becomes efficient.
Conclusions
An azimuth full-aperture processing method for processing squint SAR raw data formed by the BV-PRF scheme is proposed, which gives the whole raw data set a sufficient and uniform azimuth sampling frequency. For a large imaging scene and azimuth scanning angle, the raw data of the whole swath with the BV-PRF scheme can be completely obtained using a limited number of PRFs. Therefore, the BV-PRF scheme can preferably be used in the spaceborne squint sliding-spotlight mode. However, when the numbers of samples in the blocks of the designed BV-PRF scheme are very unequal, with some too small and some too large, there can be redundant operations in the proposed azimuth full-aperture processing method. In future research, an equal and sufficiently small sample number per block of the BV-PRF scheme should be designed, so that the de-ramping operation can be omitted in the proposed approach, further reducing the computational load of the system. Furthermore, azimuth sub-aperture processing is also a strategy for processing the echo data generated by the BV-PRF scheme.
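PSLR and ISLR as defined above can be computed directly from a 1-D cut of the focused impulse response. The helper below is a generic sketch rather than the evaluation code behind Table 2; it uses the conventional sign (peak side-lobe power relative to the main-lobe peak, giving a negative dB value for a well-focused target), with the main lobe bounded by the first nulls.

```python
import numpy as np

def pslr_islr(cut: np.ndarray) -> tuple[float, float]:
    """PSLR and ISLR (dB) from a 1-D magnitude cut of a focused point target."""
    p = np.abs(cut) ** 2
    k0 = int(np.argmax(p))
    # main-lobe bounds: first local minima (nulls) on each side of the peak
    left = k0
    while left > 0 and p[left - 1] < p[left]:
        left -= 1
    right = k0
    while right < len(p) - 1 and p[right + 1] < p[right]:
        right += 1
    main = p[left:right + 1]
    side = np.concatenate([p[:left], p[right + 1:]])
    pslr = 10 * np.log10(side.max() / p[k0])        # peak side lobe vs main-lobe peak
    islr = 10 * np.log10(side.sum() / main.sum())   # side-lobe vs main-lobe energy
    return pslr, islr

# Example with an ideal sinc response (expect PSLR close to -13.3 dB)
x = np.linspace(-8, 8, 4001)
print(pslr_islr(np.sinc(x)))
```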
The initial pulse repetition frequency PRF_ini of the system should be determined first; it is set by the Doppler bandwidth B_a and depends on the azimuth oversampling factor, the platform speed v_s, and the azimuth antenna length L_a. Considering the case in which the echo data are received after n_s pulses have been transmitted, the number of spanned pulses n_s can then be expressed in terms of the pulse timing and a guard interval.

Figure 4. Flowchart of the design of the BV-PRF sequence. (Steps shown in the flowchart: imaging geometry relationship; setting of the initial pulse repetition frequency PRF_ini; calculation of the near range and the far range at any time; calculation of the range of PRI values according to the near range and n_s; determination of the number of spanned pulses n_s from transmitting to receiving; design of the BV-PRF sequence.)

The range of the pulse repetition frequency is [PRF_min, PRF_max]. Assuming that the initial scanning angle is 28.2° and the terminal scanning angle is 21.8°, the position of the echo data in the receiving window gradually moves forward during azimuth beam scanning. The pulse interval PRI remains unchanged if the pulse signal transmitted by the radar satisfies the corresponding condition.

Figure 6. Echo simulation with different pulse transmission sequences. (a) Scene distribution of targets; (b) the fixed PRF; (c) the CV-PRF; (d) the BV-PRF.

Here θ_sq,m is the central squint angle of the m-th data block, P is a point target located at position (X, R_0) in the imaged swath, R_0 and R_rot are the slant ranges from the sensor path to the imaged target and to the virtual rotation center, respectively, and T_m is the whole acquisition interval of the data block corresponding to PRF_m. The azimuth beam scanning at a constant rotation rate ω_r leads to a steering factor A.

Figure 7. The imaging geometry of spaceborne squint sliding-spotlight SAR with BV-PRF.

Figure 8. Variation curves of the instantaneous Doppler centroid varying rate k_rot under side-looking and squint geometries. (a) k_rot within ±3.2° under side-looking; (b) k_rot within ±3.2° under a squint angle of 25°.

Under side-looking, the change in k_rot can be ignored; when the squint angle is 25°, the change in the instantaneous Doppler centroid varying rate k_rot reaches 160 Hz/s. Therefore, as the squint angle changes, the change in k_rot needs to be considered; it varies nonlinearly within an appropriate azimuth beam scanning range.

Figure 9. Azimuth time-frequency diagram of squint sliding-spotlight SAR data with BV-PRF.

Figure 10. Ratios of the squint additional bandwidth to the azimuth beam bandwidth in adjacent data blocks. (a) The ratio Φ in the prior data block; (b) the ratio Φ in the latter data block.

Figure 11. The block diagram of 1-D azimuth pre-processing of the proposed method.

Figure 12. One-dimensional azimuth compression results of the proposed method. (a) Azimuth spectrum without azimuth resampling; (b) the azimuth compression result of (a); (c) Doppler spectrum after azimuth resampling; (d) the azimuth compression result of (c). K is the frequency modulation rate of the transmitted pulse.

Figure 13. The block diagram of 2-D azimuth pre-processing of the proposed method.

Figure 15. The designed imaging scene with three point targets.

Figure 16. Simulation results of the proposed method. (a) The real part of echo data in three blocks; (b) 2-D spectra of (a); (c) 2-D spectra in three blocks before azimuth combination; (d) echo data of the whole imaged scene after azimuth combination; (e) 2-D spectrum of (d) before re-skewing; (f) the recovered 2-D spectrum.

Figure 17. Imaging results on three point targets handled by the proposed method. (a) Imaging results with three points; (b) contour plot of target P1; (c) contour plot of target P2; (d) contour plot of target P3.

Figure 18. Simulation results of the fixed PRF and CV-PRF schemes. (a) The real part of echo data with the fixed PRF scheme; (b) the real part of echo data with the CV-PRF scheme; (c) imaging results with the fixed PRF scheme; (d) imaging results with the CV-PRF scheme.

Table 1. Simulation parameters.

Table 2. Performance indicators of three point targets of the proposed method.
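For reference, the passage above relates PRF_ini to the Doppler bandwidth B_a, the azimuth oversampling factor, the platform speed v_s, and the antenna length L_a. A commonly used relation between these quantities in the SAR literature is given below; it is only an assumption that this matches the expression used by the authors, and the notation α_os for the oversampling factor and θ for the beam-pointing angle is introduced here purely for illustration:

\[ \mathrm{PRF}_{\mathrm{ini}} = \alpha_{\mathrm{os}}\, B_a, \qquad B_a \approx \frac{0.886 \cdot 2\, v_s \cos\theta}{L_a} . \]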
2022-12-07T19:13:30.383Z
2022-11-30T00:00:00.000
{ "year": 2022, "sha1": "9cab0c860e11ee3a5140728f2ad1033b063c36ae", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/22/23/9328/pdf?version=1669952259", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "77829000fb50498a5876b1c05efeedfda6d95e02", "s2fieldsofstudy": [ "Mathematics", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
33065802
pes2o/s2orc
v3-fos-license
Do the benefits outweigh the side effects of colorectal cancer surveillance? A systematic review

Abstract
Most patients treated with curative intent for colorectal cancer (CRC) are included in a follow-up program involving periodic evaluations. The survival benefits of a follow-up program are well delineated, and previous meta-analyses have suggested an overall survival improvement of 5%-10% by intensive follow-up. However, in a recent randomized trial, there was no survival benefit when a minimal vs an intensive follow-up program was compared. Less is known about the potential side effects of follow-up. Well-known side effects of preventive programs are those of somatic complications caused by testing, negative psychological consequences of follow-up itself, and the downstream impact of false positive or false negative tests. Accordingly, the potential survival benefits of CRC follow-up must be weighed against these potential negatives. The present review compares the benefits and side effects of CRC follow-up, and we propose future areas for research.

INTRODUCTION
Colorectal cancer (CRC) is the third most common cancer in the western world, and surgery is the only curative treatment. Approximately one-third of those surgically resected will experience recurrent disease with an expected survival of less than two years [1]. Patients treated with curative intent are usually included in some form of preventive follow-up program involving periodic evaluations.
In making this personal decision, it is important to know not only the magnitude of potential benefits, but also the magnitude and likelihood of the potential adverse and unintended effects [5].

Firstly, the survival benefits of intensive CRC follow-up must be delineated. In general, the benefits of preventive programs can be described as: (1) relative reduction of the mortality rate; (2) absolute reduction of mortality; (3) the number of patients needed to prevent one adverse event; (4) evaluation of treatment effect; (5) reassurance by follow-up leading to improved quality of life (QoL); and (6) detection of other diseases [4]. In this paper we will further elaborate on these terms. Secondly, the side effects of CRC follow-up must be compared to the survival benefits. Well-known side effects of preventive programs are (1) over-diagnosis; (2) somatic complications caused by testing; (3) negative psychological consequences of follow-up; and (4) the impact of false positive (leading the patient to believe that he or she has recurrent disease) or false negative (leading to a potential diagnostic delay) tests. Thirdly, the net benefits of follow-up must be considered in light of the associated economic costs. The United Kingdom's National Institute for Health and Clinical Excellence (http://www.nice.org.uk) has proposed a societal willingness-to-pay of £40000 per life year gained, but this upper limit is controversial. In the case of CRC follow-up, it means that the long-term benefits of a follow-up program (i.e., the attempted curative resection of recurrent disease and resulting gains in survival) have to be balanced against society's willingness to pay for such a service.

To our knowledge, a systematic comparison of the benefits vs side effects of CRC follow-up has not been performed. Thus, the objective of this paper is to summarize the existing evidence regarding the benefits and side effects of CRC follow-up. An overview of the potential benefits and harms of CRC follow-up is provided in Table 1.

RESEARCH
We performed a systematic PubMed search with the medical subject heading (MeSH) keyword "colorectal" in combination with the keywords "follow-up", "surveillance", "cancer recurrence", "risk benefit assessment" and "false positive reactions". Inclusion of papers was decided by discussion among the authors. All reference lists of included publications were searched for relevant publications. Finally, we identified relevant publications from the authors' personal databases. This resulted in 60 publications included in the review.

Benefits of colorectal follow-up
Benefit: Improved survival: The recurrence rate in CRC has been reported to be 30%-40% within 5 years (Figure 1) [1]. This means that all follow-up programs must focus on the early detection of recurrent cancers, aiming to offer curative metastasis surgery to as many patients as possible.

Figure 1. Overall survival of colon cancer, Dukes A-D. Eighty percent of the recurrences occur within the first 3 years after initial treatment, which is used as an argument to perform intensive surveillance in the first 3 years. After 5 years, the survival curve is steady, with few deaths caused by colon cancer. Courtesy of the Norwegian Cancer Registry (http://www.kreftregisteret.no/en/).

Two contemporary meta-analyses revealed that intensive and less intensive follow-up led to detection of a similar number of recurrences, but that detection occurred between 5.91 mo (95%CI: 3.09-8.74) and 6.75 mo (95%CI: 2.44-11.06) earlier with intensive follow-up.
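To make item (3) in the list of benefits above concrete, consider a purely illustrative calculation; the numbers are taken from the 5%-10% range quoted in this review, not from any single trial. With an absolute risk reduction (ARR) in mortality of 5-10 percentage points, the number of patients who must be followed to prevent one death is

\[ \mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.05\ \text{to}\ 0.10} \approx 10\ \text{to}\ 20 . \]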
Both analyses also found that curative reoperation for metastasis was significantly more likely in those subjects who were followed up intensively (Tjandra et al [2]: OR = 2.41, 95%CI: 1.63-3.54; Jeffery et al [3]: OR = 2.81, 95%CI: 1.65-4.79). The survival benefit of intensive CRC follow-up has been reported to be a 5%-10% reduction in the total cohort mortality rate. The increased overall survival, earlier detection of recurrence, and higher reoperation rates observed provide only circumstantial evidence that intensive follow-up extends life by making cure of recurrent disease more likely. Neither meta-analysis found that cancer-specific survival was improved by intensive follow-up. However, there exist limited data regarding the relative reduction in mortality or the number of patients who must be followed intensively in order to save one life from recurrent cancer death. Factors other than intensive follow-up have been postulated to contribute to the mortality reduction associated with CRC follow-up. Some combination of increased psychological well-being, improved health behavior, and improved treatment of coincidental disease may contribute to the mortality benefit. This issue represents an important direction for future studies [7].

Recently, the results from the follow-up after colorectal surgery (FACS) trial were reported [8,9]. The factorial randomized trial design, with independent allocation to the carcinoembryonic antigen (CEA) and computed tomography (CT) interventions, meant that patients received 1 of 4 types of follow-up: (1) CEA follow-up: measurement of blood CEA every 3 mo for 2 years, then every 6 mo for 3 years, with a single chest, abdomen, and pelvis CT scan at 12 to 18 mo if requested at study entry by the hospital clinician; (2) CT follow-up: CT of the chest, abdomen, and pelvis every 6 mo for 2 years, then annually for 3 years; (3) CEA and CT follow-up: both blood CEA measurement and CT imaging as above; and (4) Minimum follow-up: no scheduled follow-up except a single CT scan of the chest, abdomen, and pelvis at 12 to 18 mo if requested at study entry by the hospital clinician. Interestingly, there were no differences seen in overall or cancer-specific mortality between any of the intensive arms and the minimum follow-up group. Most patients with recurrence suffered from incurable disease. In fact, only 71 (5.9%) of 1202 patients followed were suitable for potentially curative treatment. Significantly more patients were treated with curative intent in the intensive follow-up groups compared to minimalist follow-up, but there was no difference in the number of total deaths between the two groups. These data argue against very intensive follow-up schedules.

In conclusion, although two meta-analyses have reported a 5%-10% reduction in overall mortality among patients undergoing intensive follow-up, the existing evidence of any benefit in terms of cancer-specific survival is limited. The results from the FACS trial did not show any compelling evidence of a significant survival benefit of CRC follow-up. Hopefully, the final results of the ongoing COLOFOL trial will help settle the debate regarding which follow-up program enables the highest survival [10]. A summary of randomized controlled trials and their potential survival benefit is provided in Table 2.

Benefit: Control of treatment effects: There exist several international controversies around the treatment (drains vs no drains, laparoscopic technique vs open technique, among others) and follow-up of patients with CRC [11,12].
There are, for instance, no similarly designed follow-up programs at an international level [13-16]. It is therefore imperative for improved CRC treatment quality that the effects of radio-chemotherapy, surgical technique and postoperative follow-up are continuously evaluated, and a structured follow-up program might be a way to perform such quality control [17,18].

Benefit: Reassurance of follow-up: There is no existing evidence that participation in a follow-up program leads to increased personal well-being. Some researchers have investigated the psychological effects of CRC follow-up [19-22]. None of the resulting studies have found improvement in patient QoL with follow-up.

Harms of CRC follow-up
Harm: False positive tests: Table 3 summarizes the false positive rates of the most commonly used CRC follow-up tests. As an illustration, consider a patient followed according to the most recent United States follow-up recommendations from the National Comprehensive Cancer Network [16]. Based on the most optimistic estimates in Table 3, the annual probability of at least one false positive test for a patient with no actual recurrence would be 41% in each of years one and two, and 28% in each of years three, four, and five. Over the entire five-year period, the probability of at least one false positive would be 87%. Given their high likelihood, it is important to consider the possible consequences of false positive follow-up tests. Primarily, these can come in the form of economic costs and psychological impact. None of the prospective studies or economic models focusing on CRC recurrence have reported the economic costs of false positive follow-up tests, but quantifying these costs could provide important perspective. While no studies appear to have specifically addressed the psychological or quality-of-life impact of false positive follow-up tests in colorectal or other types of cancer, a small number of investigators have examined the quality-of-life impact of false positive cancer screening tests. In general, these studies have shown increased anxiety following false positive screening results for as long as 18 [23] to 24 [24] mo after the false positive result [23,25,26]. These data come from populations who have not previously been diagnosed with and treated for cancer, so the results are difficult to extrapolate to CRC survivors.

Table 3 notes: 1 Based on specificity estimates from individual studies of 89% [55] (n = 24), 95% [58] (n = 115), 72% [56] (n = 87), and 91% [57] (n = 100); 2 Based on specificity estimates from individual studies of 96% [60] (n = 68), 96% [57] (n = 99), and 67% [59] (n = 56) subjects; the last was the only one to employ intraoperative confirmation of hepatic metastases. The annual probability of at least one false positive test for a patient with no actual recurrence would be 41% in each of years one and two, and 28% in each of years three, four, and five. Over the entire five-year period, the probability of at least one false positive would be 87%. CT: Computed tomography; CEA: Carcinoembryonic antigen.

Harm: Somatic complications caused by tests: Aside from any unlikely negative sequelae of CT radiation exposure, colonoscopy-related colonic perforation and post-procedure bleeding represent the most likely serious complications arising from CRC follow-up. Endoscopic follow-up is endorsed in most comprehensive follow-up recommendations [16,27-31], primarily as a means to detect metachronous CRCs (normally representing between 1.6% and 7.4% of CRC recurrences) or adenomas with advanced features [2,32,33]. The relatively invasive procedure has a sensitivity of 95% and a specificity of 100% for detecting high-risk polyps or tumours; however, the major complication rate has been reported as 0.2%-1.2% [34-36]. To date, no trial has reported increased survival associated with colonoscopy follow-up after CRC resection. Because of the unproven benefit and non-trivial risk, some have argued against routine endoscopic follow-up after curative CRC resection [37-39]. Further study is needed to explore whether CT colonography may eventually provide a better balance of risks and benefits [38].

Harm: QoL implications: There is limited evidence showing that enrolment in a follow-up program improves QoL among CRC survivors. In fact, available data from breast cancer follow-up trials could be used to support the opposite viewpoint: such follow-up programs and tests might negatively influence QoL [40-42]. It is often claimed, and some evidence corroborates [22], that follow-up tests can be reassuring for patients, and this may be true if all of the tests are completely normal every time. However, equivocal test results, such as a slightly elevated CEA level or questionable shadows on CT, are quite common, and they commonly spur additional testing. The period between an initial suggestive test result and the subsequent conclusive work-up can be a stressful one for patients [21]. Some researchers have investigated the psychological effects of CRC follow-up [19-22]. None of the resulting studies have found improvement in patient QoL with follow-up. In a recently published randomized trial comparing general practitioner- vs surgeon-organized follow-up, there were no differences between the two groups in QoL measured by the EORTC QLQ-C30 and EQ-5D [21]. In fact, both groups had QoL levels similar to those of the general United Kingdom population at baseline (1 mo postoperatively). Results from a similar 2006 trial by Wattchow et al [19] told a similar story. There, study patients remained in the normal range for depression and anxiety, with no difference between the two groups at either 12 or 24 mo [19,20]. In recent meta-analyses, it has been shown that anxiety rather than depression was a major problem among long-term cancer survivors. It is, however, unknown what impact an organized cancer follow-up program has on anxiety [43]. It has been shown that 46 percent of patients reported psychological distress while awaiting the results of a potential cancer diagnosis [44]. This and other trials suggest that tests recommended by a cancer screening or preventive program cause harm in terms of psychological distress [44-46]. The only survey showing a slight improvement in QoL among CRC survivors with intensive follow-up was published in 1997 [47]. This survey included 350 Danish participants, who reported a small but significant increase in QoL associated with more frequent follow-up, as measured by the Nottingham Health Profile. In conclusion, there exists very limited evidence that CRC follow-up improves QoL among CRC survivors. Further research is needed, in particular, to address the impact of a false positive follow-up test on QoL among CRC survivors. From breast cancer follow-up trials, there is compelling evidence that postoperative follow-up does not improve QoL and that follow-up testing might cause psychological distress [48].
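The cumulative false positive probability quoted above (and repeated in the Table 3 notes) can be reproduced with a short calculation. The sketch below assumes that the yearly outcomes are independent, which is the simplest reading of the figures given; it is an illustration, not code from any of the reviewed studies.

```python
# Cumulative probability of at least one false positive over five years of
# surveillance, using the per-year figures quoted in the review
# (41% in years 1-2, 28% in years 3-5), assuming independent years.
p_year = [0.41, 0.41, 0.28, 0.28, 0.28]

p_no_fp = 1.0
for p in p_year:
    p_no_fp *= (1.0 - p)          # probability of no false positive so far

p_any_fp = 1.0 - p_no_fp
print(f"P(at least one false positive over 5 years) = {p_any_fp:.2f}")   # ~0.87
```

With yearly false positive probabilities of 41%, 41%, 28%, 28% and 28%, the chance of escaping all five years without a false positive is 0.59 x 0.59 x 0.72 x 0.72 x 0.72, approximately 0.13, i.e., an 87% chance of at least one false positive.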
Factors that may impact QoL in a positive or negative way among colon cancer survivors enrolled in a follow-up program are shown in Figure 2.

DIRECTION OF FUTURE RESEARCH
According to the World Health Organisation, the success of preventive programs depends on three fundamental principles (www.who.int/cancer/detection/variouscancer/en/): the target disease should be a common form of cancer, with high associated morbidity or mortality; effective treatment, capable of reducing morbidity and mortality, should be available; and test procedures should be acceptable, safe, and relatively inexpensive. In CRC follow-up these principles are fulfilled: (1) CRC is the third most common cancer disease, and the risk of recurrence is as high as 30 to 40 percent; (2) if successful, metastasectomy can be curative (i.e., R0 resections); and (3) the tests in most programs are acceptable, relatively safe and relatively inexpensive. However, as discussed, there are several potential side effects of CRC follow-up; future research must be directed at further exploring these harms and weighing them against the expected survival benefit. Recently, a survey published in the British Medical Journal found that the harms of screening and preventive programs were poorly reported [49]. Healthcare decision makers, surgeons, and patients therefore cannot make informed choices.

Personalized medicine is defined as a medical model that proposes the customization of healthcare, with medical decisions, practices and tests being tailored to the individual patient. To our knowledge there exists no individual risk stratification in the different national colorectal follow-up guidelines, and this is an area for future research. Firstly, we believe that genetic testing and biological determinants of tumor recurrence will gain increasing importance [50,51]. The individualization of cancer care requires a deep understanding of tumor biology and the identification of tumor subsets that offer targets for tumor-specific treatment. Of specific interest for CRC follow-up programs are the promising results of the 12-gene recurrence score (RS), which is a quantitative assay integrating stromal response and cell cycle gene expression. It has been shown that the 12-gene RS predicts recurrence in stage Ⅱ colon cancer. This tool appears promising as a means to inform decision making around adjuvant chemotherapy following resection of stage Ⅱ colon cancer. The use of the tool in planning post-treatment follow-up does not appear to have been explored, however [52]. Secondly, test intensity, test modality and the risk of false positive events have to be discussed in detail with the patient. As shown in Table 3, the probability of at least one false positive event during a five-year follow-up program might be as high as 87%. High-intensity testing programs should be offered to patients with a high probability of recurrent cancer, but this must be weighed against the patient's preference regarding the risk of experiencing a false positive test. Finally, research must aim to identify the optimal combination of tests, blood samples and clinical examinations that creates the highest possible overall follow-up sensitivity and specificity.

CONCLUSION
Any survival benefit (or lack of benefit) of CRC follow-up must be considered along with the views of the patients, to ensure that follow-up programs are accessible and acceptable and that they address all patient needs and concerns.
However, the problem of postoperative cancer follow-up is that a vast majority of patients must undergo a large number of tests without any benefit, or even with some harm, to identify a small number of patients with curable recurrence. Patients with asymptomatic but incurable disease (10%-20% of all recurrences) likely represent the group with the most potential to be harmed by follow-up [21,53] . In conclusion, little is known about the potential harms of CRC follow-up, especially when it comes to the impact of false positive tests. Tailored follow-up programs based on the individual's risk of cancer recurrence and likely metastatic spread pattern must be developed. Further research is needed to settle these controversies, and new methods of decision-analytic modeling in combination with the emerging data from COLOFOL must be applied [9,10] .
2018-04-03T02:21:00.784Z
2014-05-15T00:00:00.000
{ "year": 2014, "sha1": "9a29d11874dcaa518dc3d27fcf2912005cf65d5c", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4251/wjgo.v6.i5.104", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "ae9ad983d1866012b8ba29ea6654b60e88a83d46", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244263860
pes2o/s2orc
v3-fos-license
The prevalence of vascular injury utilising the lateral parapatellar approach for malignant distal femoral tumour resections: a case series Distal femoral tumour resections are mostly performed through a medial or anteromedial approach. The lateral parapatellar approach is an alternative method. This case series assessed vascular complications during the resection of malignant distal femoral tumours via the lateral parapatellar approach. A retrospective case series at a private practice in Pretoria was performed. All patients who underwent malignant distal femoral tumour resections through a lateral parapatellar approach between 2001 and 2019 were included in the study. All cases were performed by a single surgeon. An analysis of the patients’ files was performed, to determine if there were any intraoperative or immediately postoperative vascular complications. Thirty-six patients were identified who underwent resection of their malignant distal femoral tumours via the lateral parapatellar approach. Osteosarcoma was the most prevalent bone tumour (81%). All resection margins were clear on histology reports. The vascular complication rate was 3% (95% CI 0–8%). Twelve patients demised over the 18-year period (33%). The findings suggest that a low risk of vascular complications can be expected when resecting malignant distal femoral tumours through a lateral parapatellar approach. This rate of vascular injury is comparable to other studies that also performed distal femoral tumour resections through other approaches. Introduction Malignant bone tumours of the distal femur often abut and at times even encase surrounding neurovascular structures. Due to the hypervascularity associated with these malignant bone tumours, vascular complications of dissection may include vessel laceration, venous or arterial intimal damage, arterial thrombosis with resultant limb ischaemia or venous thrombosis with possible thromboembolic events. 1 Surgery of malignant bone tumours has developed significantly over time but requires a high level of skill. It is a great responsibility for the tumour surgeon to provide a functional solution to the patient in the presence of such a devastating diagnosis. Amputation of a limb would have been accepted as an appropriate outcome in the past; however, the aim is now to salvage the limb and ultimately improve the patient's functionality, satisfaction and quality of life. 2 Myers et al. have shown that limb salvage is more cost-effective when compared to amputation in the long run. 3 Different surgical approaches to access the distal femur have been described. The most often used surgical approach and gold standard of accessing the distal femur for tumour resection is the anteromedial approach. This approach exposes the anterior aspect of the femur and gives access to the popliteal fossa. It identifies the neurovascular bundle in Hunter's canal and allows it to be mobilised and protected throughout the procedure. With this approach, additional soft tissue cover is seldom needed during distal femur resections; soft tissue cover is, however, frequently indicated during proximal tibia resections. 4 The anteromedial approach is advised for experienced surgeons in the field of tumour and sepsis surgery. The lateral approach to the femur is the most used approach relating to benign bone lesions in the distal femur. It is considered to be surgically less demanding as there is no need for neurovascular dissection. 
The option of proximally extending the incision is readily available and holds benefits for future surgery. 4 In the series described in this article, the lateral parapatellar approach was routinely used for the biopsy and resection of distal femoral tumours. There is theoretically an increased risk for vascular injury with these procedures, due to the proximity of the vascular structures and due to the neo-vascularisation associated with malignancy in bone. There is limited data in the literature regarding the incidence of vascular injuries when the lateral parapatellar approach to the distal femur for tumour resections is used. [5][6][7] The available literature mostly reviews patient outcomes and endoprosthetic survival. The rarity of these procedures is confirmed by the long-term follow-up and extended time frames in which results were recorded. 2,3,8,9 This study aimed to determine the prevalence of intraoperative and immediate postoperative vascular complications when resecting distal femoral tumours through a lateral parapatellar approach. It is based on the third author's experience (TLBR) with 36 consecutive patients who had distal femoral tumour resections through a lateral parapatellar approach. Materials and methods A retrospective case series at a private orthopaedic practice in Pretoria was conducted. Ethical approval was obtained prior to the commencement of data collection from patient records. All cases performed between January 2001 and July 2019 were scrutinised. Patients included in the study were those that had a malignant primary or metastatic distal femoral tumour lesion that was resected (Figure 1), and a prosthesis inserted through the lateral parapatellar approach (Figure 2). An experienced tumour surgeon performed all cases. Complete records of cases were mandatory with a minimum follow-up of six months. Patients were excluded from the study if a distal femoral tumour resection was done via any approach other than the lateral parapatellar approach and if the extent of the tumour resulted in a non-salvageable resection. Benign distal femoral tumours were excluded from the study. All tumour resections in this series were performed via a longitudinal lateral parapatellar approach to the femur. All resected tumours were sent for histological analysis. The patients were reviewed daily by the same surgeon until discharge from hospital. They were then followed-up at two weeks and six weeks postoperatively. Treatment in the oncology unit continued as per protocol. Surgical procedure The surgical procedure is routinely performed in the supine position. The foot is placed on a bolster, and the thigh is supported against a limb positioner. The patient's MRI is used to determine the level of the resection on the femur. The affected limb is draped with the Charnley double-drape technique and prepared from the iliac crest to the foot. No tourniquet is used, and preoperative antibiotics are administered. The surgical incision is marked, including an ellipse around the previous biopsy area on the lateral aspect of the knee ( Figure 2). The knee is flexed, and this position allows for the soft tissue and vascular structures in the popliteal fossa to 'fall with gravity' away from the surgical area. A longitudinal incision is made in the midline from the tibial tubercle and extended as proximal as needed, utilising a lateral parapatellar approach and dislocating the patella medially. 
The femur is measured and marked at the level of the resection with a constant reference point on the tibia. This pre-resection measuring must be accurate as the aim is to restore the leg length when inserting the prosthesis. The incision can be extended proximally to the tip of the greater trochanter or the anterior superior iliac spine in cases where a total femur endoprosthesis is inserted (Figure 3). An elliptical resection of the lateral biopsy site is performed with extension to the underlying tumour, including a 2 cm margin in all planes. This crucial part of the procedure emphasises the importance of placing the biopsy tract accurately in order to prevent tumour spillage. The skin, subcutaneous tissue and fascia lata are incised in line with its fibres. The perforators are ligated and managed. The vastus lateralis can be transposed anteriorly over the femur after all perforators are tied off. A margin of muscle can be left on the tumour for histological purposes, and the entire length of the femur can be exposed. The cruciate ligaments, collateral ligaments and posterior capsule are cut and the specimen is dissected from distal to proximal. Particular attention is paid to the artery when the tumour is excised as it can be pulled towards the tumour. The vascular structures in Hunter's canal are not routinely identified on the medial aspect. The femur is resected at the predetermined level, and a sample of the proximal medullary canal is taken with clean instruments. A pathologist performs a frozen section on the specimen of the medullary canal to confirm tumour-free margins. The dorsalis pedis pulse is routinely checked at this stage of the surgery. After a glove change, the tibia is prepared using routine steps with clean instruments, while awaiting the pathology report. The procedure continues in a stepwise fashion after that. The wound is closed routinely in layers over a drain. All patients are observed in a high care facility postoperatively to optimise pain management and close monitoring of the limb. The same histopathologist who performed the frozen section is responsible for the formal pathology report of the excised tumour. Statistical analysis Continuous variables were described using mean and median with a range. Categorical variables were described using frequency and proportions. The rate of complications was expressed as a proportion of all cases with a 95% confidence interval (CI). All analyses were conducted in Excel 2013. Results Thirty-six patients over an 18-year period were included in the study. The study population consisted of 23 male and 13 female patients. The median age was 23.1 years (7-69 years) with interquartile range (IQR) of 14-38 years, mean=27.2 years, SD=17.7. The median time of follow-up was 19.4 months (6-94 months), IQR=10-41 months, mean=29.7 months, SD=25.7. Nine patients were still being actively followed up at the time of this report. Three patients had their follow-up elsewhere after they completed their six-months follow-up at the practice. Histologically, the majority of tumours were osteosarcoma (29/36; 81%) ( Table I). All resection margins on the final histological reports were clear. No patients required additional soft tissue cover. Over the 18 years, 12 patients died (33%). One vascular complication was recorded during the study period. An overall vascular complication rate of 3% (95% CI 0-8%) was thus reported. The vascular injury occurred in a 47-year-old male who presented with telangiectatic osteosarcoma. 
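As a side note on the confidence interval quoted above, the 3% (95% CI 0-8%) figure for one vascular complication in 36 resections can be reproduced with a normal-approximation (Wald) interval truncated at zero, as sketched below. Whether this is exactly the interval computed by the authors in Excel is an assumption; an exact (Clopper-Pearson) interval would be somewhat wider.

```python
import math

# Wald 95% CI for a proportion, truncated at 0, for 1 event in 36 cases.
events, n = 1, 36
p_hat = events / n
se = math.sqrt(p_hat * (1.0 - p_hat) / n)
lower = max(0.0, p_hat - 1.96 * se)
upper = p_hat + 1.96 * se
print(f"rate = {p_hat:.0%}, 95% CI {lower:.0%}-{upper:.0%}")   # 3%, 0%-8%
```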
The vascular injury was recognised intraoperatively; the artery was injured on the medial border of the distal femur due to tumour displacement. An immediate arterial repair was performed by a general surgeon on call for the hospital. Limb perfusion was, however, inadequate as measured by Doppler flow studies. The patient was transferred to a specialist vascular surgeon, and a femoral-popliteal bypass procedure was performed within 12 hours of the injury. The limb was salvaged. The patient died six years later due to metastatic disease. Discussion Malignant primary bone tumours occur most commonly in the distal femoral area and in the proximal tibia. 4 Bone sarcomas account for 0.2% of all malignancies with an adjusted incidence rate of 0.9 per 100 000 per year for all bone and malignant joint tumours. 10 Primary bone tumours are a scarce entity. Distal femoral tumours predominantly consist of osteosarcoma, chondrosarcoma and Ewing's sarcoma. 11,12 Neo-adjuvant chemotherapy, surgical resection with a wide margin and adjuvant chemotherapy is considered to be the mainstay of treatment of osteosarcoma and Ewing's sarcoma. 13 Reconstructive options have evolved significantly and differ for various age groups. These include osteoarticular allograft, allograft arthrodesis, prosthetic arthrodesis, rotationplasty and endoprosthetic replacement surgery. 1 Each tumour resection is individualised with regard to its location, histological type and extension into the soft tissues. 3,4,9 In this series, 81% of the tumours resected were osteosarcoma, of which conventional osteosarcoma was the most prevalent. A recent local paper also reported that osteosarcoma accounted for 72.6% of all primary malignant bone tumours, of which the distal femur was the most common site (44.7%) with a slight male predominance. The average age in this study population was 27 years (range 7-69) and the mean age of diagnosis in the osteosarcoma group was 20.1 years (range 7-47). This is in keeping with findings in the literature, which shows a higher prevalence of osteosarcoma in the second decade of life. 11 Results in this series did not reflect a bimodal distribution. Contrary to the literature, which shows a slight male predominance, this series had a slight female-to-male predominance ratio of 1.1:1. 4,11,14 Telangiectatic osteosarcoma represents 3-10% of all osteosarcoma. 15 The age distribution for this subtype of osteosarcoma tends to be younger than conventional osteosarcoma. 15,16 In this series, the patient who had the vascular injury had a telangiectatic osteosarcoma, which is rare at the age of 47 years. The series described in this article had a vascular complication rate of 3% (95% CI 0-8%). This complication rate is higher than the reported incidence rate found in a large study done by Natarajan et al. In their series of 246 patients, they had one vascular complication which led to an amputation. Their complication rate of 0.4% is lower than the current series and could be due to the large series they examined as well as the fact that they included benign and malignant bone tumours. 8 A comparison of vascular complications pertaining to the lateral parapatellar approach could not be made due to the paucity in literature regarding this approach. Accardo and colleagues investigated the outcomes of a quadriceps-sparing lateral approach to the distal femur for tumour resection and reported no vascular complications when the lateral approach was used. 
They stated that an added benefit to this approach was that the incision can be extended to the proximal femur to provide improved exposure if needed. 17 All tumours were successfully resected with the approach used in this study and the complete resections were confirmed by the final histology report. There was no need for additional soft tissue cover or local flaps in this series. When the anteromedial approach was used, Bickels et al. reported the need for 25 gastrocnemius flaps in a series of 110 patients, and Capanna et al. reported the need for rotational or free flaps in three of their 14 patients utilising either an anteromedial or anterolateral approach. 2,6 The main limitation of this study is its retrospective nature. All patients were operated by a single surgeon from a single institution and could therefore be subjected to bias. The small sample size of 36 patients being operated over an 18-year period is considered another limitation. There was no control group in this study. Other outcome measures such as wound complications, patient outcome and functionality scores were not assessed. Conclusion In this series, all distal femoral tumours were accessible and completely resected via the lateral parapatellar approach. The approach avoids dissection of the neurovascular bundle by staying lateral to the bundle, which reduces the risk of iatrogenic injury to vascular structures. This approach had a low vascular complication rate and proved to be safe and reliable. It should be in the orthopaedic surgeon's armamentarium when resecting malignant bone tumours of the distal femur. Ethics statement The authors declare that this submission is in accordance with the principles laid down by the Responsible Research Publication Position Statements as developed at the 2nd World Conference on Research Integrity in Singapore, 2010. Ethical approval was obtained from the Research Ethics Committee of the University of Pretoria prior to the commencement of data collection (546/2019). All procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2008.
2021-10-19T16:02:48.793Z
2021-08-31T00:00:00.000
{ "year": 2021, "sha1": "78ce503bf0dc41c14f5f88a587c344f300b0675c", "oa_license": "CCBY", "oa_url": "http://www.scielo.org.za/pdf/saoj/v20n3/04.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1469334339c084c6b55c097e6f95998491e3eafa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
218719433
pes2o/s2orc
v3-fos-license
High-velocity interstellar absorption associated with the supernova remnant W28

We present an analysis of moderately high resolution optical spectra obtained for the sight line to CD-23 13777, an O9 supergiant that probes high velocity interstellar gas associated with the supernova remnant W28. Absorption components at both high positive and high negative velocity are seen in the interstellar Na I D and Ca II H and K lines toward CD-23 13777. The high velocity components exhibit low Na I/Ca II ratios, suggesting efficient grain destruction by shock sputtering. High column densities of CH+, and high CH+/CH ratios, for the components seen at lower velocity may be indicative of enhanced turbulence in the clouds interacting with W28. The highest positive and negative velocities of the components seen in Na I and Ca II absorption toward CD-23 13777 imply that the velocity of the blast wave associated with W28 is at least 150 km/s, a value that is significantly higher than most previous estimates. The line of sight to CD-23 13777 passes very close to a well-known site of interaction between the SNR and a molecular cloud to the northeast. The northeast molecular cloud exhibits broad molecular line emission, OH maser emission from numerous locations, and bright extended GeV and TeV gamma-ray emission. The sight line to CD-23 13777 is thus a unique and valuable probe of the interaction between W28 and dense molecular gas in its environs. Future observations at UV and visible wavelengths will help to better constrain the abundances, kinematics, and physical conditions in the shocked and quiescent gas along this line of sight.

INTRODUCTION
The remnants of supernova explosions are shaped by the interstellar environments in which the explosions occur. While the majority of supernova remnants (SNRs) are classified as shell-type (Green 2019), meaning that their X-ray and radio morphologies have a shell-like appearance, a significant fraction of remnants belong to a class of "mixed-morphology" SNRs (Rho & Petre 1998). These remnants, also known as "thermal composites" (Jones et al. 1998), have shell-like radio morphologies but their X-ray emission is centrally concentrated. In contrast to other types of composite remnants where the center-filled X-ray emission is nonthermal in origin and powered by a central pulsar, the X-ray emission from mixed-morphology SNRs is primarily thermal. Various models, invoking phenomena such as cloud evaporation (White & Long 1991) and thermal conduction (Shelton et al. 1999), have been proposed to explain the origin of the hot interiors of mixed-morphology SNRs. A common characteristic of this class of remnants is that they are observed to be interacting with molecular clouds. The center-filled X-ray morphologies are likely a consequence of the interaction with dense gas. However, the precise mechanism, or combination of mechanisms, by which these remnants attain their distinct morphologies is not yet understood. The SNR W28 is an archetype of the class of mixed-morphology SNRs. A distinct radio shell is seen predominantly to the north and east of the remnant (e.g., Dubner et al. 2000), while thermal X-ray emission is concentrated toward the center (e.g., Rho & Borkowski 2002). A secondary X-ray peak is seen toward the northeast near a well-known site of interaction between the SNR and a molecular cloud (see, e.g., Nicholas et al. 2012). A fainter X-ray shell component is observed toward the southwest. Arikawa et al.
(1999) mapped the northeast molecular cloud in CO (3−2) and (1−0) emission. By separating narrow and broad emission components, they demonstrated the positional offset between shocked and unshocked molecular gas. The north-east cloud seems to have been partially overtaken by the supernova blast wave. A secondary ridge of shocked molecular material lies to the north (e.g., Arikawa et al. 1999;Nicholas et al. 2011). Enhanced velocity dispersions on the side of the northeast cloud facing the center of W28 are clear indications of external disruption of the cloud by the SNR (Nicholas et al. 2011(Nicholas et al. , 2012Maxted et al. 2016). Fortyone 1720 MHz OH masers are found to lie along the edges of the shocked molecular material (Claussen et al. 1997). The maser emission is yet another clear indication of the interaction between the SN shock and dense molecular gas associated with W28. Aharonian et al. (2008) discovered very high energy (VHE) γ-ray emission coincident with molecular clouds in the W28 region. Using the High Energy Stereoscopic System (HESS), Aharonian et al. (2008) identified four bright TeV γ-ray sources near W28. One source (labelled HESS J1801−233) coincides with the shocked northeast molecular cloud. The other three sources (labelled HESS J1800−240A, B, and C) are located to the south of W28 beyond the outer boundary of the radio shell which delineates the present location of the SN shock front (see Figure A1). An extended GeV γ-ray source coincides with the northern HESS source and with the shocked molecular cloud (Abdo et al. 2010;Giuliani et al. 2010). Fainter GeV γ-ray emission is found to be associated with the southern HESS sources (Abdo et al. 2010;Giuliani et al. 2010;Hanabata et al. 2014;Cui et al. 2018). The γ-ray emission likely results from the decay of neutral pions produced through interactions between shockaccelerated cosmic rays and dense molecular gas. The southern HESS sources have been interpreted as ambient molecular clouds illuminated by cosmic rays that escaped from the shell of W28 when the remnant was much younger (Hanabata et al. 2014;Cui et al. 2018). It is not yet clear if the γ-ray emission from the northeast shocked cloud is also due to escaped cosmic rays or to newly-accelerated cosmic rays that are now interacting with dense gas in the post-shock region. Numerous investigations have been carried out to examine the structure and kinematics of the shocked molecular clumps associated with W28 (e.g., DeNoyer 1983;Arikawa et al. 1999;Reach & Rho 2000;Reach et al. 2005;Nicholas et al. 2011Nicholas et al. , 2012Gusdorf et al. 2012;Maxted et al. 2016). These studies provide information on the characteristics of the shocks (i.e., shock type, shock velocity, pre-shock density) driven into the dense clumps by the SN blast wave. Considerably less information is available on the velocity of the SN blast wave itself. Lozinskaya (1974) detected an expansion velocity of 40-50 km s −1 from observations of Hα and [N ii] emission lines in W28. The radial velocities of individual [N ii] filaments are as high as 60 to 80 km s −1 (Lozinskaya 1980). Shock velocities between 60 and 90 km s −1 were inferred by Bohigas et al. (1983) from optical line ratios in comparison with shock models. From their own observations of optical emission lines, Long et al. (1991) found that the shock velocity should be larger than ∼70 km s −1 . A shell of neutral hydrogen surrounding W28 was found by Velázquez et al. (2002) to have an expansion velocity of ∼30 km s −1 . 
The H i shell was interpreted as swept-up interstellar material and thus might be expected to have a velocity significantly less than that of the SN blast wave. While the optical observations of W28 suggest a blast wave velocity of ∼100 km s −1 , much larger values are implied by the temperatures of the northeast and southwest X-ray shell components. Rho & Borkowski (2002), for example, obtained X-ray temperatures of ∼0.6 keV and ∼1.2-1.5 keV for the northeast and southwest shells, respectively. If this X-ray emission results from shocked-heated gas in a region immediately behind the SN blast wave, then the implied shock velocities would range from 700 to 1100 km s −1 . Rho & Borkowski (2002) interpreted the X-ray emission as "fossil radiation" from a time when the remnant was younger and the shock velocity was higher, attributing the properties of the radiation to some combination of cloud evaporation, thermal conduction, and mixing due to various hydrodynamic instabilities. Still, the high X-ray temperatures observed in the interiors of mixed-morphology SNRs such as W28 remain poorly understood. The distance to W28 has also been somewhat controversial. In part, this has been caused by disagreements concerning the systemic velocity of the clouds associated with W28. Arikawa et al. (1999) detected two narrow CO emission components toward W28 with local standard of rest (LSR) velocities of +7 and +21 km s −1 . The OH masers found to be associated with W28 exhibit centroid velocities in the range +5 to +15 km s −1 (Claussen et al. 1997). Lozinskaya (1974) reported a mean LSR velocity for the remnant as a whole of +18 km s −1 , which corresponds to a kinematic distance of 3.6 kpc. However, the H i observations reported by Velázquez et al. (2002) show a very strong selfabsorption feature at +7 km s −1 , which the authors identify as the atomic counterpart to the molecular cloud known to be interacting with W28. Adopting a systemic velocity of +7 km s −1 , Velázquez et al. (2002) derive a kinematic distance to W28 of 1.9 ± 0.3 kpc. One approach to studying the kinematics and physical conditions in shocked and quiescent gas associated with SNRs is to examine the interstellar absorption features that appear in the ultraviolet and visible spectra of stars located behind the remnant. Such an approach has recently been used to obtain a detailed set of physical conditions in gas interacting with the SNR IC 443 (Ritchey et al. 2020). To date, there have been no published studies of interstellar absorption lines (in the UV or visible) toward stars in the W28 region. The purpose of the present investigation is to identify sight lines that can serve as probes of the interstellar material interacting with W28. To this end, we conducted a moderate-resolution ground-based spectroscopic survey of 15 stars in the W28 region. One star in particular (CD−23 13777) proved to be a remarkable probe of shock-accelerated gas in W28. Thus, the majority of this paper will focus on just this sight line. The on-sky location of CD−23 13777 in relation to the TeV γ-ray sources observed with HESS is shown in Figure 1. The remainder of this paper is organized as follows. The observations and procedures used for data reduction are discussed in Section 2. Our basic analysis of the interstellar absorption lines detected toward CD−23 13777 is described in Section 3.1. In Section 3.2, we present evidence of temporal variations in the Na i absorption components detected at high velocity toward CD−23 13777. 
Our results for this sight line are discussed in Section 4 in the context of multiwavelength observations of the gas interacting with W28. Our basic conclusions are summarized in Section 5. A more complete description of our spectroscopic survey of the W28 region is presented in Appendix A. Detailed component structures for the interstellar species observed toward CD−23 13777 are tabulated in Appendix B. OBSERVATIONS AND DATA REDUCTION Observations of the star CD−23 13777 were originally obtained in 2011 as part of a survey of stars probing W28 using the Astrophysical Research Consortium echelle spectrograph (ARCES; Wang et al. 2003) on the 3.5 m telescope at Apache Point Observatory (APO). The echelle spectrograph provides complete wavelength coverage in the range 3800-10200Å at a resolving power of R ≈ 31, 500 (∆v ≈ 9.5 km s −1 ). Fifteen early-type stars in the vicinity of W28 were observed in an effort to discern high-velocity interstellar absorption features that would indicate that the sight line to the star probed shocked gas associated with the SNR. However, only CD−23 13777 showed evidence of high-velocity absorption. Absorption components at velocities exceeding 100 km s −1 are seen in the Na i D lines and in the Ca ii H and K lines toward CD−23 13777. In most of the other directions surveyed, the Na i and Ca ii absorption components exhibit only very modest velocities, with typical values being between −30 and +30 km s −1 . (The Na i absorption profiles toward all 15 observed stars are presented in Figure A2.) At the time of our original survey of the W28 region, the distances to the program stars were uncertain. Spectroscopic parallaxes indicated that the survey stars were at least as far away as the remnant at 1.9 kpc (Velázquez et al. 2002). However, now that trigonometric parallaxes are available for these stars from the Gaia satellite (e.g., Bailer- Jones et al. 2018), their distances are much better constrained. The Gaia Data Release 2 (DR2) parallaxes reveal that all but two of the stars surveyed have distances of less than 1.8 kpc (see Table A1), placing them in the foreground of the remnant. The most distant star that we observed is CD−23 13777, whose Gaia DR2 parallax indicates a distance of 2.4±0.3 kpc. The discovery of high-velocity gas toward CD−23 13777, but not toward the other observed stars, helps to confirm that the distance to W28 is ∼2 kpc. Having identified CD−23 13777 as a unique probe of high-velocity gas associated with W28, we sought to improve the signal-to-noise (S/N) ratio of the data by acquiring additional ARCES spectra of this star. Ultimately, we obtained 13 separate 30 minute exposures of CD−23 13777 over the course of five nights between 2011 September and 2015 August. Individual exposure times were limited to 30 minutes to minimize the number of cosmic-ray hits occurring during each integration. The raw data were reduced using iraf following standard procedures for ARCES spectra (see, e.g., Ritchey & Wallerstein 2012). Telluric absorption lines were removed from the calibrated spectra using observations of the unreddened standard star η UMa, which was observed on each of the five nights when spectra of CD−23 13777 were obtained. The corrected spectra were shifted to the LSR frame of reference and the individual exposures were coadded to produce a final high S/N ratio spectrum. 
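Before moving to the absorption-line analysis, two of the numerical statements made in the Introduction can be checked with a short script: the blast-wave velocities implied by the X-ray shell temperatures, and the kinematic distances corresponding to the +7 and +18 km s−1 systemic velocities. The sketch below is illustrative only; the mean molecular weight, the Galactic constants (R0 = 8.5 kpc, V0 = 220 km s−1), and the coordinates adopted for W28 (l of about 6.4 deg) are assumptions and need not match the values used by the cited authors.

```python
import math

# 1) Post-shock temperature -> shock velocity, from the standard strong-shock
#    relation kT = (3/16) * mu * m_H * v_s**2, with mu ~ 0.6 (fully ionized gas).
keV = 1.602e-9            # erg
m_H = 1.673e-24           # g
mu = 0.6
for kT_keV in (0.6, 1.5):
    v_s = math.sqrt(16.0 * kT_keV * keV / (3.0 * mu * m_H)) / 1.0e5   # km/s
    print(f"kT = {kT_keV} keV  ->  v_s ~ {v_s:.0f} km/s")

# 2) Kinematic (near) distance for a given LSR velocity toward W28,
#    assuming a flat rotation curve with R0 = 8.5 kpc and V0 = 220 km/s.
R0, V0 = 8.5, 220.0
l = math.radians(6.4)      # Galactic longitude of W28 (G6.4-0.1)
for v_lsr in (7.0, 18.0):
    R = R0 / (1.0 + v_lsr / (V0 * math.sin(l)))          # Galactocentric radius
    d_near = R0 * math.cos(l) - math.sqrt(R**2 - (R0 * math.sin(l))**2)
    print(f"v_LSR = {v_lsr:+.0f} km/s  ->  near distance ~ {d_near:.1f} kpc")
```

Both checks land close to the values quoted in the Introduction (roughly 700-1100 km s−1 for the shock velocities, and about 1.9 or 3.6 kpc for the two systemic velocities), which is reassuring given the simple assumptions.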
In order to search for temporal variations in the interstellar absorption profiles, we also produced nightly sum spectra by coadding the two or three exposures obtained on a given night (see Section 3.2). Total exposure times and S/N ratios achieved for each of the five nights are listed in Table 1. Column Densities and Component Structure The most notable features in the ARCES spectra of CD−23 13777 (see Figures 2 and 3) are the high-velocity components detected in the interstellar Na i and Ca ii absorption profiles. Two relatively strong absorption features are seen at LSR velocities of −146 and −110 km s −1 , while two weaker components are found at velocities of +120 and +137 km s −1 . As would be expected, the dominant absorption complex is found near 0 km s −1 . We do not detect any of the high-velocity features in the absorption profiles of the trace neutral species K i or Ca i, nor do we see these com- ponents in the absorption profiles of the molecular species CH or CH + . All four of these latter species, however, show absorption at more moderate velocities. We do not find any convincing evidence for absorption from the CN B−X (0−0) band near 3874Å nor from the A−X (2−0) band of C 2 near 8757Å. Small portions of the coadded spectrum surrounding the detected interstellar lines were isolated and continuum normalized via low-order polynomial fits to the line-free regions. The absorption profiles were then analyzed by means of the multi-component profile synthesis routine ismod (see Sheffer et al. 2008), which assumes a Voigt profile shape for each component included in the fit. The profile synthesis routine returns the best-fitting column density, Doppler b-value, and velocity of each component through a simple root-mean-square minimizing technique. However, ARCES spectra present a particular challenge when analyzing interstellar lines due to the coarse resolution of the data. High and ultra-high resolution observations of the interstellar medium (ISM) have revealed that individual interstellar components are typically characterized by very narrow intrinsic widths (i.e., b 2 km s −1 for trace neutral species such as Na i and K i; Welty & Hobbs 2001;Price et al. 2001;Crawford 2001). Such narrow components will not be resolved by ARCES, which has a velocity resolution of ∼9.5 km s −1 . Thus, when fitting the absorption profiles of the lines observed toward CD−23 13777, we have included a sufficient number of components so that the derived b-values fall within a range typical of interstellar lines (i.e., 0.5 b 4.5 km s −1 ). In most cases, the lines are not severely saturated so that the adopted component structure will not have too large an effect on the column densities obtained. The results of our profile synthesis fits for the promi- In fitting the main Na i absorption complexes at low velocity (which are severely saturated), we used the component structure found from the K i λ7698 line as a guide. However, while the resulting N(Na i)/N(K i) ratios appear to be consistent with the typical interstellar value of ∼90 (e.g., Welty & Hobbs 2001), we still consider the Na i column densities associated with these low velocity components to be highly uncertain. Another complication that arose in fitting the Na i profiles is that the weak components at high positive velocity in the λ5889 line are partially blended with the stronger components at high negative velocity in the λ5895 line (see Figure 3). 
Because of this blending, we used the high positive velocity features seen in the λ5895 line as templates for fitting and removing the corresponding components in the λ5889 profile. Since the sight line to CD−23 13777 is adjacent to what is likely a site of interaction between shock-accelerated cosmic rays and molecular gas (Figure 1), and since Li is a byproduct of such interactions (e.g., Meneguzzi et al. 1971), it is important to establish whether Li i absorption is detectable in this direction. In Figure 4, we show the region of the coadded spectrum of CD−23 13777 surrounding the Li i λ6707 transition. The very weak Li i feature is detected at a significance level of greater than 5σ. However, due to the coarse resolution of the ARCES spectra, we are unable to resolve the individual fine-structure lines of the Li i doublet, which have a velocity separation of only 6.7 km s −1 . Nor can we evaluate the Li isotope ratio from the ARCES data. We can, however, determine the total Li i column density along the line of sight. To accomplish this, we created a template for the Li i absorption profile based on the three strongest components seen in the K i line (see Table B1). We then fit this template to the observed Li i spectrum, keeping the relative velocities, fractional column densities, and b-values of the components fixed. The result of this fit is shown by the solid red line in Figure 4. The figure also shows two weak diffuse interstellar bands (DIBs) that are detected on either side of the Li i feature. (Many other DIBs are detected along the line of sight to CD−23 13777. However, the analysis of these features is beyond the scope of the present paper.) The total equivalent widths and line-of-sight column densities of the atomic and molecular species observed toward CD−23 13777 are presented in Table 2. (We do not list column densities for the Na i D lines because the line profiles are severely saturated at low velocity, rendering the total column densities uncertain.)

Table 2. Total equivalent widths (in mÅ) and column densities (in cm −2 ) of the atomic and molecular species observed toward CD−23 13777.

The equivalent width errors reported in Table 2 account for both photon noise and errors in continuum placement, while the column density errors include an additional term based on the degree of saturation in the line profile. We note that there may be some unresolved saturated absorption that is not fully accounted for in our fits to the Ca ii profiles, since the stronger line of the Ca ii doublet yields a somewhat smaller column density than the weaker line. However, the two results for the total Ca ii column density agree with one another at about the 2σ level. To place our column density determinations for the line of sight to CD−23 13777 in context, we can compare them with the trends revealed by more general surveys of the ISM. For example, the N(Li i)/N(K i) ratio seen toward CD−23 13777 is fully consistent with the trend between these two species obtained from surveys of diffuse molecular clouds (e.g., Welty & Hobbs 2001; Knauth et al. 2003). The total CH column density also seems to be consistent with the amount of K i present (e.g., Welty & Hobbs 2001; Welty et al. 2006). On the other hand, the CH + column density toward CD−23 13777, which we find to be N(CH + ) ≈ 2.0 × 10^14 cm −2 , is among the highest values known for sight lines probing diffuse clouds in the local Galactic ISM (e.g., Welty et al. 2006; Smoker et al. 2014).
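The template approach used for the Li i λ6707 line can be sketched as follows: the relative velocities, fractional column densities, and b-values of the three strongest K i components are held fixed, and only an overall scale factor (proportional to the total Li i optical depth, and hence column density) is varied. The component values below are placeholders, not the measured K i parameters.

import numpy as np
from scipy.optimize import curve_fit

fixed_components = [            # (velocity, fractional column, b-value) -- placeholder values
    (+5.0, 0.55, 1.5),
    (+13.0, 0.30, 2.0),
    (+32.0, 0.15, 1.8),
]

def li_template(velocity, total_tau):
    """Transmitted flux of the fixed-structure template scaled by a single free parameter."""
    tau = np.zeros_like(velocity)
    for v0, frac, b in fixed_components:
        tau += total_tau * frac * np.exp(-((velocity - v0) / b) ** 2)
    return np.exp(-tau)

v_grid = np.linspace(-50.0, 80.0, 261)
print(li_template(v_grid, 0.05).min())      # depth of the synthetic template
# best_tau, cov = curve_fit(li_template, v_grid, li_spectrum, p0=[0.05])   # li_spectrum is hypothetical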
Still, while the N(CH + )/N(CH) ratio, which equals ∼3 for the entire line of sight, is higher than average, it is not as high as observed toward some Galactic regions such as the Pleiades, where this ratio can reach values of 20 or more (Ritchey et al. 2006). The column density of Ca i also seems to be somewhat high compared to the amount of K i seen in this direction, but the N(Ca i)/N(Ca ii) ratio is not unusual (e.g., Welty et al. 2003). Further insight may be gained from an examination of the column densities in distinct velocity components along the line of sight. For this exercise, we will sum the column densities of any closely-spaced components that together constitute a single distinct feature. In this way, uncertainties in the detailed underlying component structure of the various species will not significantly affect our interpretation. In Table 3, we present the column densities of nine such distinct components identified toward CD−23 13777.

Table 3. Column densities (in cm −2 ) for distinct velocity components observed toward CD−23 13777.

The velocities listed in Table 3 are the column-density weighted mean velocities averaged over all of the species in which a given component is detected (see Table B1). The Ca ii and Na i column densities were obtained by taking the weighted means of the results from the two lines of the doublets. (Again, we do not list column densities for the main Na i absorption complexes at +5 and +32 km s −1 due to significant saturation in the line profiles at these velocities.) The N(Na i)/N(Ca ii) ratio varies considerably among the various components identified toward CD−23 13777. The lowest values of this ratio are found for the high positive velocity components at +120 and +137 km s −1 , where N(Na i)/N(Ca ii) ≈ 0.06. Somewhat higher values are found for the high negative velocity components. We find N(Na i)/N(Ca ii) ratios of 0.42 and 0.16 for the components at −146 and −110 km s −1 , respectively. The K i column densities derived for the main low velocity absorption complexes at +5 and +32 km s −1 imply respective N(Na i)/N(Ca ii) ratios of ∼11 and ∼3 (assuming the ratio of Na i to K i is ∼90; e.g., Welty & Hobbs 2001). Large variations in the N(Na i)/N(Ca ii) ratio are typical along sight lines probing gas associated with SNRs. Similar variations have been seen, for example, toward stars behind the Vela SNR (Danks & Sembach 1995), the Monoceros Loop SNR (Welsh et al. 2001), and the SNRs S147 and IC 443 (Welsh & Sallmen 2003; Ritchey et al. 2020). Since Ca is usually heavily depleted onto interstellar dust grains (e.g., Crinklaw et al. 1994), while Na is only lightly depleted, the N(Na i)/N(Ca ii) ratio can reach values as high as 10-100 in cold, quiescent interstellar clouds (e.g., Crawford 1992).
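The two averaging operations used for Table 3 are straightforward; a minimal Python sketch, with hypothetical subcomponent values, is given below.

import numpy as np

def weighted_mean_velocity(velocities, columns):
    """Column-density weighted mean velocity of the subcomponents of a distinct feature."""
    velocities, columns = np.asarray(velocities), np.asarray(columns)
    return np.sum(columns * velocities) / np.sum(columns)

def doublet_weighted_mean(values, errors):
    """Inverse-variance weighted mean of the column densities from the two doublet lines."""
    w = 1.0 / np.asarray(errors) ** 2
    mean = np.sum(w * np.asarray(values)) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

# Hypothetical subcomponents of a complex near -146 km/s (columns in cm^-2):
print(weighted_mean_velocity([-150.0, -146.0, -141.0], [2e11, 5e11, 1e11]))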
The low N(Na i)/N(Ca ii) ratios that are typical of high velocity gas associated with SNRs may then be ascribed to the destruction of dust grains by SN shocks, although ionization effects may also be important considering the difference in ionization potential between Na i and Ca ii. The N(Ca i)/N(Ca ii) ratio can be useful for placing constraints on the ionization balance in diffuse clouds independent of any depletion effects. However, for the line of sight to CD−23 13777, absorption from Ca i is detected only at low to moderate velocity. The N(Ca i)/N(Ca ii) ratios that we find for the components at −29, +5, and +32 km s −1 are all ∼0.01, which is a typical value for sight lines probing diffuse clouds (e.g., Welty et al. 2003). The N(Ca i)/N(Ca ii) ratio seems to be somewhat higher for the component at +59 km s −1 . However, this determination is uncertain because the associated absorption features are very weak. It is unusual to see absorption from species such as Ca i and CH + at moderately high velocity (such as we see near +59 km s −1 toward CD−23 13777). However, similar moderately high velocity Ca i and CH + absorption components were detected along sight lines probing IC 443 (Hirschauer et al. 2009;Ritchey et al. 2020). The N(CH + )/N(CH) ratios for both of the main low velocity components toward CD−23 13777 are significantly higher than the typical interstellar value of ∼0.9 (e.g., Welty et al. 2006). We find respective N(CH + )/N(CH) ratios of 2.5 ± 0.2 and 3.3 ± 0.5 for the components at +5 and +32 km s −1 . Detailed chemical models have revealed that the CH + abundance is a sensitive tracer of the dissipation of turbulence in diffuse clouds (e.g., Godard et al. 2009Godard et al. , 2014. The high N(CH + )/N(CH) ratios we find for the low velocity components toward CD−23 13777 may therefore be indicative of enhanced turbulence in the gas associated with these absorption features. Temporal Variations As discussed in Section 2, the ARCES spectra of CD−23 13777 were acquired on five separate occasions over the course of approximately four years. While this was done mainly in an effort to build up the S/N ratio in the final coadded spectrum, it also affords us the opportunity to determine whether any changes occurred in the interstellar absorption profiles during that four year period. To examine this issue, we produced separate coadded spectra of CD−23 13777 for each of the five nights when data were obtained. In searching for temporal changes, we will focus our effort on the Na i D and Ca ii H and K lines since these lines are intrinsically strong and are the only lines that show high velocity interstellar absorption. From an examination of the equivalent widths of the Ca ii absorption features in the nightly sum spectra, we find that any variations that may be present do not significantly rise above the level of the uncertainties. (In other words, any apparent changes in the Ca ii equivalent widths are at the level of ∼3σ or less.) This is not entirely unexpected since the S/N ratios in the nightly coadded spectra are rather low in the vicinity of the Ca ii lines. (A typical value is ∼30.) Much higher S/N ratios are achieved in the nightly sum spectra of CD−23 13777 near the Na i D lines (see Table 1), meaning that any temporal changes will be easier to discern in Na i. 
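A simple way to state whether an apparent epoch-to-epoch change in an equivalent width (or column density) is significant, as done above for the Ca ii lines, is to express the difference in units of its combined uncertainty. The numbers in the sketch below are hypothetical.

import numpy as np

def change_significance(w1, err1, w2, err2):
    """Significance (in sigma) of the difference between two measurements from
    different nights, assuming independent Gaussian uncertainties."""
    return abs(w2 - w1) / np.sqrt(err1 ** 2 + err2 ** 2)

# Hypothetical Ca ii K equivalent widths (mA) from two nightly sum spectra:
print(change_significance(52.0, 6.0, 66.0, 7.0))   # ~1.5 sigma -> not a significant change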
While we do not find any convincing evidence of changes in the Na i absorption profiles for components at low to moderate velocity (i.e., −30 v LSR +60 km s −1 ), we do see evidence for weak variations in the absorption components at high negative velocity. (The Na i components at high positive velocity are too weak to allow for any meaningful examination of temporal changes.) In Figure 5, we plot the absorption profiles of the Na i complexes near −146 and −110 km s −1 as seen in the nightly sum spectra and in the final coadded spectrum. (In these data, the high positive velocity components of the λ5889 line, which are partially blended with the high negative velocity features in the λ5895 profile, have been removed.) Immediately apparent in the figure is the increase in absorption that occurred near −154 km s −1 in the spectrum taken on 2015 May 24. (It is important to note that the increase in absorption occurs in both of the Na i D lines as would be expected if this is a real change and not just an artifact of noise or a telluric feature.) Indeed, it was the appearance of this weakly variable component in the May 24 spectrum that prompted us to acquire an additional spectrum of CD−23 13777 later that same year. In the subsequent spectrum taken on 2015 Au- gust 20, the Na i profiles near −154 km s −1 seem to have returned to their previous norm. However, upon further examination, other weak yet systematic changes can be seen in the Na i absorption profiles for the components found at high negative velocity. For example, absorption from the weaker complex at −110 km s −1 seems to vary systematically from a maximum on 2012 August 26 to a minimum on 2015 August 20. In order to place more quantitative constraints on these observed changes, we analyzed the high negative velocity Na i absorption profiles from the nightly sum spectra by means of our profile fitting routine, adopting the same procedures as were used in fitting the coadded data. The resulting column densities of the two main absorption complexes seen at high negative velocity are listed in Table 4 for each of the five nights. As anticipated, the column density of the absorption complex near −146 km s −1 increased substantially in the spectrum taken on 2015 May 24 but then reverted to a more typical value by the time the next spectrum was obtained on 2015 August 20. Meanwhile, the column density of the absorption complex near −110 km s −1 reached a maximum in the spectrum taken on 2012 August 26 and then decreased considerably by 2013 April 24, maintaining this lower value over the next few years. (Again, these changes are seen in both lines of the Na i doublet simultaneously, which helps to confirm that these are real changes in column density rather than random variations due to noise or telluric interference.) The column-density weighted mean velocities of the two main absorption complexes seen at high negative velocity show only very minor changes over the course of the observations. Given the low resolution of the ARCES data, it is difficult to determine whether these changes in velocity are real. However, if the underlying component structure of a given absorption complex changes with time (for example, if one subcomponent becomes stronger relative to another), then small changes in the column-density weighted mean velocity would be expected. Temporal changes in absorption components detected along sight lines probing SNRs are fairly common. 
The most famous examples are the many sight lines probing the Vela SNR that have shown dramatic changes in their Na i and Ca ii absorption profiles over time (e.g., Hobbs et al. 1991;Danks & Sembach 1995;Cha & Sembach 2000;Welty et al. 2008;Rao et al. 2016Rao et al. , 2017. Moderately high velocity time-variable Na i components have also recently been found toward stars probing the Monoceros Loop SNR (Dirks & Meyer 2016) and IC 443 (Ritchey et al. 2020). While the variations we see in the high negative velocity Na i absorption components toward CD−23 13777 are perhaps not as dramatic as in some of these other examples, they nevertheless help to strengthen the connection between time-variable interstellar components and gas associated with SNRs. DISCUSSION The most important result of our spectroscopic survey of stars in the W28 region is our discovery of high velocity interstellar absorption along the line of sight to CD−23 13777. Our detection of Na i and Ca ii absorption at both high positive and high negative velocity in this direction suggests that we are viewing both sides of the expanding shell of the remnant. The velocity of the dominant interstellar absorption component toward CD−23 13777 at +5 km s −1 is consistent (given the relatively low resolution of the ARCES spectra) with a systemic velocity for W28 of +7 km s −1 (Arikawa et al. 1999;Velázquez et al. 2002). Relative to the systemic velocity (assuming v sys = +7 km s −1 ) the highest positive and negative velocity absorption components toward CD−23 13777 have velocities of +130 and −153 km s −1 (see Table 3). This potentially suggests an asymmetry in the expansion of the SNR shell, with the shell expanding more rapidly on the near side than on the far side. The other major absorption components observed toward CD−23 13777 are distributed more symmetrically about the systemic velocity. The second highest positive and negative velocity components have velocities of +113 and −117 km s −1 relative to v sys . At the edges of the main absorption complex toward CD−23 13777 there is a pair of weak components with rela-tive velocities of +52 and −58 km s −1 . Adjacent to the main interstellar component is another pair of components with relative velocities of +25 and −36 km s −1 . This last pair exhibits velocities that are similar to the expansion velocity inferred for the H i shell observed by Velázquez et al. (2002). The velocities of the high velocity components discovered toward CD−23 13777 are significantly higher than most previous estimates of the velocity of the shock associated with W28. Observations of optical emission lines in W28 have generally suggested shock velocities of v s 100 km s −1 (Lozinskaya 1974;Bohigas et al. 1983;Long et al. 1991). Our observations indicate that the shock velocity could be 150 km s −1 or greater. Note that the velocities observed along the line of sight to CD−23 13777 are simply the radial velocities, which represent lower limits to the actual shock velocities if there are significant transverse components to the motion. Furthermore, the gas motions we are detecting in absorption may be due to shocks driven into pre-existing interstellar clouds by the SN blast wave. In such a scenario, the cloud shocks will generally have velocities that are less than that of the blast wave moving through the lower density intercloud medium. The blast wave velocity might therefore be considerably higher than 150 km s −1 . 
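The conversion from observed LSR velocities to velocities relative to the remnant, as quoted in the discussion above, is a simple offset by the adopted systemic velocity; the short sketch below reproduces the quoted values.

v_sys = 7.0   # adopted systemic velocity of W28 in km/s (Arikawa et al. 1999)
observed_lsr = {"highest negative": -146.0, "second negative": -110.0,
                "second positive": 120.0, "highest positive": 137.0}
relative = {name: v - v_sys for name, v in observed_lsr.items()}
print(relative)   # highest negative -> -153 km/s, highest positive -> +130 km/s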
The position of the sight line to the background star CD−23 13777 seems to have been particularly fortuitous. This sight line is located very close to the edge of the shocked northeast molecular cloud on the side of the cloud facing the interior of the SNR (see Figure A1). The velocity dispersion distributions measured for various molecular species associated with the northeast cloud (Nicholas et al. 2011(Nicholas et al. , 2012Maxted et al. 2016) strongly suggest that the interior face of the cloud was the point of contact between the cloud and the SN shock. At the interface between the shocked and unshocked portions of the northeast cloud, a half ring of 1720 MHz OH masers is found with velocities in the range +7 to +14 km s −1 (Claussen et al. 1997; see also Nicholas et al. 2012). These velocities are similar to those of the main interstellar components toward CD−23 13777 (see Table B1). The northeast molecular cloud is also a site of strong GeV and TeV γ-ray emission (Aharonian et al. 2008;Abdo et al. 2010;Giuliani et al. 2010) and therefore most likely a site of cosmic-ray acceleration. The shocked portion of the northeast cloud might thus be expected to exhibit an enhanced Li abundance since Li is produced, in part, by spallation and fusion reactions between cosmic ray particles and interstellar nuclei (Meneguzzi et al. 1971;Lemoine et al. 1998). While the Li i/K i ratio toward CD−23 13777 does not appear to be unusual, a better probe of Li production is the 7 Li/ 6 Li isotope ratio, which we are unable to determine due to the low resolution of the ARCES spectra. Additional observations of CD−23 13777 are needed to discern the physical conditions in the high velocity absorption components detected in this direction. Spectroscopic observations in the UV, using HST /COS for example, would allow the densities, temperatures, and thermal pressures of the components to be determined through use of the many UV diagnostics available in that wavelength regime. An analysis of this kind, using UV observations of atomic finestructure lines to discern the physical conditions in shocked and quiescent gas, has recently been applied to sight lines in IC 443 (Ritchey et al. 2020). In that study, the high velocity components were found to exhibit a combination of enhanced thermal pressures and significantly reduced dustgrain depletions, indicating that the absorption is tracing gas in a cooling region far downstream from shocks driven into neutral gas clumps. A similar scenario may explain the high velocity components observed toward CD−23 13777. However, the near symmetry of the components seen at high positive and high negative velocity could indicate that a dense shell is forming as the remnant enters the radiative phase of SNR evolution (e.g., Chevalier 1999). Ritchey et al. (2020) reported the detection of a very high velocity absorption component in highly-ionized species toward a star behind IC 443. The velocity of this component (which is ∼600 km s −1 ) is consistent with the shock velocities inferred from observations of soft thermal X-ray emission from IC 443 (e.g., Troja et al. 2006), but is much higher than the shock velocity of ∼100 km s −1 typically quoted for this remnant. IC 443 is a mixed-morphology SNR like W28. 
Thus, it would be interesting to determine whether sight lines through W28 (such as that toward CD−23 13777) also exhibit absorption from highly-ionized species at velocities close to those derived from the temperatures of the X-ray shell components (e.g., Rho & Borkowski 2002). Such a determination would require absorption-line observations in the UV. Higher resolution ground-based spectra of CD−23 13777 near the Na i and Ca ii lines would enable the complex component structure of those absorption features to be examined in much greater detail than is possible from the ARCES spectra. Moreover, spectroscopic monitoring of the absorption complex at high negative velocity would help to confirm the temporal variations seen in the ARCES data. High-resolution spectra at very high S/N would be required to determine the Li isotope ratio in the main interstellar absorption component toward CD−23 13777. A lower than average 7 Li/ 6 Li ratio, as was detected for a line of sight in IC 443 (Taylor et al. 2012), would be expected in the case of Li production by cosmic rays. A measurement of the Li isotope ratio toward CD−23 13777 could help to clarify the relationship between the +7 km s −1 molecular cloud in W28 and the source of the cosmic rays responsible for the γ-ray emission. CONCLUSIONS Moderately high resolution optical spectra of 15 early-type stars in the vicinity of W28 were acquired to search for highvelocity interstellar absorption components associated with shocked gas in the SNR. Along one line of sight, toward the background star CD−23 13777, we find numerous Na i and Ca ii absorption components at high positive and high negative velocity. The Na i/Ca ii ratios in these high velocity components are significantly lower than those determined for the components at low velocity, suggesting that the high velocity components have been subjected to grain destruction by shock sputtering. The high column densities of CH + , and the high CH + /CH ratios, for the components found at relatively low velocity toward CD−23 13777 may be indicative of enhanced turbulence in the clouds interacting with W28. The highest positive and negative velocities of the Na i and Ca ii components detected toward CD−23 13777 imply that the velocity of the shock wave associated with W28 is at least 150 km s −1 . This is a significantly higher value for the shock velocity than most previous observations of W28 have indicated. The detection of high-velocity interstellar absorption toward CD−23 13777, but not toward the other observed stars, helps to confirm that the distance to W28 is ∼2 kpc. Temporal changes in the Na i absorption complex seen at high negative velocity toward CD−23 13777 serve to strengthen the connection between time-variable interstellar components and gas associated with SNRs. The line of sight to CD−23 13777 passes very close to the apparent point of contact between the SNR shock and the northeast molecular cloud in W28. This cloud exhibits broad molecular line emission, OH maser emission from numerous locations, and bright extended GeV and TeV γ-ray emission. The CD−23 13777 sight line is thus a unique and valuable probe of the interaction between W28 and dense molecular gas in its environment. Future observations of CD−23 13777 at UV wavelengths will enable a more detailed examination of the physical conditions in the shocked and quiescent components along the line of sight. 
In addition, high-resolution ground-based spectra of CD−23 13777 acquired at very high S/N will allow a determination of the Li isotope ratio, which could provide insight into the diffusion of cosmic rays accelerated at the SN shock and their subsequent interaction with dense molecular gas in the vicinity of W28.

Welty, D. E., & Hobbs, L. M. 2001, ApJS, 133, 345
Welty, D. E., Hobbs, L. M., & Morton, D. C. 2003, ApJS, 147, 61
Welty, D. E., Simon, T., & Hobbs, L. M. 2008, MNRAS, 388, 323
White, R. L., & Long, K. S. 1991

APPENDIX A

The on-sky locations of the targets in relation to the radio shell of W28, and the TeV sources observed by HESS, are shown in Figure A1.

Figure A1. On-sky locations of the 15 early-type stars in the W28 region surveyed with ARCES. Solid symbols are used for stars whose Gaia DR2 parallaxes place them at distances greater than 1.9 kpc. Stars represented by open symbols are likely foreground targets. Red contours indicate the 4, 5, and 6σ significance levels of the TeV sources observed by HESS (Aharonian et al. 2008) with labels as in Figure 1. Plus signs indicate the positions of the 41 1720 MHz OH masers observed by Claussen et al. (1997). The dashed circle gives the approximate position and size of the radio shell associated with W28 (Green 2019).

The original concept was to survey stars that could serve as probes of the kinematics and physical conditions in the clouds interacting with W28. Thus, targets were chosen that were early-type stars bright enough for ARCES observations (i.e., V ≲ 11) and that had spectroscopic distances placing them either within or behind the SNR (i.e., d ≳ 1.9 kpc). We deliberately chose targets that could potentially probe both the northern radio shell of W28, including the northern TeV source (W28-North), and the southern HESS sources (A, B, and C). The raw echelle observations were reduced following the same procedures as described in Section 2. The final, reduced ARCES spectra of the Na i λλ5889, 5895 and K i λ7698 lines toward all 15 targets are displayed in Figure A2. As is readily apparent from the figure, high-velocity (>100 km s −1 ) interstellar absorption is detected only toward CD−23 13777. (The same conclusion is reached if one examines the Ca ii absorption lines.) Initially, this finding was somewhat surprising. In other cases where a SNR is interacting with diffuse atomic and/or molecular clouds in its vicinity, one can typically find multiple sight lines displaying high-velocity Na i and Ca ii absorption features (e.g., near the Vela SNR and IC 443; Danks & Sembach 1995; Cha & Sembach 2000; Welsh & Sallmen 2003; Hirschauer et al. 2009; Ritchey et al. 2020). However, after reliable distances to our target stars became available, from trigonometric parallaxes measured by the Gaia satellite (see Table A1), it became clear that most of the stars we observed were foreground stars. (Hence, the majority of this paper focuses on the line of sight toward CD−23 13777.) For completeness, we present in Table A2 the total equivalent widths of the prominent interstellar absorption lines observed toward all 15 stars surveyed with ARCES. The star HD 313599 has a Gaia DR2 distance of 2.2 ± 0.3 kpc and so may also lie behind the SNR (whose distance is estimated to be 1.9 ± 0.3 kpc; Velázquez et al. 2002). This star lies along the northern edge of the radio shell associated with W28 (Figure A1). Thus, any shocked gas in this direction may have a large transverse component to its motion and would therefore not appear to be at high velocity from our perspective. Moreover, the shocked gas components would be blended with absorption from quiescent gas near the systemic velocity of W28. There is a weak Na i (and Ca ii) absorption component near +35 km s −1 toward HD 313599 that is not seen toward most of the other stars in
the region (Figure A2). This component could trace either a low-velocity shock or a high-velocity shock with a large transverse component. The low N(Na i)/N(Ca ii) ratio for this component (∼0.2) supports this hypothesis. The star CD−23 13793 lies along the southeast boundary of the radio shell associated with W28 (Figure A1). While this sight line exhibits none of the very high velocity Na i and Ca ii absorption components seen toward CD−23 13777, it does display fairly strong and complex absorption extending from −60 to +70 km s −1 (Figure A2). The distance to this star derived from the Gaia DR2 parallax is 1.1 (+3.0/−0.5) kpc, which would suggest that the star lies in front of the SNR. However, the Gaia distance has a large associated uncertainty (with an upper bound equal to ∼4 kpc). Given the strong and complex interstellar absorption seen in this direction, and the high degree of interstellar reddening (see Table A1), it is probably more likely that this star does in fact lie behind the SNR (and the associated molecular clouds with which it is interacting).

Table A1. Stellar and observational data for the 15 targets in the W28 region surveyed with ARCES. Spectral types and B and V magnitudes are from the SIMBAD database. Values of E(B−V ) were calculated using intrinsic colors from Wegner (1994). Since the luminosity class of HD 313618 is not known, the reddening value is only approximate. The distances listed are from trigonometric parallaxes measured by the Gaia satellite (Bailer-Jones et al. 2018).

Table A2. Total equivalent widths (in mÅ) of the Na i λλ5889, 5895, Ca ii λλ3933, 3968, K i λ7698, Ca i λ4226, CH λ4300, and CH + λ4232 lines toward the 15 stars in the W28 region surveyed with ARCES. Errors in equivalent width are calculated as the product of the width of each individual component and the root mean square of the noise in the continuum. The individual component errors are then added in quadrature to yield the total equivalent width errors. 3σ upper limits are given in cases where a line is not detected.

Figure A2. ARCES spectra of the Na i λλ5889, 5895 and K i λ7698 lines toward 15 stars in the W28 region. The panels are arranged to approximate the on-sky positions of the targets as shown in Figure A1. Only toward CD−23 13777 do we find evidence of high-velocity interstellar absorption. (Note the overlap between the high negative velocity λ5895 features and the high positive velocity λ5889 components in this direction.) The weak absorption features outside the velocity range of the main interstellar components toward HD 164265 are weak stellar absorption lines.

APPENDIX B: DETAILED COMPONENT STRUCTURE TOWARD CD−23 13777

In Table B1, we present the component parameters derived from profile synthesis fits to the interstellar absorption lines detected toward CD−23 13777. In particular, we list the LSR velocity, the column density, and the Doppler b-value for each of the components included in the fits for the different species. The fits themselves are presented in Figures 2 and 3. We do not list column densities for the main Na i components at low velocity because the absorption associated with these components is too strongly saturated to allow for accurate column density determinations.
Table B1. Velocities (in km s −1 ), column densities (in cm −2 ), and Doppler b-values (in km s −1 ) derived from profile synthesis fits to the interstellar absorption lines detected toward CD−23 13777. The Ca ii and Na i column densities listed here were obtained by taking the weighted means of the results from the two lines of the doublets. (No column densities are given for the main Na i components between −16 and +41 km s −1 because the absorption associated with these components is too strongly saturated to allow for accurate column density determinations.)
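The equivalent-width error prescription given in the Table A2 note, and the 3σ upper limits quoted for non-detections, can be sketched as follows; the component width and rms noise values are hypothetical.

import numpy as np

def total_ew_error(component_widths_mA, rms_noise):
    """Each component contributes (component width) x (rms continuum noise); the
    individual contributions are added in quadrature, following the Table A2 note."""
    per_component = np.asarray(component_widths_mA) * rms_noise
    return np.sqrt(np.sum(per_component ** 2))

def upper_limit_3sigma(ew_error):
    """3-sigma upper limit quoted when a line is not detected."""
    return 3.0 * ew_error

print(upper_limit_3sigma(total_ew_error([120.0], 0.03)))   # ~11 mA for a single broad window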
Impact of high hydrostatic pressure on the micellar structures and physicochemical stability of casein nanoemulsion loading quercetin

Highlights
• High hydrostatic pressure could effectively disintegrate the micellar structure.
• HHP-induced casein showed better emulsifying activity and loaded more quercetin.
• 300 and 400 MPa treatments were found to create uniform nanoparticles.
• The nanoemulsion prepared by 500 MPa showed the best physical and ionic stability.

Introduction

Casein is the most abundant protein in bovine milk, accounting for about 80% of the total protein content (Dalgleish, 2011). It consists of αs1-, αs2-, β- and κ-casein, with an average micelle diameter of about 150-200 nm (de Kruif, 1998). Casein can be considered a highly structured protein with discrete hydrophobic and hydrophilic units, as it tends to self-assemble, spontaneously combining to form micelles (Moitzi, Portnaya, Glatter, Ramon, & Danino, 2008). The strong interactions between α- and β-caseins, as well as the presence of charged κ-casein, contribute to the structural integrity of the micelles and make them stable against aggregation. Thus, casein micelles are believed to be natural nano-vehicles and are potentially useful as a delivery system for drugs and bioactive substances (Ranadheera, Liyanaarachchi, Chandrapala, Dissanayake, & Vasiljevic, 2016). However, the loading capacity of casein is generally lower than expected, especially for embedding water-insoluble nutrients, even though the interior of the micelle is fairly hydrophobic (Menendez-Aguirre, Kessler, Stuetz, Grune, Weiss, & Hinrichs, 2014). This is due to the micellar integrity maintained by hydrophobic attractive forces and by colloidal calcium phosphate (CCP) acting as a bridge, which largely limits the encapsulation efficiency (EE) of casein. Therefore, artificial caseinates (e.g. sodium caseinate and calcium caseinate) without micelles are more commonly used in the dairy and food industry as emulsifiers instead of natural casein. However, with the high demands of food processing and the clean labeling of products, fewer chemical modifications should be introduced into the natural protein. Hence, it is necessary to explore green physical technologies aiming to improve the encapsulation capacity of casein without using any chemical methods. In recent years, a large number of studies have focused on the application of the high hydrostatic pressure (HHP) technique to protein structural modification (Cepero-Betancourt, Opazo-Navarrete, Janssen, Tabilo-Munizagab, & Perez-Won, 2020; Tran, Lafarge, Pradelles, Perrier-Cornet, Cayot, & Loupiac, 2019; Yao, Jia, Lu, & Li, 2022). HHP is a novel non-thermal and environmentally friendly process that can break non-covalent bonds, e.g. ionic, hydrophobic and hydrogen bonds, without affecting the covalent bonds of small molecular compounds. Thus, it is believed to modify only the higher-order three-dimensional structure of a protein while keeping the original nutrition, color and flavor of natural foods. In a previous study, Bai et al. (2021) demonstrated that HHP treatment (150 MPa) of myosin could improve its emulsifying activity by inducing moderate depolymerization and spiral unfolding of the myosin structure. Qin et al. (2013) showed that HHP treatment could result in gradual unfolding of walnut protein isolate structure, further enhancing its emulsifying and foaming properties.
Besides, the current research has found that the HHP process could largely improve the rehydration behaviors of micellar casein powders via dissociating the CCP connections (Ni et al., 2021). 500 MPa treatment was used to increase the free calcium content of casein solution by two times, demonstrating the effective disintegration of casein micelles. As mentioned above, the integrity of casein micelle might limit its encapsulation capability. This is due to the presence of CCP preventing the exposure of the interior hydrophobic groups of casein micelle, making the interior hydrophobic groups difficult to combine with those water-insoluble nutrients. In that case, HHP shows great potential in restructuring the micelles and creates the possible amphipathic nanoparticles to load the bioactive substances. Nassar et al. (2021) recently used the 100-500 MPa HHP on the micellar casein produced from caprine milk and it found the emulsifying activity of treated sample was significantly ameliorated. However, there still lacks enough evidence to prove the effect of HHP on the structure of micellar casein when used in the delivery system to protect the water-insoluble nutrients. Quercetin is a natural flavonoid compound present in a variety of fruits and vegetables (Maheshwari, Kumar, Bhadauria, & Mishra, 2022). Studies have shown that quercetin exhibits antioxidant, antiinflammatory, antiviral, antibacterial and anti-cancer activities, which makes it have a broad application prospect in food, medicine and other fields (Hai et al., 2020;Shanmugam, Ganguly, & Priya, 2022;Xu, Hu, Wang, & Cui, 2019). However, quercetin is classified in Biopharmaceutics Classification System (BCS) Class II, indicating the poor aqueous solubility and the low oral bioavailability greatly limit its commercial value. Therefore, oil-water emulsions or nanoparticles are often used to encapsulate quercetin aiming to enhance the solubility and stability (Cao et al., 2021;Shanmugam, Rengarajan, Nataraj, & Sharma, 2022). Currently, micellar casein has been used to encapsulate quercetin to improve its bioavailability (Ghayour et al., 2019;Penalva, Esparza, Morales-Gracia, Gonzalez-Navarro, Larraneta, & Irache, 2019). However, due to the existence of highly structured self-assembly system, the EE of quercetin is quite poor. Casein, as a nano-sized protein with the hydrophobic nucleus, is expected to be used as the ideal carrier of quercetin after proper HHP processing and regulation (Tang, 2021). Therefore, this study attempts to use the HHP technique to prepare a stable casein nanoparticle system containing quercetin as a model hydrophobic bioactive substance, which is of great significance for expanding the application scope of casein in the food industry. In this study, micellar casein solution was firstly treated by HHP process from 100 to 500 MPa with a holding time of 15 min. The emulsification activity index (EAI) and the emulsification stability index (ESI) were used to evaluate the casein nanoparticles, which subsequently loaded with quercetin to form the nanoemulsion. Then the particle size, polydispersity index (PDI), Zeta-potential and EE were determined to characterize the nanoemulsion and compare the effect of different HHP. In addition, the thermal, pH, ions and physical stability of nanoemulsion were also investigated, which finally gave the strong evidence of natural casein acting as the nano-vehicles of quercetin by only using a green physical process. 
Micellar casein solution preparation and HHP treatment

The micellar casein powders were reconstituted with 40 ℃ deionized water to produce suspensions with a target concentration of 5% (w/w) and then magnetically stirred at 14 × g for 12 h. After that, the suspensions were centrifuged at 3000 × g for 10 min, which yielded a micellar casein solution with a solid content of 2.50 ± 0.02%. The solution was sub-packed in polyethylene bags and sealed with a vacuum sealer for HHP treatment (FB-110G5, Litu Ultra High Voltage Equipment Co., Ltd, Shanghai, China). The samples were put into the pressure chamber of the equipment and completely immersed in the pressure transmission medium (distilled water). Five pressure treatments were carried out at 100, 200, 300, 400 and 500 MPa for a holding time of 15 min at 25 ℃. The pressure rise rate was 100 MPa/min and the pressure was released instantaneously. The micellar casein solutions treated with HHP were labeled as MC-100, MC-200, MC-300, MC-400 and MC-500, respectively.

Emulsifying activity and emulsifying stability analysis

The emulsifying activity and stability were analyzed according to the method of Han, Wang, Wang, and Tang (2020) with a slight modification. 80 mL of each of the six casein solutions (control sample, MC-100, MC-200, MC-300, MC-400, MC-500) was mixed with 20 mL of soybean oil (oil volume fraction of 0.2). The mixtures were subsequently sheared at 5595 × g for 5 min at 25 ℃ to obtain the coarse emulsions. 50 μL of the coarse emulsion was added into 5 mL of SDS solution (1 g/L) at 0 and 10 min after shearing, respectively. The absorbance of the mixture was recorded at 500 nm. The blank reading was measured in the same way using the SDS solution. The EAI and ESI were calculated by Eq. (1) and Eq. (2), respectively:

EAI (m² g⁻¹) = (2 × 2.303 × A0 × DF) / (10,000 × C × ∅)   (1)
ESI (%) = (A10 / A0) × 100   (2)

where A0 and A10 are the absorbance values at 500 nm at 0 min and 10 min, respectively; C is the volume fraction of oil in the experiment (0.2); ∅ is the concentration of protein (g/mL); and DF is the dilution factor (100).

Turbidity measurement

The turbidity of the samples was measured with a turbidimeter (WGZ-2000, Youke Instrumentation Co., Ltd, Shanghai, China). Prior to the test, all samples were diluted 50 times with deionized water. Approximately 3 mL of the sample was then added to the glass cuvette for turbidity measurement. Deionized water was used as a blank reference.

Determination of dissociation degree of CCP

The content of free calcium in HHP-treated casein solution was determined with a multifunctional pH meter (S220, Mettler Toledo International Trade Co., Ltd, Shanghai, China) and a composite calcium ion selective electrode (perfectION™, Mettler Toledo International Trade Co., Ltd, Shanghai, China). Before measurement, a series of calcium ion standard solutions (10, 100, 1000 ppm) was used for equipment calibration in order. 50 mL of each of the six casein solutions (control sample, MC-100, MC-200, MC-300, MC-400, MC-500) was mixed with 1 mL of calcium ionic strength adjustment solution. Following that, the electrode was immersed in the mixture and the content of free calcium was read from the equipment screen. The content of total calcium in micellar casein was determined by inductively coupled plasma spectroscopy (ICAP 6300, Thermo Fisher Scientific Co., Ltd, Shanghai, China). The dissociation degree of CCP was expressed as the percentage of free calcium content relative to total calcium content.
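As a worked illustration of Eqs. (1) and (2) above, the following Python sketch computes EAI and ESI from the measured absorbances. The absorbance values are hypothetical, the protein concentration is taken from the 2.5% (w/w) solution described above, and the EAI expression assumes the standard Pearce–Kinsella turbidimetric form.

def emulsifying_indices(A0, A10, oil_fraction=0.2, protein_conc=0.025, dilution=100):
    """Return (EAI in m^2/g, ESI in %) from absorbances at 500 nm.
    A0, A10      : absorbance at 0 and 10 min after shearing
    oil_fraction : volume fraction of oil in the emulsion (C)
    protein_conc : protein concentration in g/mL (here ~2.5% solids)
    dilution     : dilution factor of the emulsion in SDS solution (DF)"""
    eai = (2 * 2.303 * A0 * dilution) / (1e4 * oil_fraction * protein_conc)
    esi = (A10 / A0) * 100.0
    return eai, esi

# Hypothetical absorbances for an HHP-treated sample:
eai, esi = emulsifying_indices(A0=0.65, A10=0.48)
print(f"EAI = {eai:.1f} m^2/g, ESI = {esi:.1f} %")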
Determination of nitrogen soluble index (NSI) The Measurement of NSI was based on a previous study with some modifications (Bradford, 1976). 5 mL of micelle casein solution with HHP treatment was centrifuged at 5000 × g for 10 min. The protein content in the supernatant was determined by the Bradford method and the specific steps were as follows: a series of BSA standard solutions with different concentrations (0, 0.02, 0.04, 0.06, 0.08, 0.12, 0.16, 0.20 mg/ mL) was prepared firstly, then a 20 μL of BSA standard solution or sample supernatant was mixed with a 200 μL Coomassie brilliant blue G250 staining solution, after the reaction was maintained at room temperature for 3-5 min, the absorbance at 595 nm was recorded. The protein content of the supernatant was calculated according to BSA standard curve, which was made with protein concentrations as abscissa and absorbance values as ordinate. The NSI was expressed as the percentage of protein content in the supernatant to total protein content of the sample. Solubility measurement The HHP-treated micelle casein solution was lyophilized into a powder, which was then sieved with a stainless-steel sieve (100 μm aperture) and stored in a dryer until analysis. 2 g of powder was added into 100 mL of deionized water and then stirred at 14 × g for 2 h. Subsequently, the obtained suspension was centrifuged at 3000 × g for 10 min to separate the undissolved micelle casein solid, which was dried at 105 • C until its mass remained unchanged. The solubility of micelle casein powder was shown as the content of casein dissolved in deionized water, which was calculated based on the mass of the undissolved micelle casein solids. Preparation of casein nanoemulsion loaded with quercetin Soybean oil was used to prepare oil/water nanoemulsion loaded with quercetin. The dispersed phase was prepared by pure quercetin with a loading of 0.08% (w/w) in soybean oil under mild heating (<5 min, 50 ℃), and magnetically stirring at 14 × g for 1 h to ensure complete dissolution of quercetin. The continuous phase consisted of six casein solutions (control sample, MC-100, MC-200, MC-300, MC-400, MC-500). The dispersed phase was slowly added to the continuous phase, in which the ratio of continuous phase to dispersed phase was 9:1. After uniform stirring, the mixture was sheared at 5595 × g for 5 min and homogenized for three times at 600 bar to form the oil/water nanoemulsion. The final obtained casein nanoemulsions loaded with quercetin were labeled as MCQ-100, MCQ-200, MCQ-300, MCQ-400, MCQ-500, respectively. Characterization of casein nanoemulsion 2.6.1. Particle size, PDI and Zeta-potential The average particle size, PDI and Zeta-potential of the nanoemulsion were determined using a particle size analyzer (Zetasizer ZEN 3700, Malvern Instruments Ltd, Worcestershire, UK). The refractive index of oil droplets and aqueous solution were set as 1.460 and 1.330, respectively. All samples were diluted 1000 times with deionized water before testing. EE of quercetin The EE of quercetin in nanoemulsion was measured following the method described by Tan et al. (2014). The aqueous suspension containing the quercetin nanoemulsion was centrifuged at 5595 × g for 20 min to separate the free quercetin crystals. Then the free quercetin crystal was dissolved in absolute ethanol and the absorbance of the solution was recorded at 373 nm. Quercetin solutions (0-200 μg/mL) were used to generate a standard curve. The encapsulation efficiency was calculated according to Eq. (3). 
EE (%) = (mE / mI) × 100   (3)

where mE is the mass of quercetin in the nanoemulsion and mI is the initial mass of quercetin.

Effects of environmental factors on the stability of casein nanoemulsion

Thermal stability analysis

The nanoemulsion was heated in a boiling water bath (100 ℃) for 20 min and then cooled naturally to room temperature (20 ℃). The sample was diluted 1000 times with distilled water before the analysis. A particle size analyzer (Zetasizer ZEN 3700, Malvern Instruments Ltd, Worcestershire, UK) was used to measure the droplet size, PDI and Zeta-potential of the treated samples.

pH stability analysis

The pH value of the casein nanoemulsion was adjusted to 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0 and 9.0 with 0.1 M HCl or NaOH solution, respectively. After 12 h of equilibration, the samples were diluted 1000 times with the corresponding pH buffer solution. Subsequently, a particle size analyzer (Zetasizer ZEN 3700, Malvern Instruments Ltd, Worcestershire, UK) was used to measure the droplet size, PDI and Zeta-potential of the treated samples.

Ionic strength tolerance analysis

The nanoemulsion was mixed with different concentrations of NaCl buffer to achieve salt concentrations of 100, 200, 300, 400 and 500 mM, respectively. After 12 h of equilibration, the samples were diluted 1000 times with a buffer solution of the corresponding NaCl concentration. Subsequently, a particle size analyzer (Zetasizer ZEN 3700, Malvern Instruments Ltd, Worcestershire, UK) was used to measure the droplet size, PDI and Zeta-potential of the treated samples.

Physical stability analysis

The physical stability was investigated with a stability analyzer (LUMiFuge 111, LUM Instruments Ltd, Berlin, Germany), in which the sample was centrifuged during the measurement in order to accelerate particle movement and phase separation. Parallel near-infrared waves (880 nm) were used as the penetrating medium. The changes in transmitted light intensity as a function of time and position were recorded to characterize the entire process of particle migration. Briefly, 400 µL of nanoemulsion was uniformly injected into a rectangular container (2 × 8 mm). The temperature was set to 25 ℃ and the centrifugal speed was set to 4000 rpm. The characteristic transmittance profiles of the sample were recorded every 10 s, for a total of 360 scans. The instability of the nanoemulsion was analyzed by applying the "perspective integral" and "phase interface layer tracking" functions to the detected data.

Statistical analysis

In this study, all measurements were performed in triplicate, and the results were expressed as mean ± standard deviation. Origin 2019 (OriginLab Corporation, Northampton, MA, USA) and SPSS 25.0 software (SPSS Corporation, Chicago, IL, USA) were used for plotting and data analysis, respectively. Significant difference analysis was carried out with analysis of variance followed by the Duncan test, and a level of p < 0.05 was considered a significant difference between two treatments.

Emulsifying activity and stability

The EAI and ESI are effective indicators for evaluating the emulsifying properties of a protein at the oil-water interface (Xu, Wang, Fu, Huang, & Zhang, 2018). As shown in Fig. 1A, when the pressure increased from 0 to 500 MPa, the EAI of the casein samples increased significantly (p < 0.05) from 2.4 to 5.2 m² g⁻¹, which indicated that HHP could improve the emulsifying activity of the casein.
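To illustrate how Eq. (3) is applied in practice, the following Python sketch converts the absorbance of the free-quercetin fraction into a mass via a linear standard curve and then computes the encapsulation efficiency. All calibration points, volumes, and masses below are hypothetical assumptions, not data from this study.

import numpy as np

# Hypothetical calibration: absorbance at 373 nm vs. quercetin concentration (ug/mL)
conc = np.array([0, 25, 50, 100, 150, 200], dtype=float)
absorbance = np.array([0.00, 0.18, 0.36, 0.71, 1.07, 1.42])
slope, intercept = np.polyfit(conc, absorbance, 1)   # linear standard curve

def encapsulation_efficiency(A_free, V_ethanol_mL, m_initial_ug):
    """EE (%) per Eq. (3), with mE = mI - mass of free (unencapsulated) quercetin."""
    c_free = (A_free - intercept) / slope            # ug/mL from the standard curve
    m_free = c_free * V_ethanol_mL                   # ug of free quercetin crystals
    return (m_initial_ug - m_free) / m_initial_ug * 100.0

print(encapsulation_efficiency(A_free=0.30, V_ethanol_mL=10.0, m_initial_ug=800.0))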
This increase in EAI was mainly due to the partial dissociation of casein micelles under high pressure, allowing more amphiphilic micellar fragments access to the water and oil, in which hydrophilic residues were oriented toward the aqueous phase and lipophilic residues toward the oil phase, so as to reduce the surface tension at the interface. Meanwhile, a decrease in the ESI of HHP-treated casein was also observed. This was attributed to the breakdown of casein micelles induced by HHP treatment, causing the formation of submicelles with smaller size and larger specific surface area. This could give rise to a thinner interfacial film and less steric repulsion, thus reducing the emulsifying stability of casein.

Table 1 displays the changes in turbidity, dissociation degree of CCP, NSI and solubility of micellar casein solution during pressure treatment. (Data are shown as mean ± standard deviation; different lowercase superscript letters in a column indicate a significant difference, p < 0.05.) With the increase of pressure to 500 MPa, the turbidity of the micellar casein solution decreased significantly (p < 0.05) from 106.37 to 23.63 NTU, while the dissociation degree of CCP increased significantly (p < 0.05) from 14% to 28%. CCP, as an important component of micellar casein, plays a crucial part in maintaining the structure of casein micelles (McMahon & Oommen, 2013). The dissociation of CCP caused by HHP treatment could lead to the disintegration of casein micelles, resulting in a reduction of casein particle size, which is manifested in the decrease of turbidity of the micellar casein solution. The NSI of a protein represents its solubility in the solvent. In comparison with the NSI of the control sample, the NSI increased by 21% after 500 MPa treatment, indicating that HHP treatment increased the casein content in the supernatant after centrifugation. In addition, the solubility of casein powder pretreated with HHP was significantly (p < 0.05) improved, and MC-300 powder showed the highest solubility, which was about 2.5 times that of the control sample. These observations were mainly due to the disintegration of casein micelles caused by the dissociation of CCP, which increased the surface area of casein in contact with water molecules, thus enhancing the hydration of casein.

Particle size, PDI and Zeta-potential

The particle size, PDI and Zeta-potential of the six nanoemulsions are listed in Table 1. As the pressure increased, the size of the stabilized emulsion droplets decreased significantly (p < 0.05) from 723 nm to 345 nm, the PDI decreased remarkably (p < 0.05) from 0.54 to 0.32, and the negative value of the Zeta-potential increased from −15.87 mV to −18.10 mV. The particle size distribution of the six nanoemulsions is shown in Fig. 1B. When the pressure reached 200 MPa, the particle size of the nanoemulsion changed from a bimodal distribution to a unimodal distribution. However, although the particle size distribution of MCQ-500 showed a single peak, its distribution became wider. These observations suggested that the nanoemulsions prepared from casein treated with 200-400 MPa were able to form a stable system with smaller particle size and better uniformity. The change in particle size of the nanoemulsion may be related to the decrease of bound calcium content in the casein micelles.
The study reported by Ye (2011) demonstrated that when the casein micelles in milk protein concentrates were dissociated owing to a decrease in bound calcium content, the emulsifying ability was improved by forming a fine emulsion with a smaller droplet size. Other studies had also shown that casein with lower calcium content could form a more stable emulsion with smaller emulsion droplets (Euston & Hirst, 1999;Singh & Ye, 2020;Ye & Singh, 2001). In this study, it has been proved that HHP treatment could increase the dissociation degree of CCP (Table 1), which meant HHP could reduce the bound calcium content in casein micelles. Therefore, the decrease of particle size of nanoemulsion was closely associated with the dissociation of CCP in casein micelles after HHP treatment. EE of quercetin by nanoemulsion The EE of six nanoemulsions on loading quercetin is illustrated in Table 1. The EE of the control sample was only 48.9%, which indicated that the natural casein could be used in the emulsion to load quercetin to a certain extent, but the EE was not satisfactory. However, this kind of efficiency was remarkably improved after HHP treatment, and the EE of MCQ-500 was increased into nearly 80%, which indicated that HHP process could help casein to bind more quercetin into the emulsion droplets. As a nano-scale protein with the hydrophobic core, casein can be potentially used as an ideal delivery carrier for hydrophobic active ingredients after proper processing and regulation (Tang, 2021). Menendez-Aguirre et al. (2014) investigated that the content of vitamin D2 loaded in casein micelles was about five-fold higher after being treated at 600 MPa and 50 ℃. Chevalier-Lucia, Blayo, Gracia-Julia, Picart-Palmade, and Dumay (2011) found that high-pressure homogenization (100-300 MPa) enhanced the binding of α-tocopherol acetate to casein micelles, due to the micellar size decreased significantly as the pressure increased, while the binding rate to α-tocopherol acetate increased. These observations were similar to our results, revealing that HHP pretreatment could induce the partial dissociation of micelles and thus improve the EE of casein on loading hydrophobic active substances. The thermal stability of nanoemulsion The food emulsion system may be affected by thermal treatment such as heat sterilization during processing, so it is necessary to study the changes of the six nanoemulsion systems under thermal treatment. Fig. 2 shows the changes in Zeta-potential, particle size and PDI of six nanoemulsions after treated in a boiling water bath (100 ℃) for 20 min. As can be seen from Fig. 2A, there was no significant change in the Zetapotential of the control sample and MCQ-100 after heat treatment, while MCQ-200 to MCQ-500 showed an increase, suggesting that strong electrostatic repulsion existed between emulsion droplets. It might prevent aggregation between droplets and kept the emulsion stable (Zhou, Zheng, & McClements, 2021). Besides, the particle size of the nanoemulsion decreased slightly after heat treatment, and the PDI of the nanoemulsion was mostly kept in the range of 0.2-0.5 (Fig. 2B), indicating that no flocculation occurred between the droplets. These results all confirmed that the casein nanoemulsion had good thermal stability. This was mainly because casein itself had a high thermal denaturation temperature of about 140 ℃ due to the lack of tertiary structure (Wang et al., 2013;Broyard & Gaucheron, 2015). 
Therefore, when casein was used as a delivery system, it could withstand high temperatures to some extent.

The pH stability of nanoemulsion

Food emulsion systems are usually subjected to different pH environments during food processing, so it is of great significance to investigate the stability of casein nanoemulsions at different pH values. Fig. 3 presents the corresponding particle size, PDI and appearance of the six nanoemulsions. (In Fig. 3B, the samples are shown at pH 2, 3, 4, 5, 6, 7, 8 and 9 from left to right; different uppercase letters indicate significant differences (p < 0.05) among pH values under the same pressure, and different lowercase letters indicate significant differences (p < 0.05) among pressures at the same pH.) As shown in Fig. 3A, the particle size of the control sample had an obvious downward trend with the increase of pH value. Conversely, there was no remarkable change in the particle size of the nanoemulsions prepared from HHP-treated casein, showing that HHP was conducive to improving the pH stability of the casein nanoemulsion. According to Fig. 3B, the emulsion stratified when the pH value was 4 or 5, and flocculated when the pH was 3 or 6. This was because the pH was close to the isoelectric point of casein (pH 4.6), resulting in the weakening of electrostatic repulsion between the emulsion droplets. Furthermore, the color of the nanoemulsion darkened when the pH was adjusted to 9.0, which was related to the rapid dissolution and degradation of free quercetin. It has been reported that the phenolic hydroxyl groups of quercetin are deprotonated and negatively charged under strongly alkaline conditions, which can greatly improve the water solubility of quercetin (Peng, Zou, Zhou, Liu, Liu, & McClements, 2019). In addition, compared with the color of the control sample at pH 9.0, the color of MCQ-300 to MCQ-500 became lighter, indicating that more quercetin was embedded in the casein micelles after HHP treatment, which was consistent with the EE measured earlier (Table 1).

The ionic strength stability of nanoemulsion

Studying the stability of emulsions at different ionic strengths is an important basis for their further application in different food matrices. Fig. 4 shows the effects of different ionic strengths on the Zeta-potential, particle size, PDI and appearance of the six nanoemulsions. The absolute value of the Zeta-potential decreased with increasing ionic strength (Fig. 4A), which was attributed to the electrostatic shielding effect caused by the addition of NaCl buffer. According to Fig. 4B, the particle size of the control sample and MCQ-100 increased as the ionic strength increased. This was also due to the added salt ions shielding the surface charge of the emulsion droplets, resulting in a decrease in electrostatic repulsion between droplets and causing droplet aggregation. Nevertheless, for MCQ-300 to MCQ-500, even though a strong electrostatic shielding effect existed at an ionic strength of 500 mM, the particle size showed a decrease instead of an increase. This was mainly because the particle-size-reduction effect resulting from the dissociation of CCP was greater than the particle-size-increase effect resulting from electrostatic shielding. In this study, it has been shown that the dissociation of CCP in casein could be caused by HHP treatment.
Meanwhile, the study of Famelart, Le Graet, and Raulot (1999) showed that an increase in the NaCl concentration of a casein micelle suspension could also lead to the further dissolution of calcium and phosphorus from the micelles. Moreover, when the NaCl concentration increased from 100 to 500 mM, no significant change in the particle size of MCQ-300 to MCQ-500 was observed, and the PDI value was below 0.35, indicating that the nanoemulsion could remain stable under different ionic strengths. These observations were consistent with the appearance of the nanoemulsions shown in Fig. 4C, all of which could maintain a uniform and stable opaque state. Physical stability of nanoemulsion With a view to food industrial applications, it is necessary to evaluate the physical stability of the nanoemulsion, which largely affects the shelf life of the product. In this study, the dynamic changes in the light transmittance of the nanoemulsion during centrifugation were continuously recorded. The physical stability was quantified and evaluated by the integral transmittance and the instability index, both related to centrifugation time. As exhibited in Fig. 5A, the changes in transmittance detected at any point of the sample cell were small in the initial stage of measurement. This revealed that the nanoemulsion could maintain strong stability at first. Nevertheless, as the centrifugation time increased, the transmittance at the bottom of the sample cell gradually increased. This was due to the stratification of the nanoemulsion, wherein the denser water phase moved to the bottom and the less dense oil phase moved to the top. In addition, it could be further found that the degree of change in the transmittance of MCQ-300 to MCQ-500 was smaller than that of the control sample, MCQ-100 and MCQ-200. This indicated that nanoemulsions prepared from casein treated with 300 MPa and above had better physical stability. This observation was consistent with the results shown in Fig. 5B and 5C, in which MCQ-300 to MCQ-500 had a lower integral transmittance and instability index. Meanwhile, the stratification degree of MCQ-300 to MCQ-500 was significantly reduced (Fig. 5C), which could further prove that HHP treatment (≥300 MPa) can significantly enhance the stability of the casein nanoemulsion. Conclusion In this study, HHP was demonstrated to have a significant impact on the micellar structure and emulsifying ability of natural casein. It could effectively disintegrate the micellar structures of natural casein by dissociating colloidal calcium phosphate, which largely improved the emulsifying activity and encapsulation efficiency. Treatments at 300 and 400 MPa caused the formation of casein nanoparticles with uniform particle size distributions, while 500 MPa led to the nanoemulsion loading the most quercetin and subsequently showing the best physical, ionic and thermal stability. This is conducive to improving the shelf life of the casein nanoemulsion and expanding the application prospects of casein-based products. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2022-06-05T15:13:59.450Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "c290dd27a76b08c2a7e777213571942ab5e118df", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.fochx.2022.100356", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "03e4da9482fa6bbffc7f925ed017eee7d05c2ebf", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
261412599
pes2o/s2orc
v3-fos-license
Public Discourse of Genetically Modified Organisms in China: An Investigation of Commenting and Reposting Behaviors on Social Media This study examined the factors affecting user engagement in the genetically modified organism (GMO) discourse on one of the most popular social media platforms in China. A content-analysis study was conducted with a sample of the most commented GMO posts on social media and over 7000 comments. Firstly, posts from well-known and government accounts facilitated the online discourse of GMOs. Secondly, the nature of the event and the features of GMO messages influenced how users engaged with the online discussion. Thirdly, among the commenters, female users were more likely to agree with the original posts whereas male users were more likely to repost the message. In conclusion, social media users held negative attitudes toward GMOs as they tended to repost more and agree more when GMO risks were mentioned. Source and message features, as well as commenters’ characteristics, were found to have a significant impact on user engagement. Social media continued to serve as an important public space to facilitate public debate on a controversial scientific topic. Introduction In the past two decades, the debate over genetically modification (GM) technology has never attenuated on traditional or social media (Wen and Wei, 2018;Yu and Xu, 2016).Genetically modified organisms (GMOs) are defined as living organisms whose DNA has been modified by genetic engineering techniques (Nelson, 2001).GMO supporters suggested that GM technology can help address global food security issues (Tao and Shudong, 2003;Zhang et al., 2016); while opposers often argued that this technology can be hazardous to health, environment, or even natural species (Cui and Shoemaker, 2018;Yu and Xu, 2016;Zhang et al., 2016). Researchers found that over 69% of the people looked for GMO knowledge online, which were often unverified or inaccurate (Cui and Shoemaker, 2018;Jiang and Fang, 2019).As a result, the understanding of GMOs among Chinese consumers remains superficial and inadequate (Cui and Shoemaker, 2018;Jin, 2017).Misinformation has also been found in the heated public discussion on social media (Chow, 2019).Scholars argued that the uncertainty surrounding the technology, inadequate official information, and a lack of trust in the government or scientists has made GM technology less favored by the public (Jiang and Fang, 2019). The above concerns help construct a natural call for more scholarly attention to how GMOs were discussed online.Therefore, the current study focused on addressing the public discussion of this topic by examining how the characteristics of source, message, and commenters contribute to user engagement in the GMO discourse on social media.The aim of this study is to deepen the understanding of how common users of social media responded to this hotly debated issue in China, with the hope that the study can help enrich the literature on the role of social media as a networked medium for the information relay of a controversial scientific topic. GM technology Food security is always a national concern in China because the country owns one-fifth of the world's population but only 8% of the world's arable land (Huang and Yang, 2017;Larson, 2017).GM technology is regarded as one of the many possible solutions to address this concern given that it may help improve crop productivity (Larson, 2017;Tao and Shudong, 2003). 
Therefore, China has become one of the largest growers of GM crops in the world (Zhao and Ho, 2005).Although the societal acceptance of GMO products remained low (Cui and Shoemaker, 2018;Yu and Xu, 2016) in China, the advantages of GM crops in greater productivity, improved nutrition values, reduction in the need for pesticides and herbicides, as well as increased economic benefits for farmers have been well documented by many studies (Bennett et al., 2013;Qaim, 2010). Globally, the acceptance of GMOs and GM crops also varied by country.Consumers showed both skepticism and enthusiasm toward GM technology in France (Ricroch and Jésus, 2009), Turkey (Veltri and Suerdem, 2013), Polland (Rzymski and Królczyk, 2016), as well as in Japan, Norway, and Taiwan (Chern et al., 2002).More importantly, the public discussion of GM technology has gone far beyond the scope of food safety.This topic has been often associated with national security, human rights, regulations of agricultural industries, ecological security, and environmental pollution (Cui and Shoemaker, 2018;Rzymski and Królczyk, 2016;Yu and Xu, 2016). China started to issue GMO Safety Certificates to domestically developed GM tomatoes and cotton in 1997, followed by petunia in 1999, sweet pepper and chili pepper in 1999, papaya in 2006, rice in 2009, and corn in 2009.Among the seven certified crops, cotton was the only one that has been widely cultivated in the country (Zhang, 2014).This slow speed in commercializing GM crops is largely due to the Chinese public's doubts about the alleged benefits of GMOs (Jin, 2017;Xu et al., 2018;Zhao and Ho, 2005). In the most recent decade, scholars have paid more attention to the public perceptions of GMOs (Cui and Shoemaker, 2018;Du and Rachul, 2012;Xu et al., 2018;Yang et al., 2014;Yu and Xu, 2016;Zhao and Ho, 2005).For example, Cui and Shoemaker (2018) found that Chinese consumers who oppose GMOs claimed GMO food could be unhealthy and hazardous; the technology itself could pollute the DNA of natural species, and the facts that many countries in the world remained cautious about GMOs served as justifications for the controversies of the technologies. Beyond these public concerns, Yu and Xu (2016) discovered that consumers on social media criticized the lack of choices for consuming GM foods and the paucity of trustworthy information for them to understand this technology, which tied a technology issue to the violation of human rights.Additionally, believers of conspiracy theories think that GM food was used as a bioweapon to weaken the immune system of the Chinese people (Yu and Xu, 2016) or that Chinese consumers were used as subjects to test the safety of GM crops (Yang et al., 2014).On the other hand, GMO supporters in China acknowledged GMOs' advantages in solving the issue of food shortage and improving food nutrition (Cui and Shoemaker, 2018;Yu and Xu, 2016). In more recent research, Wang et al. (2021) analyzed 257 online GM cartoons and discovered that Chinese social media presented a pessimistic view regarding GM technology through these visuals.The messages conveyed in these cartoons presented the scary and risky images of GM technology and focused on many scientific and political conspiracies surrounding this issue (Wang et al., 2021). 
Social media and user engagement As one of the most popular social media platforms, Sina Weibo was first launched in 2009. This microblogging platform now has 566 million monthly active users (Statista, 2021). It is ranked the 11th most widely used social media platform worldwide according to Statista (2021) and offers functions very comparable to Twitter: users can post short messages, pictures, and videos. Users can also receive likes or comments from other users, and other users can repost a message to their own page to spread a post. These types of behaviors are defined as user engagement, which are tangible ways that facilitate user interactions with social media content (Oh et al., 2018). There are three discrete types of behaviors that are commonly studied by social media scholars, including like, comment, and share (Cho et al., 2014; Kim and Yang, 2017; Srivastava et al., 2018). Our study focused on researching both the commenting and reposting behaviors (i.e. sharing), which are considered more in-depth engagement with social media content compared with clicking the "like" button (Cho et al., 2014; Srivastava et al., 2018). Theoretically, commenting on a social media message requires more in-depth cognitive processing of the topic, which can contribute to the diverse perspectives on an issue (Kim and Yang, 2017; Srivastava et al., 2018). In addition, reposting behaviors, in which a user chooses to share a message with his or her own network on social media, help relay information and knowledge and can facilitate the spread of the knowledge and opinions surrounding GMOs (Xu et al., 2018). Furthermore, we investigated the likelihood that users would agree with the original post depending on various factors associated with the characteristics of source, message, and commenters. Xu et al. (2018) discovered that both message and source attributes of social media posts can impact user engagement on social media, such as the likelihood of liking, commenting, and reposting. Following their approach, this study takes these two sets of attributes into consideration for both commenting and reposting, with the hope of adding to the current literature on user engagement research on social media. We also examined additional factors that may have an impact on user engagement, such as event type, the presence of different GMO stakeholders, and commenters' characteristics. Additionally, this is one of the first attempts to explore commenters' agreement or disagreement with the original post. Studying agreement in comments goes beyond the conventional types of user behaviors already researched by social media scholars (Cho et al., 2014; Kim and Yang, 2017; Srivastava et al., 2018) and can provide new insights into understanding users' reactions to the GMO discourse on social media.
Two key GMO events The debate of GMOs on social media is often motivated by significant events surrounding the issue (Xu et al., 2018).Therefore, this study chose two key events that happened in 2014 and 2016 which had generated heated discussions of GMOs in China.The first motivating event was the release of a documentary film about GMOs produced by Yongyuan Cui, a well-known TV host, and reporter (Lü and Chen, 2016;Zhang, 2014).The documentary generated a lot of debate online regarding the safety of GMOs, the civil rights to be informed and discussed about them, and the potential risks of consuming GM foods (Yu and Xu, 2016).We considered this event as a bottom-up discourse, which was driven by an independent activist instead of the political authorities. The second significant event was the government's release of a 2016 policy statement that asserted that the government would be very careful in promoting genetic modification technologies in China.It offered a clear signal that the Chinese government was determined to encourage the commercialization of GM technologies (Xiao, 2016), but would be cautious in commercializing the GMO crops and foods, considering the public concerns about the safety of this technology. That event was a top-down initiative driven by the political authorities and stirred another wave of GMO debate online. Our research analyzed the online discourse of GMOs around the times when these two events happened.One of our goals was to understand how different types of GMO events can motivate the public to discuss about GMOs.Additionally, we focused on examining a variety of variables that can affect how the users engaged in the online discourse of GMOs, including source characteristics, message characteristics, presence of GMO stakeholders, and commenters' characteristics. Source characteristics A message source is defined as "a person, a group, or even a label that has a favorable or unfavorable connotation for the message recipient (Hass, 1981: 142)."Because a large portion of knowledge that people obtain are from other people, the sources of information can be vital to the communication process (Hass, 1981).Based on the cognitive response model, there are three sets of characteristics that can influence the persuasive effect of a message, including credibility, attractiveness, and power (Hass, 1981).These source characteristics have been central to several communication theories.For example, the elaboration likelihood model (ELM) indicates that source attractiveness can influence attitude in persuasion through both central and peripheral elaboration routes (Booth-Butterfield and Welbourne, 2002).When considering the context of online information transmission, the source can also be perceived as having multiple layers where one message can have a number of sources (e.g., a tweet being re-tweeted for many times) (Sundar, 2008).Therefore, online users may judge the credibility of a message by taking source cues, such as who wrote the tweet originally and who retweeted it, into consideration (Sundar, 2008). Scholars have studied the importance of sources in communicating about GM technology.Irani et al. (2001) found the Food and Drug Administration (FDA) and the United States Department of Agriculture (USDA) were the most trusted sources by consumers to communicate information about GM food when compared to industry sources.In a more recent study, Xu et al. 
(2018) discovered that source attributes were significantly associated with user engagement on Weibo.Users were more likely to repost or comment on a GMO post if the message was originally from an organizational account than from an individual account.Messages from verified accounts also received more likes from users.The authors concluded that some source characteristics can influence user engagement in the discourse of GMOs on social media (Xu et al., 2018).When examining sources, the current study focuses on examining the type of accounts that posed the original GMO post and its impact on user engagement. Message characteristics The investigation of message characteristics of GMOs in this study was based on the framing theory which posits that certain aspects of an issue can be highlighted or emphasized in communicative messages (Entman, 1993).However, in more recent literature, Cacciatore et al. (2016) argued that scholars should "abandon the general term 'framing' and replace it with more precise terminologies (p.15)."Therefore, we adopted the term "message characteristics" in the current project to study the emphasis of GMO posts. In the context of GMOs, several message characteristics were found to be influential and can affect how people respond to social media posts (Margolis, 1996;Xu et al., 2018).For example, people choose to adopt different risk frames and/or benefit frames in their online discourses about GMOs (Margolis, 1996).When they adopt either the risk-only or benefit-only frame, they are taking a one-sided view towards this issue.When both frames are used, people see both the risks and opportunities associated with GMOs and make trade-offs in their discussions (Margolis, 1996).Additionally, Xu et al. (2018) discovered that the risk and benefit frames were found to have significant effects on individuals' social media engagement, although the risk frame and the benefit frame varied in their respective influences on different types of engagement (i.e.sharing, comments, and likes).We, therefore, included the risk-versus-benefit message characteristic to be examined in this study. When discussing the risks and benefits of GMOs, individuals could adopt different types of evidence to support their arguments.In persuasion literature, one commonly investigated evidence is exemplar versus statistics (Limon and Kazoleas, 2004).Personal anecdotes, examples, stories, testimonies, and analogies are often used as narrative materials of exemplars, providing qualitative support for communicators' statements (Brosius and Bathelt, 1994;Curtis et al., 2004), whereas numerical items, in the form a summary record, provide quantitative evidence for persuasion (Allen and Preiss, 1997;Zillman, 1999;Zillman and Brosius, 2000).Although the studies on the effect of exemplar and statistical evidence have yielded mixed findings in terms of attitude change, they seem to suggest that people tend to be less critical about the exemplar evidence but are more likely to engage in counterarguing against the statistical evidence ( Limon and Kazoleas, 2004).Therefore, whether GMO posts use exemplar and/or statistical evidence is included as the second set of message characteristics to be examined in this study. 
Stakeholders in the GMO discourse Stakeholders refer to parties that represent the interest of a public or private sector in a particular issue. GMO stakeholders can include biotechnology companies, major food manufacturers, large food distributors, government departments and regulatory agencies, scientists and their institutions, farmers' unions, environmental and consumer protection groups, as well as other non-governmental organizations (Aerni and Bernauer, 2006; Marris, 2001). Yu and Xu (2016) discovered that social media users tended to believe that the safety of GMO food should be justified or protected by stakeholders with authorized expertise, such as scientists or the government, rather than by food or agricultural companies. In this project, we focused on examining whether the mention of different stakeholders in GMO posts can have an impact on user engagement. Commenters' characteristics Commenters are users who contributed to the discourse of a topic through commenting on the initial social media posts. There has been increasing scholarly attention on comments (Hellmueller et al., 2020). Onuzulike (2021) examined comments on YouTube and discovered how commenters vented their feelings toward the Nigerian government's shortcomings through commenting on videos. In a more related context, Seo et al. (2015) suggested that negative comments on Facebook can induce negative responses toward food safety information. Hilverda et al. (2018) argued that positive comments and likes signaled cues of social proof by indicating users' agreement with social media posts. When comments were perceived as useful, consumers were more likely to find additional information about organic food (Hilverda et al., 2018). Different from previous investigations that have only researched source or message effects on social media, this study also examined commenters' characteristics disclosed in their profiles and social media messages, such as self-reported biological sex, number of followers, number of followees, posting frequency, and indirect exposure (i.e. whether a commenter was exposed to the original GMO post via another user's repost). The goal is to explore how this set of variables contributes to user engagement in the GM discourse on Weibo. Research aims and questions The current study took an empirical approach and investigated the debate of GMOs on the most popular social media platform in China. It examined both how the GMO messages manifested on social media and how these messages helped generate different types of comments and engage users in the GMO discourse. By examining these important factors, our study offers a comprehensive understanding of the public discourse of GMOs on social media. We proposed three research questions to satisfy our research goals, including: RQ1a-c: How do the a) source characteristics, b) event type, and c) presence of different stakeholders influence the likelihood of a GMO post being commented on? RQ2a-d: How do the a) source characteristics, b) event type, c) message characteristics, and d) commenters' characteristics influence commenters' likelihood of agreement with the original post? RQ3a-d: How do the a) source characteristics, b) event type, c) message characteristics, and d) commenters' characteristics influence commenters' likelihood of reposting a GMO post? Method Sampling frame This study implemented a content analysis that systematically examined social media messages. The two key events discussed above helped determine the sampling frame of this study. The GMO documentary was released on 1 March 2014, so posts about GMOs on Weibo were sampled from 1 March 2014 to 31 March 2014. The policy-change announcement was made on 27 January 2016; therefore, we sampled the Weibo posts published from 27 January 2016 to 26 February 2016. Sample Given that the Sina Weibo API closed its search interface in 2012, the team needed to use a self-developed Python web crawler to collect relevant posts, replies, and comments from Sina Weibo. The program was based on Python packages including Requests and BeautifulSoup for acquiring Weibo HTML pages and extracting data. We retrieved all publicly available posts containing the term "genetically modified" (转基因 in Chinese) covering the aforementioned two months. This yielded a total of 18,657 GMO posts in 2014 and 5867 posts in 2016. Those posts were then ranked based on the number of received comments. With the goal of studying comments to GMO posts, the final sample included the 30 most frequently commented posts in each month.
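The crawler itself is not reproduced in the article; the sketch below only illustrates the kind of Requests-plus-BeautifulSoup pipeline described above. The search URL, query parameters, and CSS selectors are hypothetical placeholders (Weibo's real markup, login requirements, and anti-scraping measures are not documented here), so this should be read as a minimal sketch rather than the authors' actual program.

```python
# Minimal sketch of a keyword-based Weibo crawler in the spirit of the one described
# above (Requests for HTTP, BeautifulSoup for HTML parsing). The search endpoint and
# selectors are illustrative placeholders, not Weibo's real page structure.
import time
import requests
from bs4 import BeautifulSoup

KEYWORD = "转基因"                          # "genetically modified"
SEARCH_URL = "https://s.weibo.com/weibo"    # hypothetical search endpoint

def fetch_page(keyword: str, page: int) -> str:
    """Download one page of search results as raw HTML."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": keyword, "page": page},
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

def parse_posts(html: str) -> list:
    """Extract post text and comment counts from one result page."""
    soup = BeautifulSoup(html, "html.parser")
    posts = []
    for card in soup.select("div.card-wrap"):          # placeholder selector
        text_node = card.select_one("p.txt")            # placeholder selector
        count_node = card.select_one("a.comment-count")  # placeholder selector
        if text_node is None:
            continue
        count_text = count_node.get_text(strip=True) if count_node else "0"
        posts.append({
            "text": text_node.get_text(strip=True),
            "comments": int(count_text) if count_text.isdigit() else 0,
        })
    return posts

def crawl(keyword: str = KEYWORD, pages: int = 50) -> list:
    """Collect posts over several result pages and rank them by comment count."""
    all_posts = []
    for page in range(1, pages + 1):
        all_posts.extend(parse_posts(fetch_page(keyword, page)))
        time.sleep(1)  # simple politeness delay between requests
    # Ranking by received comments mirrors the sampling step described above.
    return sorted(all_posts, key=lambda p: p["comments"], reverse=True)
```

A production crawler would also need authentication cookies, pagination limits, and error handling for Weibo's dynamically rendered pages, none of which are detailed in the paper.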
After filtering out irrelevant comments, a total of 7663 comments were retained: 5409 in 2014 and 2254 in 2016. Our sampling strategy followed a similar approach to that used by Wang and Song (2020). For RQ1a-c, the unit of analysis was each original GMO post. For RQ2a-d, the unit of analysis was each comment received by the GMO posts. For RQ3a-d, the unit of analysis was each repost received by the GMO posts. The variable source characteristics measured the nature of the account where a GMO post came from: 1 = expert; 2 = media/organization; 3 = celebrity/online influencer; 4 = ordinary user. This variable was later re-coded into 1 = ordinary user and 0 = other for the analyses relating to RQ2a-b and RQ3a-b. Initially, we included scientists/experts in our coding scheme, but it turned out that none of the top 30 most commented posts for 2014 and 2016 were authored by an expert/scientist. Therefore, we removed that category from the coding. Celebrity/online influencer accounts were coded when the Weibo accounts were officially verified by Weibo and belonged to well-known reporters, idols, writers, singers, and online influencers. Ordinary users were coded when the accounts were not verified by Weibo. Organization accounts included media organizations, NGOs, government, and business organizations. The variable event type measured the nature of the motivating event (0 = 2014 documentary, 1 = 2016 policy change). We also collected data on the commenters' characteristics. The data for the following variables were extracted using the Python program we developed: (1) gender of the commenter (0 = male, 1 = female); (2) the number of Weibo accounts a commenter followed (i.e. the commenter as a follower); (3) the number of followees each commenter had; (4) the total posts ever generated by the commenter on Weibo (i.e. posting frequency); and (5) indirect exposure. The variable indirect exposure was measured by whether a sign of "//@" was present. When a "//@" is present, it signals that a commenter was exposed to the original GMO post via another user's repost (0 = commenter viewed the original post, 1 = commenter viewed a repost). These variables are public data that can be directly retrieved from Sina Weibo and were coded automatically. Coding of dependent variables The number of comments each GMO post received was extracted using the Python program we developed. The agreement of each comment with the original post was manually coded as 0 = disagreement, 1 = unclear, or 2 = agreement. The reposting behavior measured whether a commenter reposted a GMO post (no = 0, yes = 1). Inter-coder reliability and data analysis The coding was conducted by two coders who were fluent in both Chinese and English. Before the coding started, the two coders were carefully trained and coded a randomly selected sample (30% of the entire sample) independently to establish the inter-coder reliabilities. Using Krippendorff's alpha, the reliabilities for these variables ranged from 0.82 to 1.0 (see Table 1). The data were analyzed using R. We performed the chi-square tests using the chisq.test() function and fit the logistic regression models using the glm() function.
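The automated part of this coding (gender, follower and followee counts, posting frequency, and the "//@" indirect-exposure flag) is simple enough to illustrate. The record keys below are assumed field names for the scraped data, not the paper's actual schema; only the "//@" convention and the 0/1 coding follow the description above.

```python
# Sketch of the automated coding of commenter-level variables. The input dictionary
# stands in for one scraped comment record; its keys are assumed names.

def code_commenter(record: dict) -> dict:
    """Derive the automatically coded variables for one comment record."""
    return {
        # 0 = male, 1 = female, following the coding scheme described above
        "gender": 1 if record.get("gender") == "f" else 0,
        "followees": int(record.get("followees", 0)),        # accounts this commenter follows
        "followers": int(record.get("followers", 0)),         # accounts following this commenter
        "posting_frequency": int(record.get("total_posts", 0)),
        # 1 = the commenter reached the GMO post through someone else's repost,
        # signalled by the "//@" marker in the comment text
        "indirect_exposure": 1 if "//@" in record.get("text", "") else 0,
    }

example = {
    "gender": "f",
    "followees": 820,
    "followers": 145,
    "total_posts": 3120,
    "text": "转发一下 //@某用户: 转基因食品的安全性仍有争议",
}
print(code_commenter(example))
# {'gender': 1, 'followees': 820, 'followers': 145,
#  'posting_frequency': 3120, 'indirect_exposure': 1}
```

Manually coded variables, such as the level of agreement with the original post, are deliberately left out of this sketch, consistent with the two-coder procedure described above.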
Results RQ1a-c asked whether source characteristics, event type, or the presence of different stakeholders can have an impact on the number of comments a GMO post received. Chi-square tests were used to answer RQ1a-c (see Table 2). We discovered that users were more likely to comment on posts from verified Weibo accounts, such as those of celebrities and online influencers, in 2014. For example, one post from Liang Huan (梁欢), a famous screenwriter, posted on 3 March 2014, questioned the scientific quality of Yongyuan Cui's GMO documentary. This post alone received 601 reposts, 439 comments, and 337 likes. The event type also contributed to the varied patterns we found pertaining to the number of comments received. The release of the GMO documentary made by the GMO activist Yongyuan Cui in 2014 stirred waves of comments on the topic, and the majority of the comments were made on posts from celebrity/online influencers' accounts. The policy change regarding GMO research and commercialization in 2016 generated more comments on posts from organizations (49.6%), which was significantly higher than in 2014 (3.1%) (see Table 2). The number of comments received (Table 3) also varied by the event type and the different types of GMO stakeholders mentioned in the GMO posts [χ²(3, n = 7663) = 6814.35, p < .001]. [Table 1, definitions of manually coded variables: Ordinary users: the GMO post comes from an account that is not verified by Weibo. Celebrities/online influencers: the GMO post comes from an account that is verified by Weibo with a Red V symbol; these accounts are held by celebrities such as well-known singers, writers, TV hosts, online influencers, or actors. Organizations: the GMO post comes from an account that is verified by Weibo with a Blue V symbol; these accounts are held by organizations such as media outlets, companies, or NGOs. Level of agreement in comments: whether the commenter indicated disagreement, an unclear attitude, or agreement with the GMO post. Note: the information under the "definitions" column was used to guide coders in completing the coding. The reliabilities for the categories were: GMO benefits (α = 0.98); GMO risks (α = 0.93); exemplar (α = 1.00); statistics (α = 1.00); ordinary users (α = 1.00); celebrities (α = 1.00); organizations (α = 1.00); level of agreement in comments (α = 0.82). GMOs: genetically modified organisms.] Because the 2014 event was initiated by a GMO activist, 88% of the comments were made when different GMO activists were mentioned, significantly higher than in 2016 (0.2%). A different pattern was found in 2016, such that policymakers as the key stakeholders attracted more comments in 2016 (83.0%) than in 2014 (0.6%). This may be due to the fact that the driving event in 2016 was a government announcement. In general, the posts about GMO consumers attracted much fewer comments regardless of the event type. RQ2a-d asked whether source characteristics, event type, message characteristics, and commenters' characteristics would have an impact on commenters' likelihood of agreement with the original post. To answer RQ2a-d, ordered logistic regressions were conducted with commenters' agreement with the original post as the dependent variable. The predictors were entered into the model in the following order: risks, benefits, exemplars, statistics, ordinary users, and event type as the first block; gender of the commenters, number of followers, number of followees, posting frequency, and indirect exposure as the second block; and terms representing the interactions between message characteristics and source characteristics as the third block. The results are presented in Table 4.
Message characteristics When including the message characteristics as predictors, the posts mentioning risks had 3.40 times greater odds of receiving agreeable comments than those that did not mention risks (p < .001). When GMO benefits appeared, comments that agreed with the original post were less likely to be seen (OR = 0.10, p < .001). The presentation of exemplars (OR = 4.48, p < .001) or statistics (OR = 1.72, p < .001) both significantly increased the likelihood of generating agreement in the comments. Source characteristics When the original post was from an ordinary user, agreeable comments were more likely to appear (OR = 1.64, p < .001) than when the post was from a non-ordinary user. In other words, commenters tended to side with posts written by ordinary, unverified accounts. Event type Agreeable comments about GMOs were much more likely to appear in response to GMO posts in 2014 than in 2016 (OR = 3.38, p < .001). Commenters tended to agree more with the original post in the GMO documentary event. They were less likely to agree with the original post when the trigger event was the policy change announcement, even though both of the events were seen as anti-GMO signals. In summary, mentions of GMO risks, the use of examples, or the use of statistics increased the likelihood of generating agreeable comments. Mentions of GMO benefits, however, decreased the likelihood of agreeable comments appearing. The 2014 event was more likely to receive agreeable comments than the 2016 event. GMO posts from ordinary users were more likely to be supported by commenters. Commenters' characteristics When considering the receivers' characteristics as predictors, the analyses found that female users were significantly more likely to make agreeable comments (OR = 1.79, p < .001). When users accessed the GMO posts via reposting (i.e. indirect exposure), the likelihood of an agreeable comment appearing increased (OR = 1.62, p < .001). Interaction effects of source and message characteristics There was a significant interaction effect between source characteristics and the presence of GMO benefits on agreeable comments (OR = 14.22, p < .001). When the original posts presented GMO benefits and were from ordinary users, agreeable comments were significantly more likely to appear. In addition, there was a significant interaction effect between whether the post was from an ordinary user and whether the post mentioned exemplars on agreeable comments (OR = 0.13, p < .001). Specifically, when the GMO posts mentioned exemplars and came from ordinary users, agreeable comments were less likely to appear. RQ3a-d asked whether message characteristics, source characteristics, event type, and commenters' characteristics would have an impact on commenters' likelihood of reposting the GMO post. To answer RQ3a-d, binomial logistic regressions were conducted with the reposting behavior as the dependent variable (see Table 5). The predictors were entered into the model in the following order: risks, benefits, exemplars, statistics, ordinary users, and event type as the first block; the commenter's gender, number of followers, number of followees, posting frequency, and indirect exposure as the second block; and the interactions between the message characteristics and whether the message originated from ordinary users as the third block.
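As a reading aid for the odds ratios reported next, the display below spells out the generic binomial logistic specification that a glm() fit of this kind estimates. The predictor names mirror the blocks just listed, but the equation is a standard textbook form rather than one reproduced from the paper.

```latex
\[
\log\!\left(\frac{P(\text{repost}=1)}{1-P(\text{repost}=1)}\right)
= \beta_0 + \beta_1\,\mathrm{risk} + \beta_2\,\mathrm{benefit} + \beta_3\,\mathrm{exemplar}
+ \beta_4\,\mathrm{statistics} + \beta_5\,\mathrm{ordinary} + \beta_6\,\mathrm{event}_{2014}
+ \boldsymbol{\gamma}^{\top}\mathbf{x}_{\mathrm{commenter}} + \text{interaction terms}
\]
```

Here x_commenter collects the commenter variables (gender, followers, followees, posting frequency, indirect exposure), and each reported odds ratio is OR_k = exp(beta_k). For instance, the OR of 2.59 for risk mentions reported below means that, holding the other terms constant, the odds of a repost are 2.59 times higher when the original post mentions GMO risks.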
Message characteristics GMO posts were more likely to be reposted when they mentioned risks (OR = 2.59, p < .001) or statistics (OR = 1.70, p < .001). They were less likely to be reposted when GMO benefits were mentioned (OR = 0.37, p < .05) or exemplars were included in the message (OR = 0.63, p < .05). Event type and source characteristics The posts in 2014 were less likely to be reposted by commenters than those in 2016 (OR = 0.51, p < .001). The posts from ordinary users were less likely to be reposted than those from other users (OR = 0.12, p < .001). Commenters' characteristics Female commenters were less likely to repost than male commenters (OR = 0.77, p < .001). The commenters who followed more accounts were more likely to repost a GMO post (OR = 1.58, p < .001). Those who saw a GMO post via reposting were 7.84 times more likely to repost it (p < .001). Interaction effect of source and message characteristics There was a significant interaction effect of ordinary users and the mention of risks on the likelihood of reposting (OR = 18.21, p < .001). When the posts were from ordinary users, GMO posts presenting risks were much more likely to be reposted. There was also a significant interaction effect between ordinary users and the use of statistics on the likelihood of reposting (OR = 51.91, p < .001). When the posts were from ordinary users presenting statistics regarding GMOs, they were significantly more likely to be reposted. Discussions and conclusions This study extends previous research on the discourse of GMOs on social media with some important findings. We have provided new evidence to reveal (1) why users comment on GMO posts on social media, (2) why users agree with some GMO posts more than others, and (3) why people choose to pass along a GMO message. These findings provide important implications for our understanding of how the public engages with an important scientific topic in cyberspace and of the factors that can promote continuous online engagement. Why do users comment? First, our study found that the GMO online debate can be event-driven and motivated by different key stakeholders in a specific GMO event. We discovered that users were more likely to comment on the GMO posts from celebrities and online influencers in 2014, since the triggering event was the release of a GMO documentary produced by a TV host. In 2016, however, the comments on the policy announcement event were made frequently on posts from organization accounts, including mass media and non-profit organizations. Second, users were found to be more actively engaged in commenting when the GMO posts mentioned key stakeholders, such as famous GMO activists or policymakers, in the two years that we examined. In 2014, a GMO documentary was released by the well-known TV host Yongyuan Cui. Therefore, when his name appeared, users were more likely to make comments on GMO posts. In 2016, the policy change on GMO commercialization stirred another round of discussion on GMO issues. This time, the appearance of policymakers in the posts generated more comments. This suggested that users were motivated to engage in the discussion when key events happened. The number of comments received varied when the key players in the event changed.
These findings suggest that social media users actively chose the sources of information when deciding whether to engage in the online debate.Both GMO activists like Yongyuan Cui or the Chinese government or governmental officials can successfully motivate the discussion of GMOs on social media.Users chose to follow and engage with different account holders when the nature of the event altered.These findings also suggest that if the goal is to enhance the public's discussion of GMOs, celebrities, online influencers, and the government can be effective facilitators. Why do commenters agree? Our study investigated the agreement of the comments with the original GMO posts on social media.We found that commenters were more likely to show agreement when the posts containing some supporting evidence, such as narratives or statistics.The release of the GMO documentary generated more agreeable comments than the event of the policy announcement, although both events signified the negativities and challenges that the GMO faced. Similar to previous literature (Yu and Xu, 2016), the negative attitude toward GMOs is also noted in our study with commenters more likely to agree with the anti-GMO posts.When GMO benefits were mentioned, commenters were less likely to agree or to repost.For example, a post on 31 March 2014 mentioned that consumption of GMO products would not cause diseases and the rumors surrounding GMOs were not true.The post generated dozens of comments and over 60% were negative (see the original post below).Commenters were also more likely to agree with the posts from non-celebrity accounts (i.e.ordinary users).This finding suggests that even though people are more likely to follow and engage with the GMO posts coming from organizations and celebrity/online influencers' accounts, they tend to agree more with posts coming from unknown users.It may be because these unknown users are people "like me" and this perception of self-representativeness makes users more likely to agree with the opinion raised.This is an area that future studies can further explore. One important contribution of this study is that our findings on commenters' behavior can extend the theoretical considerations of user engagement research.The study found that when commenters saw a GMO post that had been reposted by others, they were more likely to agree with the original GMO post.In other words, when individuals saw the "//@" sign in the message, it transfers some of the persuasive power to those who made comments and these commenters tended to agree more with the message comparing to those who saw the original post directly. Theoretically, when a message is being reposted or retweeted on social media, it signifies (1) public agreement with someone; (2) validation for others' thoughts; (3) amplification or spreading the message to new audiences (Boyd et al., 2010).Based on the theoretical explanation of reposting behaviors, it is likely when commenters saw a GMO post via a repost, they perceived it as others also support this."As a result, they tended to agree more when they were exposed to a GMO message via reposting.The persuasive effect of reposting was evident in this study, which can possibly impact commenters' attitudes toward an issue. Why do people repost? 
Reposting or retweeting behaviors have been studied widely on microblogging sites (Boyd et al., 2010;Luo et al., 2013).Besides sharing information and helping messages to relay on a social media platform, the reposting behavior also suggests an attitude of supporting or agreeing with a message (Boyd et al., 2010).In this study, users were found more likely to repost a GMO message when it mentioned risks or statistics.GMO posts that contained personal stories or GMO benefits were less likely to be further spread on Weibo.This implies that users still perceive GMO products as more risky than beneficial. The reposting of GMO messages was more aggressive in 2014 when the GMO documentary was released than when the policy change was announced in 2016.Male commenters were more likely to repost than female commenters.The commenters who had more followers were more likely to repost.Additionally, we found that commenters who saw a GMO post via reposting were 7.8 times more likely to repost it.This is in line with another finding in this study that commenters considered the reposted message as more than just a click of the repost button.When they viewed a GMO post via others' reposts, they were more likely to relay the reposting behavior.This evidence is valuable to extend the current theoretical understanding of the spreading network of GMO information on social media. GMO issues in China The attention to GMO issues remained active in scholarly research partially because the public still had a mixed understanding of this scientific development.Rumors, conspiracies, and misinformation surrounding GMOs (Wang and Song, 2020;Wang et al., 2021;Yu and Xu, 2016) may confuse how people are supposed to comprehend them.One risky finding we discovered is that commenters were more likely to agree with unknown users on GMO discussions, which means that unverified accounts may be more persuasive than known sources.This may contribute to the spread of untruthful information about GMOs and provoke new spirals of controversies surrounding GMOs.The pause of commercialization of GMO crops suggests that there will still be years before the public is ready to accept GMO products (Jin, 2017). Limitations This study is subjected to a few limitations.Given the nested nature of the data, multi-level logistic regression would be an ideal option.However, unlike continuous outcome variable models, estimating multi-level logistic regression models requires a large number of groups in order to achieve acceptable coverage rates for the variance component estimates (Ali et al., 2019).Because of the small number of groups in our dataset, single-level logistic regression models were used as an alternative approach. The second limitation is that our data were collected back in 2017.Therefore, our results only reflected the public discourse of this controversial issue at that time.It is vital to note that the public understanding of GMOs can evolve over time (Marques et al., 2015).As a result, our results remained meaningful as they documented two important historical GMO events in China and how the public responded to them in history.In addition, some recent literature suggest that the issue of GMOs remained as a scholarly focus of researchers due to the wide spread of rumors and conspiracies surrounding this issue (Wang and Song, 2020;Wang et al., 2021).Our project helped contribute to this line of research. 
Another limitation is that we did not have the ability to differentiate between reposts made by real users and by fake automated accounts.The non-human accounts on Weibo are referred to as bots, the type of accounts that can repost messages automatically on social media such as Weibo, Twitter, and Facebook (Bai, 2018).Because of the increased censorship targeting Weibo, more and more users chose to use fake accounts to disguise themselves from being identified as dissidents (Bai, 2018;Liu and Zhao, 2021).Therefore, in order to fully understand the nature of reposting behavior, future studies need to examine whether reposts were made by bots or real accounts. In this study, we focused on studying comments on GMO posts and commenters' behaviors.The set of commenters' characteristics (e.g., gender and number of followees) has been rarely explored in previous research, which left this part of the investigation lacking enough theoretical foundation.We also want to point out that the retrieved data for the variable gender was obtained from registered user information on Weibo.Users can be untruthful when disclosing their gender on social media, which can reduce the validity of the results associated with users' gender in this study.We included this part of the examination as an attempt to conduct exploratory research.One significant result we found by including this set of exploratory variables is that the reposting symbol "//@" generated more agreement and reposting behaviors, which can set up a new direction for future research. Conclusion and implications This study has extended the current literature on how people engage with a scientific topic on social media in multiple ways.Firstly, the study revealed that users chose to react to different GMO events in different ways.Celebrities, online influencers, and the government facilitated the online discourse of GMOs.Secondly, the nature of the event and the characteristics of GMO messages can all affect how users engage with the message.Lastly, our study discovered the persuasive power of the symbol "//@".When commenters saw a GMO post via others' reposts, they commented in a more agreeable way and were more likely to repost them. This project has also revealed important implications for the public understanding of GMOs.As GMOs remain a controversial topic in China (Cui and Shoemaker, 2018;Yu and Xu, 2016), there is still a long way to go before the public accepts GMO products as part of their daily food choices. Furthermore, social media continued to serve an important public sphere to promote public understanding of an important scientific topic.And GMOs were no exception. 
Table 1. Definitions of manually coded variables.
Table 2. Means and percentages of comments received by source characteristics and event type.
Table 3. The number of comments received by posts addressing different stakeholders.
Table 4. Ordinal logistic regression predicting the appearance of agreeable comments.
Table 5. Binomial logistic regression on reposting of the original post. Note: the number of followers and the number of accounts followed were re-scaled; the units for those numbers are thousands of accounts. Ordinary users = 1, non-ordinary = 0. Event type: 2016 = 0, 2014 = 1.
2023-09-01T15:16:22.066Z
2023-08-30T00:00:00.000
{ "year": 2023, "sha1": "1b87c9d36e5b06d182ca4e72ab773c2e52cd769b", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/27523543231196341", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "1c0d6be944307e6ff6d9973a5c17742f19f9d496", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
56144291
pes2o/s2orc
v3-fos-license
Research on multimodality in the context of highly qualified national scientific journals : a cartographic study Contemporary social practices are increasingly permeated by a variety of semiotic resources or, in other words, by the combination of verbal and non-verbal languages - multimodality. However, it was not until recent times that multimodality received the status of a field (BEZERRA; NASCIMENTO, 2013). The present study then aims at mapping the studies on multimodality published in five A1 national scientific journals that were developed in Brazil in the period from 2009 to 2013 in terms of 1) theoretical and methodological trends; 2) genres investigated; 3) generic aspects approached; and 4) the geographical distribution of the studies in Brazil. To do so, the abstracts and the articles of the experimental studies were considered in the analysis. The results indicate that there was a small amount of publication related to the field. However, it was possible to note a growth in the number of studies in the period investigated. The majority of these studies are comprised in the social semiotic basis (BEZEMER; JEWITT, 2010) investigating mainly genres (MOTTA-ROTH, 2008) on the media sphere, especially magazine cover and advertisement. The publications are mostly affiliated to institutions on the Southeast region. It may be possible to argue that UFMG is the university in the Southeast region that is closest to holding a niche of investigation on multimodality, since there is a pattern in terms of theory use. The present study intends to contribute to the consolidation of the field in the national context as well as, pedagogically, help researchers and people interested in multimodality. Introduction People are constantly involved in social practices that require diverse semiotic abilities and knowledge in order to achieve mutual understandings.On a website, for example, it may be necessary to be able to read written texts in connection with images, with the layout of the page in order to interpret the meaning(s) being exchanged.In this view, "it is now no longer possible to understand language and its uses without understanding the effect of all modes of communication that are copresent in any text" (KRESS, 2000, p.337).Multimodality is so considered as "comprising a broad field of inquiry unified by the claim that people construct meaning and communicate through a range of resources that may include, but go beyond verbal language" (NASCIMENTO, 2012). Considering the importance of multimodality in contemporary society and the recency of the multimodal discourse analysis as a field of study, the present study aims at investigating the studies on multimodality developed in Brazil in a five-year period -from 2009 to 2013.This study considers the context of five highly qualified national scientific journals in order to provide a map of the: 1) theoretical and methodological basis that underlies the studies; 2) genres investigated; 3) generic aspects 1 approached; and 4) geographical distribution of these studies in national territory. It is important to highlight that this work is part of an umbrella project titled Critical Genre Analysis and implications for multiliteracies 2 (HENDGES, 2012, GAP/CAL nº 031609) which aims at developing genre analyses from the perspective of Critical Genre Analysis (MOTTA-ROTH, 2005, 2006, 2008) with implications to the pedagogy of multiliteracies. 
In this perspective, several multimodal genres have been analyzed, such as comic strips, pop science news, English textbooks, the academic poster, and the audiovisual research article. The present study, as well as the other studies included in the umbrella project, intends to contribute to the consolidation of the still young field of multimodal discourse analysis. In order to accomplish the objectives of this research, the theoretical and methodological trends will be identified considering the approaches to multimodality proposed by Bezemer and Jewitt (2010). The genres and the generic aspects will be mapped, when necessary, in the view of Critical Genre Analysis (MOTTA-ROTH, 2005, 2006, 2008). The pedagogical implications of the present study will be projected in relation to the pedagogy of multiliteracies perspective (THE NEW LONDON GROUP, 1996). These bases are discussed in section 2. [Footnote 1: In order to avoid misunderstandings, the term "generic aspect" will be used in the present study as referring to "aspect of a/the genre" and not to "general aspect". Footnote 2: The original title is Análise crítica de gêneros e implicações para os multiletramentos (HENDGES, 2012, GAP/CAL nº 031609).] Multiliteracies In contemporary life, social practices are increasingly pervaded by different semiotic modes, in other words, by the combination and interaction of verbal and non-verbal languages (BEZERRA; NASCIMENTO, 2013). At school, for example, Kress and van Leeuwen (2006) point out that images are highly encouraged by teachers in the early years of primary schooling (in the sense of illustration), but they start giving space to written expressions by the end of this period. In secondary schooling, images become more a part of school subjects as specialized representations (maps, diagrams, charts). In this sense, although the social practices lived by students in the school context comprise images as one mode of meaning, their role in each context is not really taught and, most importantly, "assessment continues to be based on writing as major mode" (KRESS and van LEEUWEN, 2006, p. 16). Outside school, the image "plays an ever-increasing role" (KRESS; van LEEUWEN, 2006, p. 17): whether in the print or electronic media, whether in newspapers, magazines, CD-ROMs, or websites, whether as public relations materials, advertisements or as informational materials of all kinds, most texts now involve a complex interplay of written text, images and other graphic or sound elements, designed as coherent (often at the first level visual rather than verbal) entities by means of layout. But the skill of producing multimodal texts of this kind, however central its role in contemporary society, is not taught in schools. (KRESS; van LEEUWEN, 2006, p. 17). A number of studies have thus presented knowledge about multimodality as essential to effective social interaction in genres (UNSWORTH, 2001; IEDEMA, 2003; BEZERRA; NASCIMENTO, 2013). These studies consider that: being able to read a multimodal text is very important in a society where people are constantly bombarded with images and verbal language put together in a myriad of layouts across modes, media and genres, especially with the solidification of the internet. (BEZERRA; NASCIMENTO, 2013, p. 138). To take account of these issues in contemporary life, The New London Group (1996) proposes the concept of multiliteracies, which addresses "the multiplicity of communication channels and media, and the increasing saliency of cultural and linguistic diversity" (p. 63).
Based on this concept, they create a pedagogy of multiliteracies that focuses on the teaching and learning of modes of representation that go much beyond verbal language. In this view, "language and other modes of meaning are dynamic representational resources, constantly being remade by their users as they work to achieve their various cultural purposes" (THE NEW LONDON GROUP, 1996, p. 64). Language is then no longer approached as monolingual and monocultural. Multimodal discourse analysis emerges as increasingly important in multiliteracies studies, since it takes on the responsibility of combining linguistic, visual, audio, gestural and spatial modes of meaning and of explaining their relationship in a meaning-making context (THE NEW LONDON GROUP, 1996). Different approaches to multimodality have then been developed. The first approach, social semiotic multimodal analysis, is originally associated with the work of Kress and Van Leeuwen (1996; 2006). In their book titled Reading Images: The Grammar of Visual Design (1996), the authors initially used Halliday's theories of social semiotics and systemic functional grammar to build and offer "a framework to describe the semiotic resources of images and analyze how these resources can be configured to design interpersonal meaning, to present the world in specific ways, and to realize meaning" (JEWITT, 2009, p. 29). In social semiotic multimodal analysis, there is an emphasis on "how the context of communication and the sign-maker shaped signs and meaning" (JEWITT, 2009, p. 29), which means that the focus is centered on "the sign-maker and their situated use of modal resources" (JEWITT, 2009, p. 30). In this approach, "signs, modes and meaning-making are treated as relatively fluid, dynamic and open systems intimately connected to the social context of use" (JEWITT, 2009, p. 30). Mode is in this perspective understood as "shaped by the daily social interaction of people" (JEWITT, 2009, p. 21). The second approach, multimodal discourse analysis, emphasizes "the metafunctional systems underlying semiotic resources and the integration of system choices" in multimodal phenomena (JEWITT, 2009, p. 32). The focus is, thus, on the system and the system in use (semiotic resource, not mode) in a hierarchical organization. So, analytically, it is on: how a system achieves this focus, how the metafunctions are realized through the systems of meaning which constitute the meaning potential of semiotic resources, and how system choices integrate in multimodal phenomena to create meaning in the context of the situation and the context of culture (JEWITT, 2009, p. 32). The third approach, multimodal interactional analysis, is associated with interactional sociolinguistics (particularly the Scollons' work on mediated discourse, but it also includes Goffman, Gumperz, and Tannen, cited in JEWITT, 2009), Kress and Van Leeuwen's work on multimodality (2001, cited in JEWITT, 2009) and digital technology (JEWITT, 2009). This approach emphasizes "the notion of context and situated interaction" (JEWITT, 2009, p. 33).
In this sense, the focus is "on the action taken by a social actor with or through multimodal mediational means […]" (JEWITT, 2009, p. 33). The attention in this approach is on the interaction, which is considered as combining linguistic elements as well as gesture, gaze, posture, movement, space and objects in a specific situation (JEWITT, 2009). Also, in contrast with the previous approaches, modal systems are not a concern but the interplay between modes, since "mode, sign-makers, and context are too intimately connected to tear apart" (JEWITT, 2009, p. 34). These three approaches categorized by Jewitt (2009) - 1) social semiotic multimodal analysis, 2) multimodal discourse analysis and 3) multimodal interactional analysis - represent the main perspectives on the study of multimodal genres. One year later, in 2010, Bezemer and Jewitt reviewed this categorization and proposed a new categorization of approaches to multimodality: the social-linguistic and the social semiotic approach (BEZEMER; JEWITT, 2010). For the purpose of the present study in mapping the approach(es) adopted by research on multimodality developed in Brazil, these two perspectives will be considered to locate the theoretical and methodological basis underlying each study that constitutes the corpus. The social semiotic approach, in contrast with the social-linguistic one, presents as its starting point the idea of extending "the social interpretation of language and its meaning to the whole range of modes of representation and communication employed in a culture" (KRESS, 2009; van LEEUWEN, 2005, cited in BEZEMER; JEWITT, 2010, p. 4). Based on this view, three theoretical assumptions are stated: 1) "social semiotics assumes that representation and communication always draw on a multiplicity of modes, all of which contribute to meaning" (BEZEMER; JEWITT, 2010, p. 4). So, it is central to analyze and describe each meaning-making resource used in specific contexts and to develop explanations on how these resources make meaning (BEZEMER; JEWITT, 2010); 2) "multimodality assumes that all forms of communication (modes) have, like language, been shaped through their cultural, historical and social uses to realize social functions" (BEZEMER; JEWITT, 2010, p. 5); and 3) "the meanings realized by any mode are always interwoven with the meanings made with those other modes co-present and co-operating in the communicative event" (BEZEMER; JEWITT, 2010, p. 5). For the second and third stages of this study, mapping the genres and the generic aspects that are investigated in these studies, the analysis will be guided by the concept of genre proposed within the framework of Critical Genre Analysis (MOTTA-ROTH, 2005, 2006, 2008), briefly reviewed in section 2.3.

Critical Genre Analysis

In contemporary life, there is "an increasing interest on the analysis of different discursive genres of social life from activities and social roles recurrent in daily life in a diversity of cultural contexts" (v. MOTTA-ROTH, 2008, p. 342). In this interest, Critical Genre Analysis "combines the theoretical framework from Genre Analysis, Systemic Functional Linguistics and Critical Discourse Analysis" (MOTTA-ROTH, 2008, p. 375). This combination has resulted in an expansion in the concept of genre in the 2000s: this expansion demands that the analysis consider the production, distribution and consumption conditions of texts, as well as focus on texts that circulate in society considering the background of the historical moment. The purposes and economical organization of social groups are considered in terms of daily life, business, means of production, ideological positions, etc., that determine the content, style and composition of genres [...] (MOTTA-ROTH, 2008, p. 351). Considering this view, genres "refer to relatively stable kind of 'enunciados' (cf. Bakhtin, 1952-1953/1992a; b), used to specific purposes in a specific social context. These are social processes that guide to recognizable and shared conventions and expectations (cf. Grabe 2002: 250)" (MOTTA-ROTH, 2008, p. 351). Having these theoretical frameworks as bases, the data from the present study were collected and analyzed following the procedures discussed in section 3.
The corpus

The corpus of the present research consists of abstracts and articles of studies on multimodality published in five highly qualified national journals - Linguagem em (Dis)curso (LemD), [Footnote 3: The original title is Coordenação de Aperfeiçoamento de Pessoal de Nível Superior.] This classification is available in WebQualis, "an application that allows the classification and search of Qualis in each area as well as the publication of the criteria used to classify journals and proceedings" (CAPES, 2009). Using this application to search the A1 journals in the area [Footnote 4: The original citation is "tal expansão demanda que as análises considerem as condições de produção, distribuição e consumo do texto, e focalizem os textos que circulam na sociedade contra o pano de fundo do momento histórico. Olham-se as finalidades e a organização econômica dos grupos sociais, em termos de vida cotidiana, negócios, meios de produção, formações ideológicas, etc., que determinam o conteúdo, o estilo e a construção composicional dos gêneros [...]" (MOTTA-ROTH, 2008, p. 351). Footnote 5: The original title is Laboratório de Pesquisa e Ensino de Leitura e Redação.] of Languages/Linguistics, the system provided a list with 120 journals (downloaded in April 2014). These journals were organized according to five categories (Table 1). The journal has released ten issues and a total of 112 articles. In relation to issue 57, the nine studies published do not present an abstract. For this reason, they were not considered part of this research sample, since the abstracts are the units that were initially analyzed to define whether the journals present a multimodal study or not.

DELTA

Documentação de Estudos em Linguística Teórica e Aplicada (DELTA) is a publication from Pontifícia Universidade Católica de São Paulo (PUC-SP) "addressed to all areas of study concerning language and speech, whether theoretical or applied" (DELTA, 2014). The journal publishes a wide variety of genres such as articles, reviews, booknotes, debates, and round tables. Considering exclusively the number of articles, DELTA has released 14 issues and a total of 105 articles.

Analytical procedures

Firstly, the 650 abstracts were analyzed aiming at the selection of the ones that present a study on multimodality. For this purpose, the abstracts were scanned in order to find keywords, in Portuguese and in English, that can characterize a study on multimodality, such as 'multimodality', 'Multimodal Discourse', 'non-verbal', 'Kress', 'van Leeuwen', 'images', 'visual', 'semiotic mode', 'multiliteracy'. Secondly, the abstracts selected were classified according to nationality, that is to say, whether they were reporting a national or international study, by considering the author's institutional affiliation. Only the national studies are relevant to this research, since the objective is to map the studies on multimodality in the Brazilian context. Thirdly, the studies were classified as theoretical or experimental. In this research, only the experimental ones were taken into account, since the objective here is to have a map of the genres that have already been investigated in order to offer subsidy to language teaching. Typically, only experimental studies, as opposed to theoretical ones, develop the analysis of an object of study such as a genre.
Fourthly, the final sample of abstracts and articles was analyzed in order to identify, if possible, the four categories mentioned in the objectives: 1) theoretical and methodological trends; 2) genres being investigated; 3) the generic aspects being studied; and 4) the geographic distribution of the studies all over Brazil. Lastly, the data were quantitatively and qualitatively analyzed in order to verify if there is predominance in one or more categories and if it is possible to find any pattern in the studies. As a final result, this study should provide a map of the multimodal studies developed in Brazil in terms of national journals.

Results and discussion

The results of the present study reveal that in the five national journals selected for the analysis, only 22 articles out of a total of 650 are related to multimodality (Table 2). If, on the one hand, the average of studies on multimodality is low, on the other hand there has been a growth of interest in the topic in the investigated period (2009-2013). Table 3 also shows that the interest in the field has grown when each journal is considered separately. DELTA and Ilha include the topic in the later years (2012, 2013) and, since 2010, LemD, Gragoatá and RBLA have regularly published at least one article on multimodality per year. These 22 articles are then investigated in terms of the four aspects presented in the objective of this study.

Theoretical and methodological trends

In order to define the approach that each article follows, it was first necessary to verify what literature and technical concepts the studies present in relation to multimodality. To do so, the literature review section or words typically connected to this section were analyzed. Later, the classification of multimodal studies proposed by Bezemer and Jewitt (2010) was used to identify the approaches to multimodality; the majority of the studies (thirteen of the twenty-two) are classified in the "other bases" category. Two scientific journals require special attention in this classification because of the number of studies that consider theoretical bases other than the ones proposed by Bezemer and Jewitt (2010): LemD (six studies of nine) and Gragoatá (all five studies). It is important to highlight that all these studies consider semiotic modes other than the linguistic one, and this could indicate a movement towards the consolidation of multimodality as a research area. However, their frameworks are not typically recognized as focusing mainly on multimodal aspects, since they do not present categories for the analysis of images, such as the ones used in the social semiotic and social-linguistic approaches (the two main perspectives within multimodality). In other words, images were considered in the investigation, but the theoretical and methodological bases may not cover their analysis as they covered verbal language. The second category in recurrence is the social semiotic approach, which underlies the total number of studies on multimodality in DELTA and Ilha and a small number in RBLA and LemD (two for each journal).
It is important to consider that the majority of studies presenting a social semiotic approach use at least one of the Reading Images: The Grammar of Visual Design editions written by Kress and van Leeuwen in 1996 (first edition) and 2006 (second edition): seven studies of eight. Such recurrence may be explained since the social semiotic approach is initially associated with the work of these authors (JEWITT, 2009). Gragoatá is the only journal that does not present studies using the social semiotic approach. Also, the studies were published in different years (2010, 2011, 2012 and 2013). So, it may be possible to argue that this journal is more inclined to develop and publish research on "alternative bases". Table 4 also reveals that the social-linguistic approach is not often used in the analysis of non-verbal language in the context of these five journals, since none of the studies of the corpus presented it in the literature review section. Also, based on the data, it seems uncommon to combine two or more approaches in a same study. The second stage of analysis is the genre(s) investigated in each study. The results are shown in the next section.

Genres Investigated

The genres investigated in the 22 publications were mapped considering firstly a) the researchers' definitions of their object(s) of study as genre (example 1); secondly b) the Critical Genre Analysis (MOTTA-ROTH, 2008) approach to genres (example 2); and thirdly c) situations in which the genres could not be identified (example 3). As shown in example 1, the article presents a discussion on the concept of genre, using it to define the object under analysis - MSN Messenger - as a digital genre. In such cases, explicit reference to the object of study as a genre was considered to map the genre under investigation in the study. Lastly, example 3 refers to an article in which it was not possible to identify the object under analysis as a genre, although there was an object being investigated. Therefore, in such cases, the publication will not be considered in the "genre investigated" category of analysis.

Example 3. Extract from 20124#18 article: "WebQuests são ambientes multimodais de aprendizagem colaborativa que incentivam os participantes a interagir no processo de desenvolvimento de projetos on-line pelo uso da web e de seus recursos. Representam um modelo de pesquisa orientada, focada na busca de informações para resolver uma situação problema, realizada no espaço da internet (DODGE; MARCH, 2007; DODGE, 2008). As fontes de pesquisa são recomendadas por meio dos links fornecidos para que os participantes não se 'afundem' no oceano de informações da internet - são, pois, uma estrutura de aprendizagem assistida (MARCH, 2004)." (My emphasis) [Translation: WebQuests are multimodal collaborative learning environments that encourage participants to interact in the process of developing online projects through the use of the web and its resources. They represent a model of guided research, focused on the search for information to solve a problem situation, carried out in the space of the internet (DODGE; MARCH, 2007; DODGE, 2008). The research sources are recommended through the links provided so that participants do not 'drown' in the internet's ocean of information - they are, therefore, a structure of assisted learning (MARCH, 2004).]

Based on these criteria, a number of different genres were identified, as shown in Table 5. It is also relevant to consider that although these two genres were studied in several exemplars of the corpus, a wide range of other genres is also investigated under the media sphere, such as the comic strip, the audiovisual chronicle, and the newspaper page cover. This result shows the versatility of this sphere in holding a big number of different genres. Another important aspect is that all five journals present at least one genre belonging to the media sphere.
The second sphere in recurrence is the literary one, with five genres under investigation: literary book cover, plays, visual poem, short story and literature book for children. Gragoatá covers three of these five genres. This amount of publications in Gragoatá may be explained by the fact that the journal covers the field of literature in addition to linguistics/applied linguistics. Only DELTA and Ilha do not consider genres under this sphere in their publications, since it is not in the scope of these journals as stated on their websites. Among the twenty-two studies, two belong to the third criterion: situations in which no genre could be identified. The focus of the analysis in these two publications is not on a genre, but on a happening (enunciado-acontecimento) and on a multimodal environment. Therefore they could not be considered in the present analysis. When crossing the data from the theoretical and methodological trends with the genres investigated, it is possible to observe an interesting pattern. Table 6 shows that the media sphere is also predominant in the studies classified in the social semiotic approach. Of the eight publications associated to the studies of Kress and van Leeuwen (1996, 2006), seven analyze genres located in the media sphere, more precisely, magazine genres such as advertisements, magazine cover, and magazine article. The geographic distribution of the studies, in turn, reveals that interest in this area of study is not isolated, but of national relevance, as visually represented in Figure 1. The data also reveal that the studies are spread across different universities in each region. The South, for example, holds five publications affiliated to five different educational institutions. UFMG is the university that stands out, since it hosts four of the twelve studies. In this view, except for UFMG and perhaps Unicamp (three publications), there seem to be no research niches on multimodality in terms of universities. If, on the one hand, it is not reliable to affirm the existence of research groups on multimodality, it may be possible to consider that this topic is drawing the attention of researchers in many different educational institutions. Two interesting patterns could be addressed when crossing the results from the theoretical and methodological trends with the geographic distribution. The results of the present analysis also intend to contribute, in terms of pedagogical applications, to helping students and researchers interested in the field of multimodality to find previous studies, research groups, universities interested in the topic, genres and generic aspects already investigated, and theoretical and methodological approaches used, by providing them with a panorama of the field in the context of A1 national journals.
Three considerations in relation to the limitations of the present study need to be addressed: first, we understand that the number of journals selected and the corpus of the present research are still small to fulfill the purpose of this study in such a national magnitude, and, for this reason, further research needs to be conducted. However, these results can be considered an initial view of studies on multimodality in the specific context of A1 national journals. Second, we tried to delimit keywords to reach the largest number of studies on multimodality when scanning the abstracts. However, it is highly possible that these words have not covered all the studies considering multimodal aspects. Third, for lack of time, other national journals or even different contexts could not be included in the present study. In light of these considerations, further research about the situation of multimodality in Brazil is necessary to provide a wider view of the field in our country and a more solid amount of data for language teaching within the perspective of multiliteracies (THE NEW LONDON GROUP, 1996).

Gragoatá, Revista Brasileira de Linguística Aplicada (RBLA), Ilha do Desterro (Ilha) and Documentação de Estudos em Linguística Teórica e Aplicada (DELTA). All five journals are classified in the A1 level (Qualis) attributed by the Coordination for the Improvement of Higher Education Personnel (CAPES). The abstracts and articles were available online and were collected in the journals' websites considering the period from 2009 to 2013. These specific journals were chosen considering two main criteria: 1) the nature of the journal; and 2) acceptability through time. The first criterion refers to the selection of journals that cover a wide range of topics in order to verify the status of multimodality in different research fields. The second refers to the selection of journals that are important to the Laboratory for Research and Teaching of Reading and Writing (LabLeR), to which our research group is affiliated. This importance can be visualized when considering the number of studies published in these journals by members of LabLeR: considering the same relevant period of time for the current investigation, 2009 to 2013, all five journals present publications by researchers from LabLeR. LemD and RBLA, for example, published three studies affiliated to the lab, while Gragoatá, Ilha and DELTA published one each. This criterion was considered in order to check what is being published on multimodality beyond the work of our group.

Contextualization of the object of study

In Brazil, the national scientific journals are evaluated and classified on the basis of the Qualis system. According to CAPES (2009), Qualis "is a set of procedures used by Capes to evaluate the quality of the production of the post-graduation programs in Brazil". The journals are classified from the highest level of quality - A1 - to the lowest level - C.
For the purpose of the present research, only Brazilian journals that included studies on Linguistics and/or Applied Linguistics were considered. Once these were identified, the following criterion was the relevance of the title to the research group of LabLeR, as explained earlier, which led to the selection of five titles. Information on these five journals is given in the next subsections.

Linguagem em (Dis)curso

Linguagem em (Dis)curso (LemD), a publication of the Universidade do Sul de Santa Catarina (UNISUL), focuses on "issues related to the fields of text and discourse" (LemD, 2014). Besides articles, the journal publishes presentations, essays, reviews, editorials and retrospectives. However, for the purpose of the present analysis, only the genre research article was mapped. From 2009 to 2013, a total of 103 articles were published by LemD in fifteen issues.

Gragoatá

Gragoatá is a publication of both the post-graduation program in language studies and the post-graduation program in literary studies from Universidade Federal Fluminense (UFF). The journal publishes research articles and reviews that are relevant to the linguistic and literary areas (GRAGOATÁ, s/d). Considering exclusively the genre research article, Gragoatá has published ten issues and 147 studies from 2009 to 2013.

Revista Brasileira de Linguística Aplicada

Revista Brasileira de Linguística Aplicada (RBLA), a publication from Universidade Federal de Minas Gerais (UFMG), "encourages research in the field of Applied Linguistics" (RBLA, 2014). Besides articles, the journal publishes reviews and interviews. In the eighteen issues published between 2009 and 2013, RBLA brings 192 articles.

Ilha do Desterro

Ilha do Desterro is a journal linked to the post-graduation program of the Universidade Federal de Santa Catarina (UFSC) that publishes, since 1979, original articles and reviews in the areas of English language, Literature and Cultural Studies.

Figure 1. Spatial distribution of experimental studies on multimodality in Brazil.

Considering each journal, there seems to be no direct relation between each title and the geographical origin of the studies on multimodality it publishes. RBLA, for example, despite being affiliated to a Southeastern university from Minas Gerais, contains exemplars from three different regions (Northeast, Midwest, Southeast).
Table 1. Classification of national scientific journals in the area of Languages/Linguistics at the A1 level.

Table 2. Number of experimental studies on multimodality in the five national journals between 2009 and 2013. Considering these results, it is possible to note that the number of publications on multimodality is small.

Table 3. Number of experimental studies on multimodality across time. Table 3 reveals the intensification in the total amount of publications per year. Even though the rise was not continuous (there was a slight drop in 2013), in 2010 and 2011 the number of exemplars grew to five per year and then to six in 2012, as compared to 2009, when there was only one study published.

Table 4 reveals that the majority of studies on multimodality (thirteen of the twenty-two) rely on theoretical bases other than the ones proposed by Bezemer and Jewitt (2010).

Extract from 20092#15 article: "Na seção seguinte, após um breve histórico sobre o MST, será feita uma discussão sobre a construção da identidade coletiva por meio do modo visual a partir da noção de multimodalidade." [Translation: In the following section, after a brief history of the MST, a discussion will be developed on the construction of collective identity through the visual mode, based on the notion of multimodality.]

Table 5. Genres and spheres investigated in the corpus of the experimental studies on multimodality. The results show that a massive quantity of genres comes from the media sphere: twelve of the nineteen studies in which the genre could be identified. Two genres require special attention since they were studied in several exemplars of the corpus.
2018-12-10T22:10:40.972Z
2017-09-23T00:00:00.000
{ "year": 2017, "sha1": "d3ed9e679fa8df57278453ccde95cc6b69aa5285", "oa_license": "CCBYNC", "oa_url": "http://www.entrepalavras.ufc.br/revista/index.php/Revista/article/download/909/427", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d3ed9e679fa8df57278453ccde95cc6b69aa5285", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Geography" ] }
15564669
pes2o/s2orc
v3-fos-license
Hamiltonian systems with an infinite number of localized travelling waves

In many Hamiltonian systems, propagation of steadily travelling solitons or kinks is prohibited because of resonances with linear excitations. We show that Hamiltonian systems with resonances may admit an infinite number of travelling solitons or kinks if the closest to the real axis singularities in the complex upper half-plane of the limiting asymptotic solution are of the form $z_\pm=\pm\alpha+i\beta$, $\alpha\ne 0$. This quite general statement is illustrated by the examples of the fifth-order Korteweg-de Vries-type equation, the discrete cubic-quintic Klein-Gordon equation, and the nonlocal double sine-Gordon equations.

Introduction. Nonlinear localized travelling waves such as bright or dark solitons are key concepts for many branches of modern physics, including nonlinear optics, the theory of magnets, the theory of Josephson junctions, etc. It is known that in many dispersive systems the presence of these nonlinear entities is strongly restricted due to resonances with linear excitations. These resonances take place in Hamiltonian systems of various origin, such as the fifth-order Korteweg-de Vries equation [1], nonlinear lattices [2][3][4] and models with complex dispersion and nonlocal interactions [5,6]. As a result, it is quite typical that in such Hamiltonian systems the localized excitations either do not exist at all or exist only for specific values of some external parameters. In the latter case the nonlinear excitations are called embedded solitons (i.e. solitons "embedded" into the spectrum of linear waves). These embedded solitons have been discovered in hydrodynamics, nonlinear optics and other fields of modern physics [7]. To give an example, consider an operator equation (1) for a function u(ξ), where L_ε is a Fourier multiplier operator in ξ space with even symbol L(k) in k space, F(u) is a nonlinear function and ε is a parameter. The prototypical examples of problems leading to Eq. (1) are the generalized Korteweg-de Vries equation (2), or discrete or nonlocal Klein-Gordon equations (3), for u(x, t), where M_ε is a Fourier multiplier operator in x space. Above, ξ = x − vt is the travelling wave coordinate, and the operator L_ε in Eq. (1) includes both M_ε and v. We assume that in both cases ε = 0 implies a degeneration of the problem with L_0 = ∂_ξ². Consider a solitary wave u(ξ) which is asymptotic to the equilibrium state u ≡ 0 as ξ → ±∞ (the case of a kink wave which is asymptotic to a pair of equilibrium states u ≡ u_± as ξ → ±∞ can be analyzed in a similar way). Then the resonances correspond to the real roots of the dispersion equation (4) near u ≡ 0. If for some value of ε there exists a single pair of real roots k = ±k_0 in Eq. (4), we are in the situation when the resonance prohibits propagation of regular solitons in Eq. (2) and Eq. (3), and embedded solitons may appear. In this case, the velocity v of the soliton is typically not arbitrary but should be "adjusted" to avoid "gluing" with linear modes. In general, v belongs to some discrete set. This set may be empty (i.e., no localized waves propagate), or include a finite or infinite number of values. The case when Eq. (4) has more than one pair of real roots is more complex, and the presence of localized excitations in this case is highly doubtful. In this paper we address the following question: are there some conditions which would indicate the existence of infinitely many embedded solitons described by Eq. (1)?
If this is possible, can we describe this infinite set asymptotically? As a result, we present conditions for the existence of a countably infinite sequence of embedded solitons. The main assumption is that the limiting solution of Eq. (1) as ε → 0, being extended into the complex plane, should have a pair of symmetric singularities in the upper half-plane. We give an asymptotic formula for the values {ε_n} as n → ∞ for which embedded solitons exist. In terms of Eq. (2) and Eq. (3) this means the presence of an infinite number of velocities v for nonlinear localized excitations. Up to this moment no rigorous proof of this asymptotic theory has been found. We give some heuristic explanation of the mechanism behind the asymptotic formula and illustrate this result with three numerical examples. Surprisingly, it has been observed that in some cases the asymptotic formula predicts the parameters even of the lowest embedded solitons from this sequence with reasonable accuracy. We note that the idea that two symmetric singularities in the upper half-plane can be related to a countably infinite sequence of tangential intersections of stable and unstable manifolds can be found in [8] for the primary intersection point of two-dimensional symplectic maps. In this paper we generalize this principle to a more general class of physically relevant systems.

Main result. Consider Eq. (1), where u(ξ) is a real-valued function defined on R. Assume that L_ε is a real operator which depends continuously on the real parameter ε and satisfies L_0 = ∂_ξ². The Fourier symbol L̂_ε(k) is supposed to be an even function of k. Assume now that (a) the equation F(u) = 0 has the zero solution u = 0 with F′(0) > 0; (b) the dispersion equation (4) has only one pair of real roots k = ±k(ε) such that k(ε) → ∞ as ε → 0; and (c) the limiting equation (5) has an even localized solution ũ(ξ) such that ũ(ξ) → 0 as ξ → ±∞. In addition, the key assumption of our asymptotic theory is that the solution ũ(ξ) can be continued into the complex plane and the closest to the real axis singularities of ũ(ξ) in the upper half-plane are given by the pair z_± = ±α + iβ, with α, β > 0, which is symmetric with respect to the imaginary axis. Then we expect the existence of an infinite sequence of values {ε_n} such that for each ε = ε_n, Eq. (1) has a soliton solution u(ξ) with u(ξ) → 0 as ξ → ±∞, and this sequence obeys the asymptotic law (6), where ϕ_0 is a phase constant that depends on L_ε and ũ. This result can be extended naturally to the case of kink solutions of Eq. (1) connecting a pair of equilibrium states u = u_±, such that F(u_±) = 0 and F′(u_−) = F′(u_+) > 0. In this case, F′(u_±) appears instead of F′(0) in the dispersion relation (4), whereas the differential equation (5) is assumed to have a kink solution ũ(ξ) such that ũ(ξ) → u_± as ξ → ±∞.

Justification. Let us give some heuristic arguments for the justification of the main result. For small ε, the term H_ε ũ in the right-hand side of the inhomogeneous equation (7) dominates, and the solvability condition for this inhomogeneous equation [9] can be written approximately as the orthogonality condition (9). The asymptotic value of the integral in (9) as ε → 0 is determined by the closest to the real axis singularities of the integrand in the complex plane (the Darboux principle, see [10]). Since H_ε ũ(ξ) is even, bounded, and real-valued for real ξ, the main contribution comes from z_± = ±α + iβ and J_+ = J_− ≡ J.
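To make the role of these singularities concrete, the following is a minimal sketch of the residue calculation for the simplest case of two simple poles; the residue r = |r| e^{iφ} and the test integral are illustrative and are not taken from the paper, which treats poles of arbitrary order and branch points through the more general expansion discussed below. Consider

\[
J(k) = \int_{-\infty}^{\infty} f(\xi)\, e^{i k \xi}\, d\xi , \qquad k > 0 ,
\]

where f is real and even on the real axis with simple poles at z_± = ±α + iβ. Closing the contour in the upper half-plane and keeping only these two closest poles (residue r at z_+, hence −r̄ at z_−),

\[
J(k) \approx 2\pi i \left( r\, e^{i k z_+} - \bar{r}\, e^{i k z_-} \right)
       = -4\pi \lvert r \rvert\, e^{-\beta k} \sin\!\left( \alpha k + \varphi \right),
       \qquad r = \lvert r \rvert e^{i\varphi} .
\]

The integral is exponentially small in k but oscillatory, and it vanishes whenever αk + φ = πn; since k(ε) → ∞ as ε → 0, this produces a countable set of admissible parameter values rather than none, consistent with the structure of the asymptotic law (6). For α = 0 the same calculation gives a single-signed J with no zeros, in line with the requirement α ≠ 0 stated in the abstract.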
In the simplest case, when the integrand has poles of order n at the points z_±, the result is simply the sum of the residues at these poles multiplied by 2πi. In a more complicated case, the singularities z_± of H_ε ũ can be rational or transcendental branch points. For both cases, since H_ε ũ(x) is even and real for real x, we can write the expansion (10), where κ is a real number, κ ≠ 0, 1, 2, .... It is natural to assume that C(ε) ∼ C_0 ε^q as ε → 0 for some values of C_0 and q. Then, applying standard formulas [13] in the asymptotic limit k(ε) → ∞ as ε → 0, we arrive at (12), where φ_0 = arg(C_0). Consequently, zeros of J(ε) obey the asymptotic formula (6) with ϕ_0 = π/2 − φ_0. If ũ(ξ) is a symmetric kink solution of Eq. (5), the reasoning remains the same up to the point that H_ε ũ(ξ) is now odd in ξ.

Examples. The validity of the main result has been confirmed by many numerical studies. Below we give three illustrative examples that concern problems of different physical origins.

Example 1. Consider the equation (13), where ε is a parameter. Eq. (13) arises in hydrodynamics, where it describes travelling waves for the fifth-order KdV equation [1]. If ε = 0 and r > 3/√2, Eq. (13) has an exact soliton solution ũ given by (14). The closest to the real axis singularities in the upper complex half-plane are the simple poles (15). Since H_ε = −ε²∂_ξ⁴, we note that the singularities of H_ε ũ are situated at the points (15) and are poles of order n = 5 = −κ. The dispersion relation (4) reads as ε²k⁴ − k² − 1 = 0, and it has one positive root k_0(ε) such that k_0(ε) ∼ 1/ε as ε → 0. According to the main result, we expect that there exists an infinite sequence of values {ε_n} such that Eq. (13) has soliton solutions for ε = ε_n, obeying the corresponding asymptotic formula. Numerical computations strongly support this prediction. Fig. 1 shows the values α/(πε_n), which approach integers for larger values of n. The profiles of the three lowest solitons corresponding to points A, B, and C are shown in the insets by solid lines, together with the limiting soliton (14) by a dotted line. The discrepancy reduces quickly for larger values of n.

Example 2. Consider the nonlocal double sine-Gordon equation (17), where a > 0 is a parameter. In particular, this equation arises in nonlocal Josephson electrodynamics, where it describes layered structures [6] (the second sine harmonic is important if they include, for instance, ferromagnetic layers [11]). A list of possible kernels K_ε which arise in Josephson models can be found in [12]. We assume that K_0 is the Dirac distribution, so that Eq. (17) with ε = 0 reduces to the classical double sine-Gordon equation. If we denote the Fourier transform of K_ε by K̂_ε(k), then K̂_ε(k) → K̂_0 = 1 as ε → 0. Travelling wave solutions u(ξ) = u(x − vt) of Eq. (17) satisfy the equation (18), where v² < 1 is assumed. We consider 2π-kink solutions with boundary conditions at infinity. For a > 0, Eq. (19) has the exact 2π-kink solution (20), and the closest singularities to the real axis are the two logarithmic branching points z_± = ±α + iβ. The dispersion relation is assumed to have a single pair of real roots k = ±k(ε) for all v² < 1. In particular, if K_ε is the Kac-Baker kernel (21), then this assumption is satisfied. Let us present arguments that Eq. (18) with the kernel (21) admits an infinite sequence of 2π-kink solutions. We note that this equation can be reduced to the system of differential equations (22): v²u_ξξ = q + sin u + 2a sin 2u, −ε²q_ξξ + q = u_ξξ, where an additional variable q is introduced.
For the limiting kink ũ, we denote by q̃ the solution of the second equation of the system (22). Now, since H_ε ũ = ũ″ − q̃, we understand that H_ε ũ has a double pole coming from ũ″ at z_± = ±α + iβ, in addition to the logarithmic branching points in q̃. Hence n = 2 = −κ, and the expansion (10) holds with C(ε) → C_0 as ε → 0, where C_0 is real. Therefore, φ_0 = 0 in Eq. (12) and ϕ_0 = π/2 in Eq. (6). According to the main result, we expect that there exists an infinite sequence of values {ε_n} such that Eq. (18) with the kernel (21) has a 2π-kink solution for ε = ε_n, with the asymptotic formula (23) as n → ∞. Using an appropriate shooting method [14], we compute numerically the values of ε for which there exist 2π-kink solutions of Eq. (18) with the kernel (21). Numerical calculations strongly confirm the existence of the sequence {ε_n} as well as its asymptotic properties (23). The values of δ/(πε_n) for a = 1/8 and v = 0.1 are given in Table I and approach odd integers more closely for larger values of n. Evidently, each value ε_n depends on the parameter v; however, from the physical viewpoint, the inverse functions v_n(ε) are more important. Fig. 2 represents the dependence of the velocities v_n on ε for the first three 2π-kink solutions. The corresponding profiles of the 2π-kinks (solid lines) at the points A, B and C are shown in the insets together with the limiting kink (20) (dotted line). The difference between the actual kink and the limiting kink (20) is not visible already for the kinks at the points B and C.

Example 3. The discrete Klein-Gordon equation is one of the basic equations describing lattice dynamics in various contexts, from solid state physics to biophysics [2]. Travelling waves of the discrete Klein-Gordon equation satisfy the equation (24), where ε is the spacing between lattice sites and F is a nonlinear function. A bistable nonlinearity may support kinks which satisfy the boundary conditions lim_{ξ→±∞} u(ξ) = ±1. If γ = 0, Eq. (24) corresponds to the classical φ⁴ model, where no travelling kinks were previously found [4]. We anticipate that for γ > 0 and v fixed there exists an infinite sequence of travelling kinks for discrete values of the parameter ε. If ε = 0, Eq. (24) reduces to Eq. (26), where v² < 1 is assumed. For γ > 0, Eq. (26) has an exact kink solution ũ(ξ), and the closest singularities to the real axis are the two square-root branching points z_± = ±α + iβ. Using Newton's method with a fourth-order finite-difference approximation of the second derivative, we compute numerically the values of ε for which kink solutions of Eq. (24) exist. Numerical calculations strongly confirm the existence of the sequence {ε_n} as well as its asymptotic properties (28). The values of χ/(πε_n) for γ = 5 and v = 0.6 are given in Table II, with satisfactory agreement. Fig. 3 shows the first two kink solutions of Eq. (24) with γ = 5 and v = 0.6 (thick and thin solid lines) together with the limiting kink.

Conclusion. We have shown with three prototypical examples that Hamiltonian systems with resonances may admit an infinite sequence of travelling solitons or kinks if the leading-order asymptotic solution has a pair of symmetric singularities in the upper half-plane. This simple but universal observation reveals why travelling solitons or kinks have increased mobility in some nonlinear systems with resonances but not in others.
To emphasize the universality of this prediction, we mention two additional examples studied earlier in the literature, where the mechanism for the existence of an infinite sequence of travelling solitons or kinks should also take place. First, three families of radiationless travelling solitons were captured numerically in the saturable discrete nonlinear Schrödinger equation [15]. Although the leading-order asymptotic solution is not available in closed form, one can show that it has a pair of symmetric singularities in the upper half-plane; hence one can anticipate that there exist infinitely many families of travelling solitons in this example. Second, two moving kinks were reported in the modified Peyrard-Remoissenet potential of the discrete Klein-Gordon equation [3]. Since this model is a modification of the discrete double sine-Gordon equation, one can anticipate the existence of an infinite sequence of families of moving kinks in this example. Our study of some other models of this kind (e.g. various nonlocal generalizations of the double and triple sine-Gordon models) also confirms the asymptotic formula (6).
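The following short numerical illustration is not part of the original paper; it is a sketch, under an assumed test function and assumed parameter values, of the mechanism described above: a real, even function whose closest complex singularities lie at z_± = ±α + iβ yields a Fourier-type integral that is exponentially small yet oscillatory in k, and therefore vanishes on a countable set of wavenumbers.

# Illustrative sketch (not from the paper): oscillatory, exponentially small Fourier integral
# produced by a pair of singularities z_± = ±α + iβ. The test function f(x), the parameter
# values, and the simple quadrature below are assumptions of this sketch.
import numpy as np

alpha, beta = 1.0, 0.5
x = np.linspace(-60.0, 60.0, 240_001)     # fine grid; f decays like 1/x^2 at the ends
dx = x[1] - x[0]
f = 1.0 / ((x - alpha) ** 2 + beta ** 2) + 1.0 / ((x + alpha) ** 2 + beta ** 2)

for k in range(3, 13):
    J = float(np.sum(f * np.cos(k * x)) * dx)                           # numerical Fourier integral
    ref = (2.0 * np.pi / beta) * np.exp(-beta * k) * np.cos(alpha * k)  # residue formula for this f
    print(f"k = {k:2d}   J = {J:+.4e}   residue formula = {ref:+.4e}")

# J changes sign near alpha*k = pi/2 + pi*n, i.e. on a countable set of wavenumbers;
# with k(eps) -> infinity as eps -> 0, this mimics the countable sequence {eps_n} of the main result.

The sign changes, rather than a fixed-sign integral, are precisely what the condition α ≠ 0 provides; for α = 0 the same test function gives a strictly positive integral for all k.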
2013-09-01T05:11:38.000Z
2013-09-01T00:00:00.000
{ "year": 2014, "sha1": "edc0b242bf5c61861f729b2c554076490ccf92e5", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1309.0183", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d65899ebf822268c4cf7d10ea76200b56a971abc", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
169218949
pes2o/s2orc
v3-fos-license
Linking Transparency and Accountability to Local Legislative Performance in the Province of Nueva Ecija in the Philippines

Transparency and accountability are essential to the operation of governments. They are tools to prevent corruption. The study assessed the observance of the shared principles of transparency and accountability in the local legislative departments of five cities in the province of Nueva Ecija, in the Philippines. It discussed the importance of transparency and accountability in local governance; described the shared commitment to the governance principles of accountability and transparency of local legislators; and correlated local legislators' commitment to governance principles with legislative performance, measured in terms of the number of ordinances and resolutions made. Using the mixed methods of research and the Pearson Product Moment Correlation, the study found that: (a) the twin principles of transparency and accountability are necessary for local legislative operations; (b) local legislators observed the governance principles of transparency and accountability; and (c) actual observance of transparency and accountability is associated with a higher level of legislative performance. The study recommended that observance of transparency and accountability may be strengthened through the use of Information and Communications Technology, not only to monitor legislative duties but also to better respond to public demands and produce a quality of services commensurate to the value of public money.

Introduction

Transparency and accountability are essential to the operation of governments regardless of the type of regime and the territorial boundaries of States (Ackerman, 2004; Lindstedt & Naurin, 2010; Lindberg, 2013). They are seen as complements of policy effectiveness, such as when citizens collectively act against power holders who support bad policies or commit inaction despite the presence of urgency to act (Gabriel, 2017; Gabriel & Gutierrez, 2017). Transparency is a tool to increase government accountability to the people, especially in a democratic setup. It is the sunlight that "disinfects the dark corridors of power" from inefficiency and corrupt practices. When actual adherence is observed, these principles benefit citizens. These twin principles of good governance are so important that the World Bank (WB, 2008) considers "accountability exchange" and the free flow of information as preconditions for development (Islam, 2016). The need for actual adherence to transparency and accountability is especially true among governments that are negatively perceived as corrupt. The image of the Philippine bureaucracy is negatively perceived. According to a World Bank report (Quah, 2017), from 1977 to 1997 alone an estimated 48 billion dollars was siphoned from government coffers into the personal pockets of Philippine government officials. This is a huge amount which would otherwise have been allocated to development projects or to social services to benefit the poor, but was diverted to the selfish aggrandizement of bureaucrats (Briones & Zosa, 1987). This corrupt practice continuously destroys the integrity of, and public trust in, public institutions (Brillantes & Fernandez, 2011). Consequently, people in the government are generally and unfairly treated as crooks, slow in the delivery of services, inefficient and corrupt.
Making the situation worse are the inadequate engagement of citizens in public affairs and the inherently complex built-in procedures in public offices, which slow down the process of transactions in the government. In one study, it was found that Southeast Asia performed poorly in the observance of good governance principles (World Bank, 2006). The finding is similar to the case of Malaysia, wherein lack of transparency and accountability is perceived as a hindrance to economic growth. According to Hira, Murillo, and Kim (2016), developing countries in general lack, or are underdeveloped in, accountability and transparency mechanisms, ranging from property rights to independent judiciaries. Transparency and accountability are linked to reform efforts against corruption. Many laws in the Philippines have been passed by the Philippine Congress to address the issue of corruption. The 1987 Philippine Constitution, for instance, devoted an entire article to serve as guidelines for public officials and employees in the execution of their public duties (Bernas, 1996). The Code of Conduct and Ethical Standards for Public Officials and Employees (Republic Act 6713) is a codification of the standard of behavior that public personnel should observe in the performance of duties. Department Order No. 38-20016 created the Presidential Commission on Good Government (PCGG), assigned to recover the alleged ill-gotten wealth of former President Ferdinand Marcos. Unfortunately, these attempts bore no substantial results due to: lack of political will on the part of the leader and agency assigned to implement them; conflicting laws on corruption; judicial inefficiency; low salaries of government employees; weak citizenship; absence of role models; the punitive nature of anti-corruption policies; lack of reform priorities; and inadequate citizen engagement (Brillantes & Fernandez, 2011; Carino & Alfiler, 1986; Quah, 2010; Reyes, 1994). According to Carino and Alfiler (1986), the majority of anti-corruption laws penalize rather than prevent the commission of corrupt acts. They argued that effective anti-corruption measures must contain both a system for punishing offenders and a built-in mechanism for prevention, coupled with public support at the agency level generated from trust in public institutions sufficient to warrant citizens' active engagement in anti-corruption programs. If trust, therefore, is the important ingredient in citizens' active engagement to curb corruption, and the practice of transparency and accountability builds citizens' trust in their government, it follows that these governance principles also serve as inputs that link reform efforts to prevent corruption with tangible outcomes at the agency level. If this is correct, then public administration must be proactive in the fight to eliminate corruption and unethical behavior. And to improve the overall public image, observance of transparency and accountability is a necessary component (Henson, 2016). However, despite their importance in governance, there is a dearth of literature on transparency and accountability practices in the local context. Many of the studies are based on western experiences, theories, and practices. This study tries to add to the existing body of knowledge on the topic. It also aimed to countercheck the presence or absence of the theory-practice gap in public administration prevalent in the findings of prominent scholars such as Professors Brillantes and Haque.
The incongruence of theory and practice in the field of public administration has been the focus of debates in the last decades; thus, the study has academic value. The study assessed the observance of the shared principles of transparency and accountability in the city governments of the province of Nueva Ecija, Philippines. The subjects of inquiry are the legislative branches of five city governments, and the study links their observance of these principles to legislative chamber performance. The study has three main parts. The first is the introduction, which establishes the importance of transparency and accountability in local governments. The second part deals with the research methodology employed to gather and analyze the data, and the last part deals with the interpretation of the gathered data and its implications for the practice of public administration.

Objectives of the Study

The study measured the organizational effectiveness of the local City Councils of the component cities of Nueva Ecija, Philippines. Five City Councils and 55 councilors were the respondents of the study. The variables measured their commitment to the good governance principles of transparency and accountability. There were three specific objectives of the study, namely: 1) To discuss the importance of transparency and accountability in local governance; 2) To describe the shared commitment to the governance principles of accountability and transparency of local legislators; and 3) To correlate the local legislators' shared commitment to the governance principles of transparency and accountability with legislative performance.

Study Locale

The study locale consists of five cities in the province of Nueva Ecija in the Philippines. Meantime, to link legislative performance to the observance of transparency and accountability, the table below shows the legislative outputs of the five city councils in terms of ordinances and resolutions made during the period covered by the study (2007-2013). The data above were gathered from actual research in the area. The local legislative council's performance was measured through the number of policies made. It is one of the three branches of government in charge of lawmaking. It is also the branch of government drafting and approving the budget for the Local Government Unit (LGU) in a given fiscal year (LGC, 1991). The legislative outputs could be either in the form of ordinances or resolutions (Villaluz, 2004). The table above shows the legislative output of the respondent City Councils in terms of ordinances made. An ordinance is defined as a policy passed and implemented by the local legislative body. It is implementable within the political jurisdiction of the sitting legislature. An ordinance has the character of permanence and can only be amended by the same local legislative body. City B had the highest number of ordinances, with 646 deliberated and approved during the years 2007-2013, while City C has the lowest number of legislative outputs: only 74 ordinances were passed within six years. City A produced 29.46 percent of the ordinances produced by all cities in six years. The greatest percentage was produced by City B, with 45 percent of the overall ordinances produced by the five city councils in six years.
The lowest percentage of ordinances was produced by City C, with only 0.5 percent of the total ordinances produced and passed in the entire province. Another vital outcome of the local city council operation was the resolution. It is defined as the expression of the feeling, support or position of a local legislative body on a certain issue, phenomenon or event. It is based on the recorded minutes of the deliberations of the members of the parliamentary body. As the table shows, the city council with the highest number of resolutions made was City A, while the lowest number of passed resolutions was recorded by City C, with only 161 resolutions in six years.

Methodology

The study is a mixture of qualitative descriptive and quantitative research methods. The description of the legislators' observance of the principles of transparency and accountability is measured using frequency scaling. The data were gathered using a survey, and the questionnaire forms were distributed among 55 members of the five city councils. The answers solicited were subjected to further informal interviews and follow-up questions as the need arose. The data on file in the legislative chambers were also used to triangulate responses and interview results. There were 55 participants in the study. Each legislative body was represented by 11 local legislators, elected and serving during the years 2007 to 2013. Cross-references were also used to strengthen theoretical grounding. The data were analyzed using correlation statistics. The influence or relationship between observance of the governance principles of accountability and transparency and the number of resolutions and ordinances was analyzed using the Pearson Product Moment Correlation to link good governance principles to performance. After the internal validity and reliability of the instrument had been ascertained and informed consent granted, the instruments were distributed, and with a 90% retrieval rate, the data were gathered and analyzed. Each questionnaire required five to 10 minutes to fill out. The relationship between legislative body outputs and observance of governance principles determined the link between observance of principles and legislative performance.

Description of the Observance of Transparency in the Local Legislative Operation

It is a dictum that government officials are like goldfish in a goldfish bowl (Nigro & Nigro, 1997). How they execute their functions must be seen by the citizens. In other jurisdictions, transparency is observed not only by public organizations but also by private corporations dealing with government functionaries, leading to "sunshine legislation" (Hill, 1997). The term transparency is defined as a concept ensuring access to accurate and important information on the activities of the organization by people, whether insiders or outsiders (Ingrams, 2016). It has three forms: transparency can be passive, proactive or forced. In many studies on transparency, it is linked to accountability, as signified by the conceptual relationship between the two principles; thus, the greater the window of transparency, the higher the degree of accountability on the part of government officials. Table 3 shows the transparency rating of the city councils. As shown in Table 3, City A has the lowest adjectival rating on transparency. Meanwhile, City B has the highest rating on the transparency principle, with a weighted mean average of 3.56, verbally described as "very often observed".
The finding is significant considering that lack of public trust is correlated with lack of government transparency. As Table 3 shows, the common practice in all legislative councils studied is to observe transparency. Meantime, the constructs of on-time submission of reports, posting, and webpage presence accessible to the citizens of the city are observed by City B, which posts the highest average on transparency practice at 3.56.

Description of the Observance of Accountability in Local Legislative Operation

Accountability is constitutionally guaranteed. It is the reason behind the exercise of political power (Sender et al., 2008). To heighten the degree of accountability, the exercise of discretion by government officials is limited (Victoria, 2012). The concept of accountability implies the duty to act responsibly and to be accountable for one's actions. There are different mechanisms of accountability in public administration; they are: money, process, and outcomes. For this study, accountability is measured in terms of the outcomes (Graycar, 2016) of the legislative bodies in the form of ordinances and resolutions. The table below shows the accountability rating of the city councils. The respondent legislators rate the shared principle of accountability as "sometimes" observed. However, different from the majority is the regard for observance of internal rules, as it is rated "very often" by all the city councils. Each of the legislative chambers has its own internal rules. These direct the smooth operation of the legislative branch. Besides, failure to observe the rules would warrant suspension and/or penalty. That is why the majority of them assumed upon themselves the duty to observe internal rules. On the other hand, the average response is different. It is not considered a sine qua non of the legislative function. The average weighted mean of 3.06 points to the fact that it is not always observed. Among the accountability measures, observance of internal rules has the highest average weighted mean across cities, with a weighted mean rate of 3.49 or "very often." Among the five cities, it is City B that gives the highest average mean score of 3.50, described as "very often."

Transparency and Legislative Performance

There were three variables measured to determine the observance of transparency by local legislators, to wit: a) on-time submission of the financial report; b) use of the website to disseminate financial information; and c) updated email address and social media for feedback and civic engagement. The overall correlation results computed in the study on the relationship between the transparency variables and legislative performance are as follows: for City A, r = 0.391, not significant at 0.298, indicating little relationship with the quantity of ordinances; cities D and E also showed the same degree of correlation. Cities A, D and E yielded a significant relationship, although little correlation is recorded between legislative performance and observance of transparency. On the other side, the r-values of cities B and C are 0.719, interpreted as a strong degree of correlation, and 0.406, interpreted as moderately significant. This means that there is a direct positive correlation between observance of the transparency principle and legislative output. In theory, citizens' participation in parliamentary operations heightens the government's commitment to provide quality services and performance outputs.
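As a rough illustration of how r-values of this kind are computed, the sketch below applies the Pearson Product Moment Correlation to councilors' mean transparency ratings and ordinance counts per city; the figures are hypothetical placeholders (only City B's 646 and City C's 74 ordinances, and City B's 3.56 rating, follow values reported in the text), so the resulting r is illustrative rather than the study's own.

# Hypothetical sketch: Pearson Product Moment Correlation between mean transparency
# ratings (4-point scale averages per city) and legislative output (ordinance counts).
# Except for the values noted above, the numbers are assumed placeholders.
from math import sqrt

transparency = [3.10, 3.56, 3.20, 3.15, 3.25]   # mean rating per city (A..E); B's 3.56 from the text
ordinances   = [423, 646, 74, 150, 142]          # counts per city (A..E); B and C from the text

def pearson_r(x, y):
    """Pearson Product Moment Correlation coefficient of two equal-length samples."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov  = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

print(f"r = {pearson_r(transparency, ordinances):.3f}")
# r close to +1 or -1 indicates a strong association; r close to 0 indicates little association.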
The more open the window of transparency, the higher the degree of accountability.

Accountability and Legislative Performance

The accountability principle was measured using the following constructs: a) posting of legislative procedures; b) observance of internal rules; c) suspension of barangay officials; and d) support for recall elections. They are elements of accountability in parliamentary arrangements operating within a democratic system. Based on the data gathered, the following overall r-values were arrived at. The correlations between the accountability principle and the quantity of ordinances authored and approved per legislator are both negative and positive. For the council members, posting of procedures and observance of internal rules are negatively correlated with the number of ordinances. Recall elections and support for the suspension of erring barangay officials are strongly correlated with legislative performance. As a whole, cities A and E manifested little positive correlation; this implies that council members of cities A and E perceived accountability as related to the quantity of ordinances produced. Both cities A and E consider observance of accountability as part of their legislative functions, and their legislative outputs could be influenced by the manner in which they observed the twin principles of transparency and accountability. Meanwhile, City B shows a moderate relationship between the observance of the principle of accountability and legislative performance, with an r-value of 0.523, not significant at 0.149. The principle of accountability in local governance is proven to affect parliamentary performance. In general, increased accountability pushes government officials to perform better while in office. The good governance principle of accountability works toward the effective performance of parliamentary duties (World Bank, 2005). A negative but significant correlation between accountability and legislative performance is recorded in the case of City C, where the observance of a higher degree of accountability could be said to negatively influence the output of the city council. The negative correlation implies that an inverse relationship is observed between the variables of accountability and legislative output. This requires further study. All the rest of the respondents show no significant correlation.

Conclusion and Recommendations

The study proved that the five local legislative bodies of the component cities of Nueva Ecija observed the governance principles of transparency and accountability. But the description generated is "sometimes", which could be deduced as inconsistent with the basic principle that government must always observe transparency and accountability. The data also showed the link between observance of governance principles and legislative output. This means that the higher the observance of transparency and accountability, the greater the legislative output within a certain period of time. City B, having the highest degree of performance among the legislative bodies, showed the interplay between governance principles in practice and local legislature productivity. It is to be noted that, more than an individual set of principles, transparency and accountability are organizational norms upon which public trust in public institutions is rebuilt. The development of the New Public Administration Model (NPM) demands that delivery of tangible results be the end-all of the performance of every government office.
But this should not relegate the significance of the legislative process where good governance principles may be observed and adhered to. As the study showed, these are two important principles in public administration that influence governmental outcomes, if not ensure governmental operation, under the era of the New Public Administration Model where measurement of performance is done through concrete and tangible results of operation. The findings of the study generated from the local context are supported by studies elsewhere in Southeast Asia where the link between performance and observance of governance principles is established. Transparency, accountability and citizens' participation are proven to be correlated with improved performance and to minimize corruption in the government. For instance, citizens' participation in Thailand and Indonesia improved the performance of the local government of Praya Bunlou and mobilized the support of Forum Warga for participatory development (Gabriel, 2017). The practice of transparency is also found to provide benefits such as: (a) higher gross domestic product; (b) lower levels of corruption; (c) long-term economic development; (d) involvement in policymaking; and (e) creation of trust in the government. Transparency and accountability are effective tools against corruption. The emergence of ICT in public administration brings about a new and effective way of minimizing corruption in the delivery of public service and is a dependable ally in responding to the demands of accountability and transparency in the public setting (Orelli, 2016). It is also a tool to strengthen democracy at the local level where citizens could participate fully in governance (Bawan et al., 2017). The local legislative body is a subpart of an integrated whole and is interlinked with the bigger system of Local Government Unit (LGU) operations. Its malfunction delays crucial development programs of the entire LGU system. On the contrary, its effective and efficient performance of functions may hasten the realization of local development goals. Finally, ordinances are local laws that provide legitimacy to the use of public funds. They are the outputs of the observance of transparency and accountability in legislative operation. Apropos, adequate performance of legislative functions and the principles of transparency and accountability offer life and spirit to the otherwise barren constitutional principle that "public office is a public trust." Given the findings of the study, the following recommendations are hereby submitted: (a) For the DILG to devise a mechanism to track the whereabouts of local legislators when not in session. Transparency dictates that they must serve the number of hours required by their office. (b) For the local legislative body to require observance of the twin principles of transparency and accountability through mechanisms and structures that would ensure transparency and accountability in operation. The use of ICT is a wise option. The application of ICT may further the transparency and accountability principles in the arena of policymaking. (c) Create a local legislative body quality assurance team. This ensures the quality of ordinances and resolutions made. (d) The study may be replicated in some municipalities using quality ordinances, which could be linked to transparency and accountability in legislative advocacy.
(e) Finally, the study provides an area for future research where performance measurement and its applicability to the local context are employed in organizational practice, thereby minimizing the theory-practice gap.
2019-05-30T23:44:57.957Z
2018-07-01T00:00:00.000
{ "year": 2018, "sha1": "de31b6e92f0ebf782429461791689635833268f3", "oa_license": "CCBYNC", "oa_url": "https://www.macrothink.org/journal/index.php/jpag/article/download/13345/pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b34ff73c8d4d391695e41835dae8f1e77b88f9af", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Business" ] }
221969862
pes2o/s2orc
v3-fos-license
Human Detection through RSSI Processing with Packet Dropout in Wireless Sensor Network This paper presents a device-free human detection method using Received Signal Strength Indicator (RSSI) measurements of a Wireless Sensor Network (WSN) with packet dropout based on ZigBee. Packet loss is observed to be a familiar phenomenon in transmissions of WSNs. The packet reception rate (PRR) based on a large number of data packets cannot reflect the real-time link quality accurately, so this paper first proposes a real-time RSSI link quality evaluation method based on the exponential smoothing method. Then, a device-free human detection method is proposed. Compared to conventional solutions which utilize a complex set of sensors for detection, the proposed approach achieves the same using only RSSI volatility. The intermittent Kalman algorithm is used to filter RSSI fluctuations caused by the environment and other factors in the data packet loss situation, and online learning is adopted to set algorithm parameters considering environmental changes. The experimental measurements are conducted in a laboratory. A high-quality network based on ZigBee is obtained, and then RSSI can be calculated from the receiving sensor modules. Experimental results show the uncertainty of RSSI change at the moment a human passes through the network area and confirm the validity of the detection method. Introduction Object detection is a popular area of research. Detection based on WSN processes object information collected by sensors, sends it to the control center, and then analyses the data to realize the detection [1]. The sensor signal acquisition is significantly important, and the type of sensors affects the detection result. At present, pyroelectric infrared [2], voice signal, vibration signal, and magnetic signal sensors are commonly used for detection. However, these detection studies often require external sensors or the human body to carry corresponding equipment [3,4]. With the rapid development of the Internet of Things (IoT), massive data is delivered through trillions of interconnected smart devices. IEEE 802.15.4 is one of the preferred mechanisms to provide modulation and Media Access Control (MAC) in wireless IoT networks. Sensors and actuators in WSNs are usually deployed in edge environments where wireless channels are subject to burst packet loss due to multipath fading or high bit error rate. Packet loss is an important factor that reduces the robustness of WSNs. The research on correctly implementing reasonable control strategies to improve network performance has attracted wide attention from many scholars. At present, the data collection phase is one of the most critical phases in the whole process of communication between device and human. Numerous data collection solutions were proposed in the literature [5][6][7]. There are many methods for evaluating the quality performance of network links, which can be mainly divided into two categories. One is to use the PRR for evaluation [8]. The packet rate can directly reflect the current link quality, but in order to obtain an accurate link evaluation, it is necessary to calculate the packet rate through a large number of samples, which wastes a lot of energy and deviates from the low power consumption of wireless sensor networks. The other is to use RSSI and the link quality indicator (LQI) [9]. RSSI is one of the link quality evaluation indicators and has been widely used in localization [10], distance estimation, and link quality assessment. Mo et al.
[11] believe that RSSI values are an important indicator of network packet loss. When the RSSI value is close to the gray area, the packet loss phenomenon will occur remarkably, and a lightweight adaptive repair and adjustment decision method is proposed. See et al. [12] proposed that network link quality can be measured by a vector network analyzer or by the RSSI received at the receiving node, and the probability distribution of the RSSI value at different packet loss rates is used to study the relationship between the path loss and the packet loss rate. However, the existing literature shows that RSSI can be affected by many factors, for instance, the performance of WSN nodes and the temperature and humidity of the surroundings [13][14][15]. Each can change RSSI, which means RSSI cannot accurately reflect the quality of the network link. Hamida and Guillaum [16] found that people have a great influence on RSSI when walking around in the network area and proposed that RSSI cannot accurately reflect network link quality when someone is present during the daytime. In this paper, we introduce a real-time link quality estimation method and a human detection method using RSSI volatility. There are already several studies which have shown detection results using RSSI [17]. Hussain et al. [18] investigated how the variation of RSSI can be used to detect the mobility of an intruder, tested with a light sensor. Radio Tomographic Imaging (RTI) has been developed to image the attenuation caused by physical objects using RSSI in a wireless network. An Elliptical Weight Model (EWM) was introduced to describe the characteristic of RSSI attenuation and further to track and estimate the position of individuals [19]. A distributed processing of RSSI for indoor surveillance was proposed to detect and localize moving persons, and power consumption is reduced by the intrusion alert algorithm [20]. The shadowing effect between stationary wireless nodes in which the line of sight is obstructed by a human body was adopted, and RSSI variations were analyzed for human presence detection [21]. This paper describes a WSN for real-time human detection by the fluctuation of RSSI. Different from previous works, the uncertain change of RSSI at the moment a human enters or leaves the detection area is considered. The intermittent Kalman filter is applied to improve the accuracy of detection by considering the packet loss in the WSN and to smooth RSSI volatility caused by the environment or other factors. Furthermore, a real-time RSSI reference value is sought to reduce the misjudgment of human presence. The paper is organized as follows. A real-time link quality estimation method based on the RSSI value and a human detection method in the case of data loss are given in Section 2. The experimental results are presented to confirm the validity of the detection method in Section 3. Finally, we conclude in Section 4. Methodology 2.1. Real-Time Link Quality Estimation Method. The real-time wireless link considered in this paper is dynamic and time-varying. If the PRR is calculated according to the success rate of each data reception, the error is large and cannot estimate the link quality. This paper uses an exponential smoothing method [22], which is a common sequence data processing method. It takes into account the role of past data and uses the concept of weight to consider the degree of data impact. Window sliding average processing is now performed on the observed data.
The observation data sequence moves over a window that can hold 100 data points, and the observation corresponding to the window center is updated as the window moves along the data series; this is the window sliding average. In the window time T, the number of transmitted data packets is N = 100 and the number of successfully received packets is m. The PRR prediction is obtained by the exponential smoothing method: PRR(nT) = a · PRR((n − 1)T) + (1 − a) · Y(nT) (Equation (1)), where PRR(nT) is the current PRR prediction, PRR((n − 1)T) is the last prediction, and Y(nT) is the PRR measurement of the n-th window time, which is m/N with N = 100, so Y(nT) is a fraction less than 1; a is a number in [0, 1]. We can see from the model that the bigger a is, the more weight is given to the last prediction PRR((n − 1)T). On the contrary, the lower a is, the more weight is given to the most recent observation Y(nT). Since the PRR is a number less than 1, we take the third digit after the decimal point to improve the accuracy and simplify the calculation. Letting P(nT) = 1000 PRR(nT), Equation (1) can be changed to P(nT) = a · P((n − 1)T) + 1000(1 − a) · Y(nT) (Equation (2)). According to Equation (2), the value of the smoothing constant a is the key to obtaining the PRR. Now, we use the RSSI value received by the receiving node to determine the value of a. The RSSI value can reflect the link quality under certain circumstances. For example, the RSSI is generally stable at around −60 dBm in an unmanned laboratory environment, and in this case the PRR tends to be relatively high and stable. When a human moves into the network area, the RSSI fluctuates greatly and the attenuation can reach −85 dBm; the PRR tends to decrease and be unstable. So the value of a can be recalculated as a′, expressed in Equation (3), where r is the influence of the RSSI value on a; σ²(nT) is the variance of RSSI for the n-th window time; σ₀² is the variance of the measured RSSI values in the unmanned environment; RSSI_avg(nT) is the average RSSI for the n-th window time and RSSI_avg is the average RSSI value in the unmanned environment; and RSSI_offset is found empirically by sending a large amount of packets in the unmanned environment. When using Equation (2), the initial value P(T) should be determined first. Through analysis of a large amount of statistical data, RSSI_avg, the variance σ₀², and P(T) are set. The RSSI value is above the threshold and fluctuates around RSSI_avg in a small range in the unmanned environment, and the network link quality is relatively stable. a can be obtained from a large number of statistics in the unmanned environment by calculating the average and variance of RSSI values over the window time, and its value is very close to 1. a′ is obtained from packets in the case of interference, and its value is close to 0. 2.2. Detection Algorithm. Packet loss is observed to be a familiar phenomenon in transmissions of WSNs. Frequent undesired packet loss seriously degrades the network performance. The real-time link quality estimation method mentioned in Section 2.1 can be used to observe the PRR of each window time. The detection result is set to be invalid when the PRR at one time is low, so data dropout in transmission should be considered. In this paper, the intermittent Kalman filter algorithm [23] is used to estimate the lost RSSI values and further to realize detection.
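A minimal sketch of the window-based PRR estimator of Section 2.1 follows. It implements Equation (1) directly; since the exact functional form of a′ in Equation (3) is not reproduced here, the adaptive rule is replaced by an illustrative variance/mean test against the unmanned-environment baseline, so the function names, thresholds and baseline values are assumptions rather than the authors' implementation.

```python
import numpy as np

def window_prr(received_flags):
    """Measured PRR of one window: fraction of the N packets that arrived (Y(nT))."""
    return float(np.mean(received_flags))

def smoothed_prr(prev_prr, measured_prr, a):
    """Exponential smoothing, Eq. (1): PRR(nT) = a*PRR((n-1)T) + (1-a)*Y(nT)."""
    return a * prev_prr + (1.0 - a) * measured_prr

def adaptive_a(rssi_window, rssi_avg0, var0, a_calm=0.95, rssi_offset=5.0):
    """Illustrative stand-in for Eq. (3): trust the history (large a) when the
    window's RSSI statistics resemble the unmanned baseline, and shift weight
    to the fresh measurement (small a) when variance or mean deviate."""
    calm = (np.var(rssi_window) <= 2.0 * var0
            and abs(np.mean(rssi_window) - rssi_avg0) < rssi_offset)
    return a_calm if calm else 0.05

# toy usage with hypothetical numbers
rng = np.random.default_rng(0)
prr = 0.95                      # initial value P(T)/1000, set from calm-link statistics
rssi_avg0, var0 = -60.0, 1.0    # assumed unmanned-environment baseline
for n in range(5):
    flags = rng.random(100) < 0.9            # 100 packets per window, ~90% received
    rssi = rng.normal(-60, 1.0, size=100)    # per-packet RSSI readings
    a = adaptive_a(rssi, rssi_avg0, var0)
    prr = smoothed_prr(prr, window_prr(flags), a)
    print(f"window {n}: a = {a:.2f}, PRR = {prr:.3f}")
```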
Owing to the existence of measurement error, the measured RSSI value Z(k), (k = 0, 1, ⋯, m) fluctuates randomly within a narrow range near the actual RSSI value X(k), namely Z(k) − λ/2 ≤ X(k) ≤ Z(k) + λ/2, and the measurement precision is determined by the constant λ. Gaussian random noise is considered in this experiment and filtered by the intermittent Kalman algorithm to obtain the real RSSI value [24]. With A and H taken as unit matrices (see below), the system of RSSI equations can be described as X(k) = X(k − 1) + W(k − 1) and Z(k) = X(k) + V(k), where X(k) is the estimated value of RSSI at time k; Z(k) is the measured RSSI value; W(k − 1) is the Gaussian random noise of the system, W(k − 1) ∼ N(0, Q), where Q is the variance; and V(k) is the Gaussian random noise of measurement, V(k) ∼ N(0, R), where R is the variance. Q and R should be known, and their values have a direct influence on the algorithm. The values need to be reset with changes of the environment. In this paper, online learning is used to set the values of Q and R. The values of RSSI are adopted as the training inputs, enabling the algorithm to automatically and timely adapt to the environmental dynamics. Assume there are N arbitrary distinct training samples (x_i, y_i), i = 1, 2, ⋯, N, where X are the training inputs and Y are the training targets; we consider a loss function l(y, ŷ) = (ŷ − y)² that measures the cost of predicting ŷ when the actual answer is y. The ultimate goal is to minimize the cumulative loss suffered along its run. And there is a correlation between the past and present when the environment changes. We consider this problem as a discrete-time system; the uncertainty arises mainly because of packet dropout in the network, which leads to randomness in receiving observations. Data loss can be described by a random variable which obeys a Bernoulli distribution [25]. Firstly, a random binary variable γ_t is used to describe the observation at time t: γ_t = 1 expresses that the packet at time t is received, and γ_t = 0 expresses that the packet at time t is lost; the probability p_{γt}(1) = λ_t, with λ_t (0 ≤ λ_t ≤ 1) the arrival rate. γ_t at time t is mutually independent of γ_s at time s. The measured noise V_t can be defined as follows: when γ_t is equal to 1, the variance of V_t is R; when γ_t is 0, the variance of V_t is σ²I, and the value of σ² is arbitrary. In reality, the corresponding observation value cannot be obtained when packet loss happens, so σ → ∞. Now, we use a virtual observation to replace the actual value that has been lost with the given variance σ², and let σ → ∞. Then, the Kalman filter recursion follows the usual prediction and update steps, where X̂_{t|t} is the estimated value of RSSI, A is a unit matrix, and H is a unit matrix. If σ → ∞, then Equations (8) and (9) are equivalent to two simpler update equations, with Kalman gain K_{t+1} = P_{t+1|t} Hᵀ (H P_{t+1|t} Hᵀ + R)⁻¹, which is the same as in the traditional Kalman algorithm. To calculate K(t), t = 1, 2, ⋯, P(0 | 0) should be known first: P(0 | 0) = E{[X(0) − X̂(0 | 0)][X(0) − X̂(0 | 0)]ᵀ}. Actually, λ_t = 1 is a kind of ideal state. When λ_t = 0, Equation (8) degrades into the traditional Kalman algorithm. Data loss means we cannot obtain the current RSSI value; for example, the average RSSI value is about −60 dBm in the unmanned laboratory environment, and the RSSI value will be extracted as 0 dBm if data loss happens at that time. Therefore, the intermittent Kalman filter algorithm is adopted to avoid the error caused by packet loss. The effect of intermittent Kalman filtering when packet loss happens during transmission is shown in Figure 1.
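A runnable sketch of the intermittent Kalman filter in the scalar case used here (A = H = 1) is given below; it follows the σ → ∞ limit described above by simply skipping the measurement update whenever γ_t = 0. The noise variances, the placeholder value for lost packets and the toy data are assumptions for illustration, not the calibrated parameters of the experiment.

```python
import numpy as np

def intermittent_kalman(z, gamma, q, r, x0=None, p0=1.0):
    """Scalar intermittent Kalman filter for an RSSI trace.

    z     : measured RSSI values (lost samples may hold any placeholder, e.g. 0)
    gamma : 1 if the packet at that step arrived, 0 if it was lost
    q, r  : process and measurement noise variances (set offline or by online learning)
    With A = H = 1, the sigma -> infinity limit of the lost-packet update reduces
    to keeping the prediction unchanged whenever gamma_t = 0.
    """
    x = z[np.argmax(gamma)] if x0 is None else x0   # start from the first received sample
    p = p0
    out = np.empty(len(z))
    for t, (zt, gt) in enumerate(zip(z, gamma)):
        x_pred, p_pred = x, p + q                   # prediction step (random-walk model)
        if gt:                                      # measurement update only if received
            k = p_pred / (p_pred + r)               # K = P H^T (H P H^T + R)^-1 with H = 1
            x = x_pred + k * (zt - x_pred)
            p = (1.0 - k) * p_pred
        else:                                       # packet lost: keep the prediction
            x, p = x_pred, p_pred
        out[t] = x
    return out

# toy usage: -60 dBm baseline, ~10% packet loss, placeholder 0 dBm for lost samples
rng = np.random.default_rng(1)
true_rssi = -60 + rng.normal(0, 2.0, 250)
gamma = (rng.random(250) > 0.1).astype(int)
meas = np.where(gamma == 1, true_rssi, 0.0)
filtered = intermittent_kalman(meas, gamma, q=0.01, r=4.0)
```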
There are some data losses in the 250 packets sent by the transmitting node, and the RSSI value of a lost packet is extracted as 0 dBm. However, the RSSI curve becomes smooth after being filtered. We also find from the real data that the fluctuation can reach 15 dBm, which may neutralize the effect of the human body and further impact the experimental results. But after being filtered, the maximum RSSI fluctuation at every moment is less than 1 dBm compared with the previous moment. So the intermittent Kalman filter can to some degree remove fluctuations caused by environmental changes, the performance of sensor nodes, indoor obstacles, and measurement errors. Recent research has shown that variations of RSSI in indoor environments where sensor nodes have been deployed can reveal movements of persons [26]. The detection method proposed is mainly based on the human body interfering with RSSI by causing fading and shadowing effects. In our earlier works, we computed the average RSSI value in the nobody environment; however, we find that the average RSSI value in the laboratory may change due to many factors: the performance of WSN nodes, the temperature and humidity of the surroundings, and so on. And when lab students work at their office desks, the values can be obviously different from the nobody-environment value, regardless of whether those persons sit inside or outside the detection area. Especially, while the experiment is ongoing, some persons come in or go out of the lab even outside the detection area, which may make the average RSSI value of the nobody environment meaningless. So, we take the change of the lab environment into account in this detection experiment. Owing to the existence of multipath effects, scattering, reflection, obstacles, and other interference factors in the lab, a real-time RSSI reference should be set as shown in Algorithm 1 (RSSI preprocessing): given the threshold RSSI_offset, the last reference value, and the current value, the current value replaces the reference only when their difference exceeds RSSI_offset; otherwise the last reference is kept. RSSI fluctuates within a narrow range in an unmanned environment, and RSSI_offset can be found empirically by sending a large amount of packets. When the current RSSI value is obtained, we compare it with the last RSSI value. If the difference is greater than RSSI_offset, we set the current RSSI as the reference. Otherwise, the last value is still the RSSI reference. We can see from Figure 2 that a person enters the network area at the beginning, and the RSSI value obviously decreases by up to about 8 dBm. The person leaves at about the 30th data number, and the RSSI value rises to −65 dBm at that moment but then settles at a stable level of −68 dBm. A human comes into the area near the 50th number; however, there is a sharp increase rather than a decrease of the RSSI value. Once the person keeps still in the detection area, the RSSI value generally remains at −73 dBm, which shows that a human body can cause RSSI attenuation. The phenomenon also happens near the 180th and 235th numbers. So, we conclude that at the moment when the human is moving into or leaving the network area, the RSSI is unstable and we cannot be sure that the RSSI is decreasing. However, there is a definite attenuation when the person stays quietly in the area. Either way, whether the RSSI value increases or decreases, we can observe the moving person by the increased variance. The whole process of the human detection method is shown in Algorithm 2.
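The following sketch combines the two steps just described: the Algorithm 1 reference update and the variance-based presence check of Algorithm 2. The window length, RSSI_offset and variance threshold are illustrative values, not the calibrated parameters of the experiment.

```python
import numpy as np

def preprocess(rssi_stream, rssi_offset=3.0):
    """Algorithm 1 sketch: keep the previous reference unless the new reading
    deviates from it by more than RSSI_offset."""
    ref = rssi_stream[0]
    out = []
    for v in rssi_stream:
        if abs(v - ref) > rssi_offset:
            ref = v
        out.append(ref)
    return np.asarray(out)

def detect_presence(rssi_stream, window=20, rssi_offset=3.0, var_threshold=1.0):
    """Flag sliding windows whose variance of the preprocessed RSSI exceeds a
    threshold, marking moments when someone enters or leaves the area."""
    x = preprocess(rssi_stream, rssi_offset)
    return np.array([np.var(x[i:i + window]) > var_threshold
                     for i in range(len(x) - window + 1)])
```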
RSSI values are filtered by the intermittent Kalman algorithm at the beginning, and then the value of RSSI_offset can be obtained from the experimental data. We get the current RSSI, process it by the RSSI preprocessing algorithm, and calculate the variances of the processed RSSI values. Then, we can judge whether a human appears or not by the calculated variance. Experimental Results This section presents the experimental results obtained with the method described in the previous parts. A WSN consists of a large number of small sensor nodes with sensing, processing, and communicating abilities. Sensor nodes communicate with the coordinator directly in a typical WSN network. The communication speed of the WSN nodes is 2.4 Kbps, and the 2.4 GHz band is used for the tests. About 24 bytes are sent every second, and the transmit power is set to 0 dBm. Figure 3 shows photos of the WSN coordinator (a) and sensor node (b). The testing area is 3 m × 3 m, with the layout of nodes as shown in Figure 4. In order to highlight the effect of the human body, we do the following work before the start of the experiment. The validity of the real-time network link quality evaluation method is verified through an experiment. 1000 packets are used, and each 100 packets are divided into a group. The statistical parameters of each group are shown in Table 1. Comparing with RSSI_avg and σ₀², the value of a′ is obtained from Equation (3), and then the PRR is calculated for each window according to Equation (2). We can see from Table 1 that the obtained PRR(nT) is dynamic and differs considerably from window to window. When a human body is present, the network is shown to be unstable and the PRR tends to decrease. Then, the weight of real-time measured values should be increased in the estimation method, which shows the real-time performance. The PRR of group 6 and group 8 are 43.73% and 65.9%, respectively, which means a substantial amount of dropped packets. If a person goes into the network area during this time, it cannot be detected. So real-time monitoring of link quality can effectively guarantee the accuracy of human detection. This method can monitor network link quality in real time, and its validity and feasibility are proved by an experiment. Real-time estimation of link quality is very important in a human body detection experiment. If the PRR is too low in a certain period of time, the RSSI value cannot be obtained because of packet loss, which directly affects the accuracy of the detection experiment. PRR can be calculated by this method in real time and used for detection. The test scenario is the following: no humans are present in the lab at first; therefore, no detection is reported. Once a human subject enters the network area, he walks through it 7 times. We can see seven relatively large fluctuations in Figure 5. During the test, 250 packets have been received by the sensor nodes. Figure 6 shows the result of one detection test: when a human goes into the network area, the RSSI experiences a rapid change. There are seven separate parts with variance greater than 0 in Figure 6, and each part indicates that someone is entering or getting out of the detection area. So the detection method proposed in Section 2 can detect a moving human effectively and accurately. Mainly because of the human body's absorption and reflection effects, the variations of RSSI are quite satisfactory for accurate human presence detection.
The second experiment is performed to obtain the detection accuracy. We first do tests 100 times, changing the location of the transmit and receive nodes, and the detection accuracy of the proposed method is around 95% in total. Then, the number of sensor nodes is changed, and we do tests 50 times; the accuracy is around 92% in total. The detection accuracy is high; human presence can be detected no matter how close the person is to the sending or receiving node. And each pair of nodes has its own detectable area. The solution to improve the accuracy in a larger detectable area is to install additional nodes. Conclusions In this paper, a device-free human body detection method based on RSSI is presented. In view of the common phenomenon of packet loss in WSN transmission, a real-time network link quality estimation method is proposed, which can not only overcome the delay caused by statistical PRR but also consider RSSI vulnerability to environmental impact. And the intermittent Kalman filter algorithm is adopted to filter the RSSI data, which improves the accuracy of detection. Compared with other conventional detection methods, the human body does not need to carry any equipment, and the sensor nodes do not need additional external sensors. So this method is easy to implement and low cost. Of course, some flaws still exist in our experimental process. The experiments mentioned above were operated in a spacious and obstacle-free room; the experimental results reflect just one idealized application environment. For more practical consideration, we should conduct experiments in heavily obstructed environments, where the attenuation of the radio signal is usually strong. This decreases the transmitting range of the nodes and the packet delivery ratio. A future work will try to find solutions to improve the detection accuracy of the system by further improving the detection algorithm. Multiple sensor nodes will be adopted to accurately detect and track the movements of a person in a monitored area. In a critical three-dimensional indoor environment, more than a single person should be detected correctly inside the monitored area of the system. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
2020-09-27T15:06:20.324Z
2020-09-21T00:00:00.000
{ "year": 2020, "sha1": "1adb4bc5fc44bdc91ae2c32405fdf41d13d8de08", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/js/2020/4758103.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1adb4bc5fc44bdc91ae2c32405fdf41d13d8de08", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
214667608
pes2o/s2orc
v3-fos-license
Experimental realization of spin-tensor momentum coupling in ultracold Fermi gases We experimentally realize the spin-tensor momentum coupling (STMC) using the three ground Zeeman states coupled by three Raman laser beams in ultracold atomic system of $^{40}$K Fermi atoms. This new type of STMC consists of two bright-state bands as a regular spin-orbit coupled spin-1/2 system and one dark-state middle band. Using radio-frequency spin-injection spectroscopy, we investigate the energy band of STMC. It is demonstrated that the middle state is a dark state in the STMC system. The realized energy band of STMC may open the door for further exploring exotic quantum matters. Ultracold atomic gases provide a versatile platform for exploring many interesting quantum phenomena [1][2][3][4], which give insights into systems that are difficult to realize in solid state systems [5][6][7], and especially study quantum matter in the presence of a variety of gauge fields [8][9][10][11][12][13]. A prominent example is the spin-orbit coupling (SOC), which is responsible for fascinating phenomena such as topological insulators and superconductors [6,7], quantum spin Hall effect [14]. The synthetic one-dimensional (1D) SOC generated by a Raman transition has been implemented experimentally for bosonic [15] and fermionic [16,17] atoms. The 1D SOC has also been realized with lanthanide and alkali earth atoms [18][19][20]. Recently, the experimental realizations of two-dimensional SOC have been respectively reported in ultracold Fermi gases of 40 K [21,22] using a tripod scheme in a continuum space and Bose-Einstein condensate (BEC) of 87 Rb [23] using a scheme called optical Raman lattice in two-dimensional Brillouin zone, where the Dirac point and nontrivial band topology are observed. All of these proposed and realized various types focus on spin-vector momentum coupling for both spin 1/2 and 1 [15][16][17][21][22][23][24][25][26][27], while high-order spin-tensors naturally exist in a high-spin (larger or equal to 1) system. A theoretical scheme for realizing spin-tensor momentum coupling (STMC) of spin-1 atoms has been proposed recently and some interesting phenomena were predicted [28]. Here, STMC consists of two bright-state bands as a regular spin-orbit coupled spin-1/2 system and one dark-state middle band. The middle-band minimum is close to that of two bright-states, so significantly modifies the state density of system ground state. This effect combining with interaction can offer a possible way to generate a new type of dynamical stripe states with high visibility and long tunable periods [28], so can bring the advantage for the direct experimental observation. Furthermore, the more complex spin-tensor momentum coupling [29], such as, can induce different types of triply degenerate points connected by intriguing Fermi arcs at sur-faces. Therefore the STMC changes the band structure dramatically, and leads to many interesting many-body physics in the presence of interactions between atoms. In this paper, we experimentally realize this new type of STMC with two bright-state bands and one dark-state middle band in spin-1 ultracold Fermi gases based on the scheme in Ref. [28]. Dark states in quantum optics [30] and atom optics [31] are well studied and have led to electromagnetically induced transparency (EIT) [32,33], stimulated Raman adiabatic passage (STIRAP) [34] and subrecoil cooling scheme such as velocity selective coherent population trapping (VSCPT) [35]. 
Dark states are superpositions of internal atomic ground states which are decoupled from the driving fields and have no energy shift induced by the coupling. In contrast, bright states have an energy shift depending on the coupling strength. For example, considering a lambda atomic system (two ground states and one excited state) coupled with a pair of near-resonant fields, the excitation amplitudes of the different ground states to the same excited state destructively interfere to generate a dark state. Thus when an atom is populated in such a dark state, it remains unexcited and cannot fluoresce. In this paper, we study STMC with the bright and dark states in the continuous momentum space. The realization of STMC in ultracold Fermi gases of 40 K atoms is illustrated in Fig. 1(a), which is similar to the scheme of Ref. [28]. We choose three magnetic sublevels |↑⟩ = |F = 9/2, m_F = 1/2⟩ (|9/2, 1/2⟩), |0⟩ = |9/2, −1/2⟩, and |↓⟩ = |9/2, −3/2⟩ of the F = 9/2 hyperfine level of the 40 K atomic electronic ground state as the three internal spin states, where F denotes the total spin and m_F is the magnetic quantum number. The three spin states are coupled by three Raman laser beams to generate STMC as shown in Fig. 1(a) and (b). Here, two of the laser beams, 1 and 3, and the third laser, 2, propagate in opposite directions along the x direction. Therefore, the three lasers induce two Raman transitions between the hyperfine spin state |0⟩ and the |↑⟩ (|↓⟩) state with coupling strength Ω_ij, both of which have the same recoil momentum 2ℏk_r along the x direction. The two Raman couplings flip atoms from |0⟩ to the |↑⟩ (|↓⟩) spin states and simultaneously impart momentum 2ℏk_r via the two-photon Raman process. However, the two spin states |↑⟩ and |↓⟩ are not coupled via the Raman process because ∆m_F > 1, as shown in Fig. 1(b). The single-particle motion along the x direction can be expressed as the STMC Hamiltonian of Eq. (1). Here, δ is the two-photon Raman detuning, k_r is the single-photon recoil momentum of the Raman lasers, Ω_ij is the coupling strength between the states |i⟩ and |j⟩ [37], and ℏ is the reduced Planck constant. In order to eliminate the space dependence of the off-diagonal Raman coupling terms in the original Hamiltonian, one can apply a unitary transformation to obtain the effective Hamiltonian. Here we set Ω_12 = Ω_23 = Ω, and p_x indicates the quasimomentum along the x direction. A spin-1 system is spanned by nine basis operators, which include the identity operator (I), the three vector spin operators (F_x, F_y, and F_z) and the five spin quadrupole operators [36]. The operators F_x and F_z can be written in the standard spin-1 matrix form. The term p_x F_z² describes the one-dimensional coupling between a spin tensor and the linear momentum (i.e., the spin-tensor momentum coupling). We define the recoil momentum ℏk_r = 2πℏ/λ and recoil energy E_r = (ℏk_r)²/2m = ℏΩ_0 = h × 8.45 kHz as the natural momentum and energy units, where m is the atomic mass of 40 K, and λ = 768.85 nm is the wavelength of the Raman laser. The three dressed eigenstates of Eq. 3 are expressed in the spin-1 basis (|↑⟩, |0⟩, |↓⟩). The |α⟩ and |β⟩ are the lowest and highest energy dressed states, respectively, and |γ⟩ is the middle-energy dressed state. We define the spin components |0⟩ and |±⟩ = (1/√2)(|↑⟩ ± |↓⟩). The middle state |γ⟩ corresponds to the spin dressed component |−⟩. For the single-particle energy band structure, the lowest and highest bands of STMC are the bright dressed states, which are composed of the three spin components |0⟩, |↑⟩ and |↓⟩, and the amplitudes of the three spin components depend on Ω and δ.
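For reference, the standard spin-1 representation of the operators mentioned above, written in the basis (|↑⟩, |0⟩, |↓⟩), is:

```latex
F_z = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}, \qquad
F_x = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \qquad
F_z^2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
```

Because F_z² weights |↑⟩ and |↓⟩ equally and leaves |0⟩ untouched, a term proportional to p_x F_z² shifts the momenta of the two outer spin states identically, which is what distinguishes this spin-tensor coupling from the usual spin-vector (F_z) case.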
The energy shift of the lowest and highest bands of STMC depends on the coupling strength, as shown in Fig. 1(c1) and (c2). The highest band of STMC moves to higher energy and the lowest band to lower energy as the coupling strength increases while the detuning δ is fixed. The lowest and highest bands behave as a regular spin-orbit coupled spin-1/2 system. However, the middle state (|γ⟩) is independent of Ω and δ, as seen from Eq. (7). So the middle state |γ⟩ has no energy shift from the Raman coupling and is therefore called a dark state. The dark-state band plays an important role in both the ground-state and dynamical properties of interacting BECs with SOC, as described in Ref. [28]. We start with quantum degenerate gases of 40 K atoms in the spin state |9/2, 9/2⟩, produced by sympathetic evaporative cooling to 1.5 µK with 87 Rb atoms in the spin state |2, 2⟩ in the quadrupole-Ioffe configuration (QUIC) trap, and then transport them into the center of the glass cell in favor of optical access, as used in previous experiments [37,38]. Subsequently, we typically obtain a degenerate Fermi gas of ∼ 4 × 10^6 40 K atoms in the lowest hyperfine Zeeman state |9/2, 9/2⟩ by gradually decreasing the depth of the optical trap. Finally, we obtain ultracold Fermi gases with a temperature around 0.3 T_F, where the Fermi temperature is defined by T_F = ℏω̄(6N)^{1/3}/k_B. Here ω̄ = (ω_x ω_y ω_z)^{1/3} ≈ 2π × 80 Hz is the geometric mean of the optical trap frequency for the 40 K degenerate Fermi gas in our experiment, N is the particle number of 40 K atoms, and k_B is the Boltzmann constant. After the evaporation, the remaining 87 Rb atoms are removed by shining a resonant laser beam pulse (780 nm) for 0.03 ms without heating and losing 40 K atoms. Afterwards, the ultracold Fermi gases in the |9/2, 9/2⟩ state in the optical dipole trap are transferred into the spin state |9/2, 3/2⟩ using a rapid adiabatic passage induced by a rf field with a duration of 80 ms at B ≈ 19.6 G, to avoid atomic loss due to other Feshbach resonances [39,40]; the center frequency of the rf field is 6.17 MHz and the scanning width is 0.4 MHz. Three laser beams at 768.85 nm are used as the Raman lasers to generate the STMC along x; they are extracted from a continuous-wave Ti:sapphire single-frequency laser. The Raman beams 1 and 2 are frequency-shifted around 74.896 MHz and 122 MHz by two single-pass acousto-optic modulators (AOM), respectively. Raman beam 3 is double-pass frequency-shifted around 166.15 MHz by an AOM. Afterwards, Raman beams 1 and 3 are coupled with the same polarization into one polarization-maintaining single-mode fiber, and Raman beam 2 is sent to a second single-mode fiber to increase the stability of the beam pointing and the quality of the beam profile. The two Raman lasers 1 and 3 from the first fiber and Raman laser 2 from the second fiber counterpropagate along the x axis and are focused at the position of the atomic cloud with 1/e² radii of 200 µm, larger than the 43 µm Fermi radius of the degenerate Fermi gas [41], as shown in Fig. 1(a). The two Raman lasers 1 and 3 and Raman laser 2 are linearly polarized along the z and y directions, respectively, corresponding to driving π and σ transitions relative to the quantization axis z shown in Fig. 1(a). A homogeneous magnetic bias field B_exp is applied along the z axis (gravity direction) by a pair of quadrupole coils described in Ref. [21], which generates Zeeman splitting of the ground hyperfine state.
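A minimal numerical check of the dark-state argument is sketched below, assuming only the coupling structure stated in the text: equal Raman coupling between |0⟩ and each of |↑⟩, |↓⟩, no direct |↑⟩–|↓⟩ coupling, and identical detuning for |↑⟩ and |↓⟩; the kinetic terms are omitted since p_x F_z² treats |↑⟩ and |↓⟩ identically. The matrix used here is an assumed illustration, not the paper's Eq. (3).

```python
import numpy as np

def coupling_block(omega, delta):
    """3x3 matrix in the (|up>, |0>, |down>) basis: Raman coupling omega/2 links
    |0> to |up> and to |down>, both of which carry the same detuning delta.
    (Assumed structure for illustration; kinetic terms left out.)"""
    return np.array([[delta, omega / 2, 0.0],
                     [omega / 2, 0.0, omega / 2],
                     [0.0, omega / 2, delta]])

dark = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)   # (|up> - |down>)/sqrt(2)

for omega in (0.0, 2.5, 3.0):
    h = coupling_block(omega, delta=0.7)
    # H|dark> = delta*|dark>: the dark state remains an eigenstate and its energy
    # picks up no dependence on the Raman coupling strength omega.
    print(omega, np.allclose(h @ dark, 0.7 * dark))
```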
We ramp the magneticfield to an expected field B exp = 160 G during 30 ms, and increase the intensity of the three Raman laser beams to desired value in 20 ms to generate the STMC in three sublevels |9/2, 1/2 , |9/2, −1/2 , and |9/2, −3/2 of ultracold Fermi gases. Here, we employ the spin injection spectrum to measure the energy band structure. So we prepare STMC as the final empty state and the other state |9/2, 3/2 as the initial state. A Gaussian shape pulse of the rf field is applied for 450 µs to drive atoms from |9/2, 3/2 to the final empty state with STMC [16,17,21]. Following the spin injection process, the Raman lasers, the optical trap and the magnetic field are switched off abruptly, and a magnetic field gradient is applied in the first 10 ms during the first free expansion, which creates a spatial separation of different Zeeman states due to Stern-Gerlach effect. At last, the atoms are imaged along the z direction after total 12 ms free expansion, which gives the momentum distribution for each spin component. By counting the number of atoms in the expected state as a function of the momentum and rf frequency from the absorption image, the energy-band structure can be obtained. First, we measure energy-band structure of standard 1D SOC as shown in Fig. 2 which is similar as that reported in Ref [17]. We prepare the atoms in the free spin state |3/2, 9/2 , then switch on two raman lasers to generate the 1D SOC system with two spin states |↑ and |0 . Using rf spin injection, we get the energy-band structure of 1D SOC, which is agree with the theoretical calculation well as shown in Fig. 2(c). Now we study STMC and illustrate the middle state |γ as a dark state in the STMC system. We prepare the ultracold atomic sample in the free state |9/2, 3/2 with fixed magnetic field, then switch on three Raman laser to generate STMC system. Afterwards, we use rf spin injection from free state to empty STMC system as shown in Fig. 3(a). We obtain the energy spectrum of STMC system as shown in Fig. 3. Here, the detuning δ=0, and the Raman coupling strength Ω=2.5Ω 0 (3Ω 0 ) shown in Fig. 3(c) ((d)). The three spin components appear in TOF images (For example in Fig. 3(b)) simultaneously when RF field drive atoms into the lowest and highest bands. The color depth contains the amplitude information of three spin components for the lowest and highest bands in Fig. 3(c) and (d). The highest band of STMC moving to higher energy and the lowest band to lower energy as the coupling strength increases are shown in Fig. 3(c) and (d). It illustrates that the lowest and highest bands are the bright dressed states and behave as a regular spin-orbit coupled spin-1/2 system. However, the middle dressed state |γ only includes two spin components |↑ and |↓ (the spin dressed state |− ). Therefore we only observe the two spin components |↑ and |↓ in the middle band from RF spectrum. Especially, almost no atoms in |0 spin component are populated in the middle band as shown in Fig. 3(c2) and (d2). Moreover, Fig. 3(c) and (d) shows that the middle state |γ is always a dark state without energy shift and decouples from the Raman strength Ω. We also employ another RF spectrum method to measure energy-band structure of STMC state [16]. Here, atoms are prepared in STMC as the initial state and the state |9/2, 3/2 as the final empty state. We first prepare the ultracold atomic sample in the state |9/2, 1/2 at first, then ramp on three Raman lasers with 5 ms to prepare Fermi atoms into STMC state in equilibrium. 
Then we apply rf pulse to drive the atoms from STMC state into free state |9/2, 3/2 as shown in Fig. 4(a). We also get the energy spectrum of STMC system as shown in Fig. 4(b) and (c). Here, the Raman coupling strength Ω=2.5Ω 0 , and the detuning δ=0. For rf spectrum of STMC, the populated range into three bands of STMC by ramping on three Raman lasers is determined by the temperature of Fermi gases. The higher temperature of Fermi gases will make the momentum distribution broader, which will enlarge the measure range of energy band with compromising the signal-noise ratio of RF spectroscopy. In conclusion, we have realized a scheme for generat-ing STMC system in ultracold Fermi gases, and demonstrate the coupling between the spin tensors of atoms and their linear momenta. We measure and get the energy band structure of STMC via rf spin-injection spectrum. From rf spin-injection spectrum, we demonstrated that the middle state |γ in the STMC system is a dark state. In this work, the dark-state band is not coupled with two bright-state bands through Raman coupling only for the single-particle picture. The dark-state band will couple with two bright-state bands in the presence of interactions between atoms. Thus, if we prepare atoms initially in the dark-state band, atoms will decay into the bright band due to interaction. Moreover, forming the darkstate band requires that the Raman detuning δ for |↑ and |↓ are exactly same. Otherwise, the dark-state band will change into the bright band. The experimental results may motivate more theoretical and experimental research of other novel topological matter, such as study super-solid like stripe orders due to the existence of the dark middle band, and may give rise to nontrivial topological matter with unprecedented properties in future. We would like to thank Chuanwei Zhang for helpful discussions. (c) Reconstructed single particle dispersion and atom population when transferring atoms from the STMC ultracold Fermi gases system to free spin state |9/2, 3/2 for Ω=2.5ER and the detuning δ=0ER.
2020-03-27T01:00:41.041Z
2020-03-26T00:00:00.000
{ "year": 2020, "sha1": "259bfc7dedd3ed1d53535d661b807fc8a9f5da2e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2003.11829", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7ca487294cb9fd902f554b255b1597a0a493c33d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
16950621
pes2o/s2orc
v3-fos-license
Sediment records of highly variable mercury inputs to mountain lakes in Patagonia during the past millennium S. Ribeiro Guevara, M. Meili, A. Rizzo, R. Daga, and M. Arribére Laboratorio de Análisis por Activación Neutrónica, Comisión Nacional de Energía Atómica, Centro Atómico Bariloche, 8400 Bariloche, Argentina; Department of Applied Environmental Science, Stockholm University, 10691 Stockholm, Sweden; Consejo Nacional de Investigaciones Científicas y Técnicas, Rivadavia 1917, Ciudad de Buenos Aires, Argentina; Instituto Balseiro, Universidad Nacional de Cuyo, 8400 Bariloche, Argentina Introduction Even though aquatic ecosystems are globally exposed to mercury (Hg) by atmospheric inputs of increasing concern, few studies have been focusing on the sources, fate and history of freshwater systems of the Southern Hemisphere that are free from major contamination (Downs et al., 1998; Lamborg et al., 2002; Biester et al., 2007). Here, we used sediment profiles as historical archives to reveal changes in the Hg cycling in two lakes of the southern Andes over the past centuries. Although no relevant point sources of Hg from mining or industrial activities have been identified in the study region, high Hg levels in various ecosystem compartments have been reported, notably in both native and introduced fish species, where levels ranged from 0.06 to 4 µg g−1 dry weight (DW) in liver, and from 0.07 to 2.5 µg g−1 DW in muscle (Arribére et al., 2008), whereas Hg concentrations in lichens and mussels, used as air and water bioindicators, respectively, reached values compatible with locations exposed to moderate contamination (Ribeiro Guevara et al., 2004a, b), suggesting that the anomaly is not limited to aquatic systems. The western part of the Park receives high precipitation, reaching 3000 mm y−1. [...] 1994), including areas of geothermal and volcanic activity, which are considered the foremost natural source of Hg (Nakagawa, 1999; Ferrara et al., 2000; Tomiyasu et al., 2000). Cataclysmic volcanoes have the potential to inject enough volatile Hg into the stratosphere to change the global and regional cycle of Hg for a few years, while quiescent degassing and moderate eruptions exhale directly into the troposphere and can have long-term effects also on the local environments (Langway et al., 1995). Geothermal activity has been associated with high Hg levels in soils and air at several places (Siegel and Siegel, 1975; Weissberg and Rohde, 1978; Varenkamp and Buseck, 1986). Volcanogenic Hg can readily enter the aquatic food chain after being released, enlarging bio-available stocks (Nriagu and Becker, 2003). Volcanic activity is a potential source to be considered in the present work since the lakes under study are enclosed in the Southern Volcanic Zone (SVZ), including several volcanoes active during the Holocene. Forest fires drastically reduce the pool of Hg in catchment soils, also releasing biomass inventories, through elemental Hg volatilization to the atmosphere (Friedli et al., 2003; Sigler et al., 2003; Amirbahman et al., 2004; Harden et al., 2004), potentially enlarging the sediment Hg burden after transport and wet or dry deposition.
An up to 6-fold increase in Hg concentrations in sediments of Caballo Reservoir, New Mexico, USA, was observed after a forest fire and storm runoff, revealing that the combination of both phenomena enhanced the transport of Hg from the watershed to the water body (Caldwell et al., 2000), another pathway that may contribute to increasing Hg contents in sediments after fires. Kelly et al. (2006) also observed in Lake Moab, Jasper National Park, Canada, that post-fire runoff mobilized a large short-term pulse of Hg. In an earlier screening study on lake sedimentary sequences in the region (Ribeiro Guevara et al., 2005), upper layers, associated with 20th century accumulation periods, showed in most cases concentrations elevated above background levels, reaching [...] with additional methods to analyze also other elements and environmental tracers. Study site The Nahuel Huapi National Park is situated in northern Patagonia, on the eastern slope of the southern Andes (40°20′ to 41°40′ S, 71° to 72° W; Fig. 1) and is the largest protected natural area of Argentina, covering approximately 7100 km² and comprising a drainage basin that includes three major river systems, thirteen lakes of more than 10 km², and several hundred small lakes and ponds. Within the Park's limits there are pristine as well as moderately impacted areas, such as the city and suburbs of San Carlos de Bariloche, with a population of circa 120 000 people. Its economy, as well as that of other small towns and villages in the Park, is largely based on tourism. The Park is located in the Northern Patagonian Andes (39° to 45° S), a region that is part of the Southern Volcanic Zone (SVZ). The SVZ includes at least 60 historically and potentially active volcanic edifices in Chile and Argentina, as well as three giant silicic caldera systems and numerous minor eruptive centers (Stern, 2004). The northern Patagonian segments of the volcanic arc include several centers active from the Miocene to the present, among others Villarrica, Nilahue, Puyehue-Cordón Caulle, Cerro Puntiagudo, Osorno, and Calbuco, with several events registered in historical records since colonisation (Ramos, 1999; Stern, 2004). An analysis of volcanic ash records in short lacustrine sedimentary sequences from the region showed up to 9 tephra layers deposited in the past 1000 yr (Daga et al., 2008). Two sedimentary sequences were extracted from Lake Tonček and Lake Moreno Oeste (Fig. 1). Lake Moreno Oeste is the western branch of Lake Moreno (41°5′ S; 71°33′ W, 758 m a.s.l.), draining into Lake Nahuel Huapi. Lake Moreno Oeste has a surface area of 6 km² and a maximum depth of 90 m, and is an ultraoligotrophic, warm monomictic system stratified from late spring to early autumn (Queimaliños et al., 1999; Díaz et al., 2007). The lake is surrounded mostly by closed native forest dominated by Nothofagus dombeyi and lesser amounts of Austrocedrus chilensis. This environment has persisted, with variations in the relative composition, during the last millennium (Whitlock et al., 2006). The sampling point selected is Llao Llao bay, a sub-basin with a rather flat bottom at 20 m depth, without tributaries. Lake Tonček (41°12′ S; 71°29′ W, 1750 m a.s.l.)
is a small lake with a surface of 0.03 km² and a maximum depth of 12 m, of glacial origin, situated on Catedral mountain approximately 16 km to the southeast of Lake Moreno Oeste, at the foot of high peaks with steep slopes. It is an ultraoligotrophic, dimictic system, with direct stratification in summer and 6 to 8 months of ice cover reaching a thickness of up to 2 m. The Lake Tonček watershed is small, with an extension of approximately 2.5 km², including one smaller lake situated about 100 m higher, which is connected to Lake Tonček by a small inlet stream meandering across wetlands. Reddish coloration and a sulphydric smell in these wetlands have been reported at the end of the summer, when eutrophication processes develop, potentially impacting Hg cycling in the water body. The lake has two distinct sections: a deep central zone that is surrounded like a ring by a shallow outer zone which is 0.5 m deep and up to 30 m wide. The boundary between the two sections is a steep slope dropping to 12 m. The Lake Tonček watershed is dominated by rocky ground deposits and scattered timberline vegetation (Nothofagus pumilio "krummholz"). The water body encloses a simple trophic structure without fish, and also the community structure of zooplankton is relatively simple (Morris et al., 1995; Marinone et al., 2006). Methods Short sediment cores were extracted with a messenger-activated gravity corer from the deepest part of the lakes Moreno Oeste (Llao Llao bay) and Tonček (Fig. 1). Core lengths were 43 and 70 cm, respectively. The sediment cores were cut open longitudinally using a portable circular saw to section the tube walls, sliding afterwards a copper plate through the sediment to divide it in two semi-cylindrical sections. Both sections were sub-sampled every 1 cm. Each sub-sampled sediment layer was freeze-dried until constant weight and homogenised. Tephra layers were identified visually in the sedimentary sequence before sub-sampling, whereas they were analyzed under a binocular magnifying glass after freeze-drying. The sediment accumulation rates of the sediment sequences were determined by 210Pb and 137Cs dating techniques (Joshi and Shukla, 1991; Robbins and Herche, 1993). The 210Pb, 226Ra (in secular equilibrium with supported 210Pb), and 137Cs specific activities were measured in each layer by high-resolution gamma spectrometry. The Constant Rate of Supply model was used for 210Pb dating. Correction of the old-date error of the model was implemented by logarithmic extrapolation to infinite depth (Ribeiro Guevara et al., 2003). For 137Cs dating, the specific activity profiles were compared with the fallout sequence determined in this region, associated mainly with South Pacific nuclear tests from 1966 to 1974 (Ribeiro Guevara and Arribére). The dates for the events registered in both sedimentary sequences before 1900 were obtained by extrapolation of the sedimentation rate determined in upper layers, measuring the core depth in cumulative mass per surface unit, and discounting volcanic ashes from bulk sediments by estimating the fraction in each layer from the analysis under the binocular microscope. The organic matter content (OM) of the freeze-dried sediments was estimated as loss on ignition (LOI) at 550 °C for 4 h. [...] (>100 ng g−1 DW) was 2% in homogeneous reference samples and 4% in actual samples.
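A minimal sketch of the Constant Rate of Supply (CRS) age calculation used for the 210Pb dating is given below; it assumes the unsupported 210Pb activity (total minus the 226Ra-supported component) and the dry mass per unit area of each layer are already available. The input numbers are hypothetical and the old-date extrapolation correction mentioned above is not included.

```python
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3          # 210Pb decay constant, 1/yr

def crs_ages(unsupported_pb210, layer_mass):
    """CRS model: t(i) = (1/lambda) * ln(A(0) / A(i)), where A(i) is the
    cumulative unsupported 210Pb inventory below the bottom of layer i.

    unsupported_pb210 : activity per unit dry mass (e.g. Bq/kg), top layer first
    layer_mass        : dry mass per unit area of each layer (e.g. g/cm^2)
    Returns the age (years before coring) of the bottom of each layer.
    """
    inventory = np.asarray(unsupported_pb210) * np.asarray(layer_mass)
    total = inventory.sum()                       # A(0): whole-core inventory
    below = total - np.cumsum(inventory)          # A(i): inventory below layer i
    below = np.clip(below, 1e-12, None)           # avoid log(0) in the deepest layer
    return np.log(total / below) / LAMBDA_PB210

# hypothetical 1-cm layers with roughly exponentially decreasing unsupported 210Pb
ages = crs_ages([120, 90, 65, 45, 30, 18, 10, 5], [0.05] * 8)
```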
Total Hg was determined in bulk sediment except for tephra layers, where the <63 µm fraction was analyzed. The elemental composition of the sediment samples was determined by Instrumental Neutron Activation Analysis, as described by Daga et al. (2008). The elements measured were major elements including Al, Ca, Fe, Mg, Mn, Na, K, and Ti, the rare earth elements La, Ce, Nd, Sm, Eu, Tb, Tm, Yb, and Lu, and other relevant trace elements including Sb, As, Ba, Br, Cs, Zn, Co, Cr, Hf, Sc, Sr, Ta, Th, U, and V. The elements selected are biological and geological tracers that could provide information on environmental changes. Records of subfossil chironomid assemblages were studied in Lake Tonček sediments by picking head capsules from the sediment according to standard methods (Walker, 2001). The chironomid head capsules were mounted on microscope slides and identified using current taxonomic guides, determining the relative abundance profile of each taxon. Biogenic silica (BSi) concentration was measured in Lake Tonček sediments using the method outlined by DeMaster (1981). Sediment samples that weighed about 20 mg were leached in 1% Na2CO3 over time, and aliquots were analyzed for BSi concentrations using the reduced molybdosilicate acid colorimetric method. Weight percent of total silica was plotted versus time and the extrapolated intercept was used to calculate the BSi concentration of the sediment. Sediment sequences dating A sediment accumulation rate of 13.3 mg cm−2 y−1 (0.058 cm y−1) [...] (Fig. 2), corresponding to the 1948–1970 deposition period, and in the 1.0–1.3 g cm−2 layer of the Lake Tonček sequence (Fig. 3), corresponding to the 1953–1964 deposition period. This decrease is compatible with the Puyehue-Cordón Caulle and Calbuco volcanic events in 1960–1961 (Daga et al., 2008) causing bulk sediment dilution by volcanic ashes, which were also identified under the binocular magnifying glass. Unsupported 210Pb values in these layers were corrected before dating. It is necessary to emphasize that the dating before 1900, which is based on the assumption that there was no persistent change in sedimentation rate, is somewhat uncertain, particularly for early events. An independent dating corroboration was obtained in the Lake Moreno Oeste sequence: the tephra layer MO5 (Fig. 4) could be associated with a volcanic event in 1759, in agreement with the 210Pb and 137Cs dating extrapolation. Interestingly, the 210Pb flux is three-fold higher in Lake Tonček compared to Lake Moreno Oeste, and it is the highest measured in the region based on 10 sedimentary sequences studied in a previous work (Ribeiro Guevara et al., 2003). A positive correlation between the 210Pb flux and the OM concentration of the upper layer of these lakes was reported (Ribeiro Guevara et al., 2003); however, the Lake Tonček 210Pb flux does not fit this correlation. The relatively high 210Pb flux to Lake Tonček sediments is consistent with the assumption that, due to the characteristics of the catchment area, the sediments of this water body are a good recorder of atmospheric fallout, with relatively low retention in the catchment area. Mercury The Hg concentration profiles of the Lake Tonček and Lake Moreno Oeste, Llao Llao bay, sedimentary sequences are shown in Fig. 4, respectively.
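The Hg fluxes discussed next are simply the product of the Hg concentration of a layer and the mass accumulation rate of the corresponding period; a one-function sketch with hypothetical values and the unit conversion from ng g−1 and mg cm−2 y−1 to µg m−2 y−1 is shown below.

```python
def hg_flux_ug_m2_yr(hg_ng_per_g, mar_mg_cm2_yr):
    """Hg accumulation flux: concentration (ng/g DW) x mass accumulation rate
    (mg cm^-2 yr^-1), converted to ug m^-2 yr^-1 (conversion factor 0.01)."""
    return hg_ng_per_g * mar_mg_cm2_yr * 0.01

# hypothetical layer: ~1100 ng/g Hg at the 13.3 mg cm^-2 yr^-1 accumulation rate
print(hg_flux_ug_m2_yr(1100, 13.3))   # ~146 ug m^-2 yr^-1
```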
Hg fluxes to sediments (...)

The analysis of fossil chironomids in the Lake Tonček sequence allowed the identification of twelve taxa corresponding to the subfamilies Orthocladiinae, Tanypodinae, Podonominae and Chironominae (tribe Chironomini). The dominant taxon of the chironomid community along the sequence was the cold-stenothermic Pseudosmittia Goetghebuer (Rizzo et al., 2007). The OM contents ranged from 6 to 18% (Fig. 6). Selected major and trace element concentration profiles show different patterns (Figs. 6 and 7), rather constant for the major elements Mn and Fe and the trace elements As, Cr and Zn, and with a noticeable increase of Sb at 1 g cm⁻² depth and higher variability in the lower layers. The maximum Hg fluxes recorded in these sequences were only exceeded by the highest values in urban areas with industrial pollution (200 to 300 µg m⁻² y⁻¹; Engstrom and Swain, 1997). Even in a Hg deposition hotspot area in the USA, recent maximum values reached only 90 µg m⁻² y⁻¹, after increasing constantly from 7 µg m⁻² y⁻¹ in 1880 (Hutcheson et al., 2008). Accordingly, the sediment domains with high Hg accumulation in our lakes during pre-industrial periods (up to 150 µg m⁻² y⁻¹) must be associated with some abrupt phenomena generating Hg inputs to aquatic environments at levels corresponding to industrial pollution.

Lakes and mercury

Several watershed features influence the Hg concentration in lake sediments. Parameters of catchment morphometry, such as a large drainage area, high catchment and lakebed slopes, and large lake depths, could be associated with elevated Hg concentrations in lake sediments (Grigal, 2002; Kainz and Lucotte, 2006). Moreover, dense forest zones in the catchment area are an important source of Hg to aquatic systems (Kolka et al., 2001; Porvari et al., 2003; Driscoll et al., 2007). Therefore, according to the general characteristics of the water bodies and catchment areas, higher sediment Hg concentrations should be expected in Lake Moreno Oeste relative to Lake Tonček, which has a smaller drainage area, shallower depths, and an almost complete absence of vegetation. Particular characteristics of the Lake Tonček watershed have to be considered regarding Hg concentrations in sediments. An important part of the watershed is covered by wetlands, and such lands and their internal processes (high rates of organic matter decomposition, sulphate-reducing conditions, potential for methylation) play an important role in the Hg cycle (Goulet et al., 2007; Driscoll et al., 2007; Selvendiran et al., 2008; Larssen et al., 2008). Further, the snow cover for 6 to 8 months per year facilitates snow-to-air Hg reemission after photoreduction, which could alter the fate of Hg after atmospheric deposition, as observed in high altitude/latitude environments (Lalonde et al., 2000; Steffen et al., 2008), although photoreduction also occurs in the water column of all lakes. Also, in warmer periods, the snowmelt and summer storms can represent a significant portion of the annual water and Hg flux from the watershed (Grigal, 2002; Schuster et al., 2008). These features may explain the differences in Hg sequestration between Lake Tonček and Lake Moreno Oeste, even though a detailed evaluation of their impact is beyond the scope of the present work. Even though the lakes may differ, our sediment data show substantial and apparently synchronous changes over time.
It is remarkable that the Hg profiles in both sequences show a similar pattern regarding the domains of high pre-industrial Hg, supporting the hypothesis stated previously that external, abrupt phenomena generated substantial Hg inputs to these aquatic environments. The questions arising afterwards concern the records of environmental changes generated by these phenomena and the Hg sources. The identification of tephra layers in the sediment sequences is the most concrete evidence of an environmentally disrupting phenomenon, as well as of a potential Hg source: a volcanic eruption. Environmental changes can also be traced at the biological level.

Environmental factors

Here, the variations in chironomid communities were studied in the Lake Tonček sequence in order to identify population changes that could be associated with environmental events or Hg inputs, although direct heavy metal pollution is better recorded by morphological deformities. Changes in the chironomid assemblages were observed in the Lake Tonček sequence, some of them associated with tephra layers. The change of taxa in relative composition allowed the identification of two sections: the oldest accumulation period, corresponding to the 11th to 17th centuries, with taxa indicating a colder environment, followed by a period with a temperate environment (Rizzo et al., 2007). But no correlation was observed between the variation in the chironomid assemblages and the two domains of high Hg. Moreover, the concentration profiles of the other elements, as well as the OM and BSi contents (Figs. 6 and 7), do not reproduce the Hg pattern nor do they show any correlation. The absence of correlation of the Hg concentration with the geochemical tracers studied suggests that no direct geological process in the water body or in the watershed can be associated with the high Hg values, whereas the lack of correlation with OM and BSi does not provide any evidence of biological processes explaining the high Hg values.

Mercury sources

The other question was on potential Hg sources that could explain in particular the two older domains of high Hg identified in the Lake Tonček and Lake Moreno Oeste sequences. One of them is the occurrence of local geothermal emissions (Varenkamp and Buseck, 1986; Nakagawa, 1999). Geothermal activity is usually manifested at the surface as emerging hot waters. Although there are some geothermal systems associated with recent magmatism near this area, they are located along the Andean Range where the active volcanoes are located, about 50 km to the west (Fig. 1). Such indications have not been reported in or near Lake Tonček. There is no volcanic activity in this area that could provide the heat required to generate geothermal activity, and the geothermal energy generated from very deep heat sources is unlikely to reach 1750 m of altitude without emerging at any other site of the geological formation. On the other hand, the pattern of the Hg profile observed in Lake Moreno Oeste is similar, suggesting that concurrent phenomena generated the high Hg records in pre-industrial periods. Geothermal activity was not observed either in or near Lake Moreno. It seems unlikely that geothermal activity can be sufficiently extended to reach Lake Tonček and Lake Moreno Oeste without at the same time producing traces or reports of other geothermal manifestations. Therefore, geothermal activity is an unlikely Hg source to be considered here.
Deforestation is another potential source of Hg to aquatic systems (Porvari et al., 2003). There are no records of massive deforestation previous to the Spanish colonization in this region, other than by extended forest fires. These were a common deforestation practice both before and after the Spanish colonization (Veblen et al., 2003), but may also have occurred naturally together with volcanic events (see below). Volcanic events are a well-known source of Hg on a regional or global scale (Nriagu), and a contribution of 6% from remote volcanic events to the Hg fallout over the last 270 yr has been determined. Tephra layers are fall ash deposits recording volcanic events that, if associated with an increase in Hg concentration in the deposit or in the upper adjacent layer, give evidence of Hg releases produced by the volcanic eruption. Tephras TK1 and MO2, which correlate in time and correspond to a possible mixing of products from events at the volcanoes Calbuco and Cordón Caulle (Fig. 1) according to the geochemical characterization (Daga et al., 2008), show a significant increase of Hg in the overlying layer (Fig. 4), suggesting the occurrence of Hg gaseous emissions concurrent with the eruption impacting the aquatic systems. Moreover, the micro-tephra in Lake Tonček and tephra MO1 (Fig. 4), corresponding to a volcanic event in 1960-1961, also precede an increase in the Hg level. Tephra TK6 could be correlated in time with MO5, which shows a noticeable Hg increase in the overlying layers (Fig. 4). Tephra TK6 corresponds to a mixing layer with products from both Calbuco and Cordón Caulle events, while MO5 corresponds clearly to a Cordón Caulle eruption (Fig. 1). The upper sequence domain with high Hg concentrations shows tephras, or an overlying layer, with high Hg concentrations alternating with layers of lower values. These Hg concentrations are the highest determined in the profile. Due to the sharp variations it is not possible to determine an increase over previous levels (Fig. 4), but these high Hg concentrations could also be evidence of a volcanic source. In the lower sequence domain with high Hg, tephra layers are concurrent in Lake Moreno Oeste, but these volcanic events are not registered in Lake Tonček (Fig. 4). Nevertheless, these high Hg concentrations may be associated with gaseous emissions linked to volcanic events, since volcanic ashes can show a highly variable spatial distribution (Daga et al., 2008), while the dynamics of Hg transport could be different. Fires are a potential source of atmospheric Hg. Whitlock et al. (2006) analyzed charcoal records from Lake El Trébol in the study region, allowing a distinction of surface fires that largely burn grass and herbs, fires that burn both surface cover and woody plants in a patchy manner, and stand-destroying crown fires. At Lake El Trébol, charcoal records declined between 3300 and 2000 yr before present (BP) and returned to high values between 1500 and 500 yr BP. The last 2000 yr section of this sequence features variable fire-episode magnitudes, high fire frequency, and short fire-free intervals. Two fire episodes of high magnitude were registered in the Lake El Trébol sequence around 800 and 900 yr BP, the more recent being the highest in charcoal contents during the 10000 yr BP period studied. They are associated with a high peak of the ratio of (grass charcoal)/(total charcoal), thus representing the burning of grass and herbs.
These high charcoal records are coincident with the lower domain of high Hg observed in Lake Moreno Oeste and Lake Tonček, with a date estimated between 1200 and 1350. Forest fires release other trace elements to the atmosphere together with Hg (e.g. As, Br, Ca, Cr, Fe, Mg, Mn, Se, Ti, V, or Zn) in aerosol or gaseous form, but their imprint in lake sediment sequences depends strongly on the transport dynamics in the atmosphere, in the watershed and in the water column (Yamasoe et al., 2000; Radojevic, 2003), and no correlation between fires and trace element contents in lake sediment sequences was observed in some cases (MacDonald et al., 1991; Virkanen, 2000). Moreover, high Hg enrichment in air above background, concurrent with no significant variation in any other trace element, has been observed in association with forest fires (Anttila et al., 2008). Therefore, the lack of correlation between Hg and the other trace elements analyzed in the present work does not preclude forest fires as a potential Hg source. Extended forest fires associated with human activities indeed occurred during the 18th and 19th centuries. Native Americans affected fire regimes and the landscapes of Northern Patagonia through intentional burning for various purposes, which occasionally might have led to wildfires (Veblen et al., 2003). European settlement, starting in the region about 1850 but earlier in Chile (since the end of the 17th century), was associated with large fires for forest clearance and intensive livestock grazing (Veblen et al., 1992). From 1890 to 1920 extensive areas of wet forests were burned in the study region by European settlers, in a failed effort to convert forests to cattle pasture (Kitzberger et al., 1997). In particular, direct observation of large burns was reported in 1787 in the Lake Nahuel Huapi region, towards the south-west of the lake (Veblen et al., 2003). By analyzing tree-ring data, Kitzberger et al. (1997) determined the occurrence of an extended fire at Lake Roca (Fig. 1) in 1827. These fires may well have had a direct impact on the Lake Tonček watershed due to the predominantly westerly winds, possibly also reaching the Lake Moreno Oeste watershed (Fig. 1). These events coincide with the upper domain of high Hg in the Lake Tonček and Lake Moreno Oeste sedimentary sequences. In both periods, the fire records are concurrent with ENSO (El Niño-Southern Oscillation) events, which may enhance environmental conditions favouring extended fires. In conclusion, the correlation of both high Hg domains in the Lake Moreno Oeste and Lake Tonček sequences with records of extended fires in the region suggests that this source, as well as the volcanic activity, could have generated the high levels and variations of Hg concentrations and accumulation rates observed in these pristine lakes already in pre-industrial times.
2015-03-27T18:11:09.000Z
2009-12-02T00:00:00.000
{ "year": 2009, "sha1": "8749cd6716028bc802afabf3ad3eb3037f4f016d", "oa_license": "CCBY", "oa_url": "https://www.atmos-chem-phys.net/10/3443/2010/acp-10-3443-2010.pdf", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "004362ab3ffeb4afbdb5d45ee2e0858883e30595", "s2fieldsofstudy": [ "Environmental Science", "Geology" ], "extfieldsofstudy": [ "Geology" ] }
119636954
pes2o/s2orc
v3-fos-license
Mesoscopic description of the adiabatic piston: kinetic equations and $\mathcal H$-theorem The adiabatic piston problem is solved at the mesoscale using a Kinetic Theory approach. The problem is to determine the evolution towards equilibrium of two gases separated by a wall with only one degree of freedom (the adiabatic piston). A closed system of equations for the distribution functions of the gases conditioned to a position of the piston and the distribution function of the piston is derived from the Liouville equation, under the assumption of a generalized molecular chaos. It is shown that the resulting kinetic description has the canonical equilibrium as a steady-state solution. Moreover, the Boltzmann entropy, which includes the motion of the piston, verifies the $\mathcal H$-theorem. The results are generalized to any short-ranged repulsive potentials among particles and include the ideal gas as a limiting case. Introduction The adiabatic piston problem is an old problem in Thermodynamics and Statistical Mechanics. As described by Callen [1] many years ago, this is to predict the equilibrium state of a cylinder containing two subsystems (usually two gases) separated by a movable adiabatic piston, i.e. a wall that isolates the two subsystems when kept fixed. The same author, and others [2], stated that Thermodynamics can only provide a partial solution to the problem, namely the equality of pressures. Lieb [3] went one step further by pointing out that a Statistical Mechanics solution to the problem is in contradiction with Thermodynamics. Namely, as described by Landau and Lifshitz [4] and by Feynmann [5], the relaxation of the system, through the motion of the piston, is towards a global equilibrium, where all subsystems including the piston have the same temperature and the pressures of the gasses are equal. But this implies that the initially hotter subsystem decreases its entropy by doing work to the other subsystem, which may be in contradiction with the Second Principle [6,7,8]. According to Lieb and Yngvason [3,9,10] the adiabatic piston problem can be solved by reconsidering the principle of maximizing the entropy in the absence of constrains. Most works on the adiabatic piston use simple models, such as non-interacting particles. In this context, rigorous results can be found in the literature. If the initial condition is of a fixed piston surrounded by two ideal gases in equilibrium with the same pressures and temperatures, the dynamics of the piston occurs in several stages [11]: the initial condition becomes unstable after a short time and the piston start oscillating [12,13,14], the amplitude of the oscillations attenuates exponentially, and a final thermalization leads to the equilibration of the whole system. For more general initial conditions, where the two gases may be in different states, the evolution is in two stages [7,15,16]: a first fast evolution towards mechanical equilibrium, where the two gases have the same pressure, is followed by a much slower relaxation towards thermal equilibrium, dominated by the energy flux through the piston. Moreover, if the effect of recollisions (when a particle collide with the piston more than once) can be removed under some limits, the motion of the piston converges to an Ornstein-Uhlenbeck process [7,17,18]. See [7] for an attempt to include recollisions. The last stage of the dynamics, under the condition of mechanical equilibrium but with the temperatures of the gases still different, has attained much attention. 
As a matter of fact, the existence of this stage is closely related to the original controversy of the adiabatic piston problem. An interesting model, that keeps the system always in this last stage, considers two semiinfinite ideal gases in each sides of the piston having the same pressure and temperature difference. Even the fact that the net force to the piston is zero, the system reaches a steady state where the mean velocity of the piston is constant, different from zero, and towards the hotter gas [19,20,21,22,23,24]. When the mass of the piston goes to infinity, its mean velocity vanishes [25,26,27]. See also [28] for a kinetic description of the model in close relation with Thermodynamics. From a mesoscopic and macroscopic scales perspective, the major challenge when dealing with the adiabatic piston is to derive closed equations for the piston and the gases. This has been done rigorously [29,30] and approximately [31,32,33,34,35,36] for the case of non-interacting and similar models. For more realistic ones, such as models with hard spheres or disks, most of the existing results are numerical [37,38], or use rude approximations [39]. As in the case of non-interacting particles, the numerical simulations show an exponential relaxation of the system towards thermal equilibrium [40,41]. So far, the numerous theoretical results have not been compared against experiments. This is partly because the last stage of the evolution, actually the interesting one for the adiabatic piston problem, is too much slow to be observed in macroscopic system. As far as we know, there is only one experimental work that can be relevant for the present discussion [42]. It provides experimental measurements on the thermalization of a piston surrounded by a few hundreds of grains in a very dense configuration, close to the jamming transition. What we learn from this experiment, and also from the theory done in the dilute regime [43], is that the equilibration in the inelastic or granular case is completely different from the classic or elastic one. Specifically, when there is no energy source supplying the dissipation of the inelastic collisions, each of the gases reaches a state close to the homogeneous cooling state [44], and the "thermal" equilibration occurs when the cooling rates (or rates of energy loss) of both gases and the piston are the same. The new "equilibrium" criterion, as opposite of the temperatures being the same for the elastic case, explains the emergence of non-equilibrium phase transitions [45,46,47]. See also [43,48,49,50,50,51] for theoretical and numerical studies relevant for experimental situations. The works on the adiabatic piston in contact with granular matter suggest the need to include at least two new ingredients in the study of the (elastic) adiabatic piston. Firstly, if we are interested in making some contact with reality, a systematic derivation of a theory for the piston in contact with a finite number of particles is needed. Secondly, the theory should include interactions between particles. The aim of the present work is to propose a mesoscopic description of a realistic model of the adiabatic piston. The model considers a large but finite number of interacting particles, and reduces to the case of non-interacting particles when the diameters of the particles tend to zero. The organization of the work is as follow. The model is presented in Sec. 
2 where we also show, by computing some relevant quantities of the system at thermal equilibrium, the importance of keeping the spatial correlations between the gases and the piston. Section 3 contains the main results of the work, namely the derivation of a closed set of kinetic equations for the gases, conditioned to a position of the piston, and the piston. It is shown that the equations support the equilibrium solution of Sec. 2. As an application to the results of Sec. 3, in Sec. 4 we demonstrate the H-theorem, which implies the evolution of the system towards thermal equilibrium from any initial condition. Finally, Sec. 5 includes a discussion together with some conclusions. Model The system is a d-dimensional cylinder of length L and section S divided into two parts by a moving wall of section S, zero width, and mass m p (the piston). The normal direction of the piston remains parallel to the axis of the cylinder, taken as the X-axis. The system contains N i d−dimensional hard spheres with masses m i and diameter σ i , to the left of the piston for i = 1 and to the right of the piston for i = 2. See figure 1 for a sketch of the system. The microstate of the system is given by the set of positions and velocities as {r 1 , . . . , r N , is the total number of particles. In the latter expressions, the indexes i = 1, . . . , N 1 stand for the particles to the left of the piston and i = N 1 + 1, . . . , N for the particles to the right. The state of the system changes because of the free motion of the particles and the piston, as well as of instantaneous collisions among particles and between particles and the piston. The collisions conserve energy and linear momentum. More specifically, for two particles colliding with velocities v k and v l , their postcollisional velocities, denoted by a prime, are whereσ is a unit vector pointing from the center of particle k to the center of particle l when in contact. The collision rule for a particle with mass m i and velocity v and the piston with velocity v p is when v x > v p if the particle is on the left (i = 1) and v x < v p if the particle is on the right (i = 2). The symbol ⊥ represents the component of the velocity normal to the X-axis. An alternative characterization of the dynamics, more suitable when using the phase space representation, is provided by the hamiltonioan where V is the volume of the phase space accessible to the system (no overlaps). The main focus in this work is on a mesoscopic description of the system. First, we consider the canonical equilibrium and the probability density of the phase space ρ(R, x p ; P, p p ) and related probability densities. Later, a nonequilibrium description based on Kinetic Theory will be proposed and the distribution functions for the gases and the piston, to be defined later, will be used instead. Many of the results to be obtained at equilibrium, although of trivial derivation, will deserve as a reference and a guide for a more general out-of-equilibrium description. Canonical equilibrium If the system is big enough or if it is in contact with a thermal bath with temperature T , i.e. if the walls of the container vibrates in equilibrium with temperature T , then the probability density of the whole system is given by the canonical distribution [52] where h is the Planck constant, β ≡ 1 kB T with k B the Boltzmann constant, H is the hamiltonian of Eq. (6), and Z the partition function. The latter is difficult to compute in general, due to the exclusion effects. 
However, for N (σ 1 +σ 2 ) ≪ L, that is if we neglect the volume of particles, an approximation to be taken along the work, the canonical distribution is well approximated as where 2πmi is the thermal length associated to a mass m i , i = 1, 2, p. From the probability density we can obtain other relevant quantities, such as the probability density of a particle to the left ρ 1 , to the right ρ 2 , and of the piston ρ p . For the first one, we have where I(x; a, b) is the regularized incomplete beta function. Proceeding analogously, These expressions describe two non-homogeneous gases and a fluctuating piston, as shown in Fig. 1 for two representative cases. Only for N 1 , N 2 → ∞ (regardless the mass of the piston) are the two gases spatially homogeneous, with the position of the piston fixed (although with a fluctuating velocity). Despite the previous results, the system is spatially homogeneous at equilibrium, namely the probability of finding any particle (including the piston) in a given position is uniform. This can be seen from the density of particles, defined as for the particles on the left (i = 1), the particles on the right (i = 2), and the piston. At thermal equilibrium, each of the quantities depends on x, however it is easily seen that the global density n is spatially uniform Similar arguments allow us to show that the pressure is is N +1 LSβ along the Xaxis and N +1 LSβ along any other normal direction. The calculation can be made using the partition function (then we should change the volume by fixing S in the first case and fixing L in the second one) or by computing the net flux of linear momentum across an imaginary (d − 1)-surface. Conditional probabilities As already shown, the fluctuations of the position of the piston make the probability densities for the gases to be non-homogenous at thermal equilibrium. Interestingly, the conditional probability densities of the gases to a position of the piston, ρ i (r, p|x p ) for i = 1, 2, are spatially homogenous at equilibrium. Using the definition of the conditional probabilities, and neglecting volume exclusion effects, we have dr j dp j dp p ρ(R, x p ; P, p p ) and There are two remarkable aspects of the previous results. On the one hand, the conditional distributions characterize the equilibrium state as if the piston where fixed. On the other hand, at thermal equilibrium the piston always "feels" homogeneous gases. The latter implies that we can also compute the equilibrium probability density of the piston as if it where in equilibrium under the force F (x p ) exerted by the gases. Namely, at equilibrium, where are the change of the linear momentum of the piston due to collisions with the gases. Operating, The associated "effective" potential and is, up to an additive constant, Hence, the desired probability density of the piston is with Z p a normalization constant. The latter expression of ρ p coincides with Eq. (11). Kinetic description As shown in the previous section, even at equilibrium, there are spatial correlations between the gases and the piston. These correlations should be kept in order to have an accurate description of a finite, out-of-equilibrium system. Moreover, a easier characterization of the thermal equilibrium of the gases is possible using the conditional distributions. For these reasons, in this section we derive equations for the conditional distribution functions f i (r, v, t|x p ) for i = 1, 2 of the gases, as well as for the distribution function of the piston f p (x p , v p , t). 
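The statement that the gases look spatially inhomogeneous only because the piston position fluctuates, and that these fluctuations disappear for N1, N2 → ∞, can be checked with a short numerical sketch. It assumes ideal point particles, for which the configurational weight of a piston position x_p is proportional to x_p^N1 (L − x_p)^N2; this is a simplified illustration consistent with the equilibrium expressions above, not a calculation taken from the source.

```python
import numpy as np

L = 1.0                                    # cylinder length (arbitrary units)
x = np.linspace(1e-4, L - 1e-4, 2000)
dx = x[1] - x[0]

def piston_pdf(n1, n2):
    """Equilibrium pdf of the piston position for point particles,
    proportional to x^n1 * (L - x)^n2 (a Beta-type configurational weight)."""
    logw = n1 * np.log(x) + n2 * np.log(L - x)
    w = np.exp(logw - logw.max())          # rescale to avoid overflow at large n
    return w / (w.sum() * dx)

for n in (10, 100, 1000):                  # equal particle numbers on both sides
    pdf = piston_pdf(n, n)
    mean = (x * pdf).sum() * dx
    std = np.sqrt(((x - mean) ** 2 * pdf).sum() * dx)
    print(f"N1 = N2 = {n}:  <x_p> = {mean:.3f} L,  std = {std:.4f} L")
```

The spread decreases roughly as 1/sqrt(N), so the piston position becomes sharp in the many-particle limit while its velocity keeps fluctuating, in line with the remarks above.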
Preliminary definitions The distribution function of the gases f i (r, v, t) are defined such as f i (r, v, t)drdv is the mean number of particles on the left (i = 1) or on the right (i = 2) with position between r and r + dr and velocity between v and v + dv at time t. They are related with the probability densities ρ i (r, p, t) as The distribution function of the piston f p (x p , v p , t) is the probability density of finding the piston with position x p and velocity v p at time t, now We define the conditional distribution of the gases as where f i (r, v, x p , t) is the joint probability distribution for a particle of gas i with position r and velocity v and the piston with position x p , regardless its velocity, at time t. It is convenient to express the new probability density in terms of the more general joint probability f ip (r, v, x p , v p , t) where the velocity of the piston is also specified, The other quantity in Eq. (24) is the density of the piston n p (x p , t) already defined in Eq. (13), now given in terms of f p as Finally, it is convenient for forthcoming discussions to define the conditional probability distribution of a position and a velocity of the gases to a position and velocity of the piston as It is forth observing that f i (r, v, t|x p ) = f i (r, v, t|x p , v p ) in general, due to velocity correlations. An exception is when the system is at thermal equilibrium. Kinetic equation for the piston If we were to proceed as usual, we should write down a set of equations for the distribution functions, of the gases and the piston, from the Liouville equation. This would result in a non-closed set of equations, since higher order distribution function would appear. The difference in the present case with respect to other problems is that the conventional molecular chaos hypothesis does not work, and new approximations are needed. Take first the kinetic equation for f p (x p , v p , t), resulting form integrating the Liouville equation over all variables except the position and velocity of the system, It is nothing but a probabilistic balance equation, where the lhs accounts for the free-of-collision motion of the piston and the rhs accounts for the collisions with the gases. The latter includes the collision operators, which are functions of v p and functionals of the distribution functions. The asterisk on the velocities indicates precollisional values, for i = 1 or 2. The functional dependence of the operator can be made more clear if we replace the joint probabilities using the following relation Assumption 1. The usual molecular chaos hypothesis for precollisions is translated in this work to for precollisions. That is, we keep the correlations in the position but remove precollisional velocity correlations. Observe that the functional dependence of the collision operators are evaluated when a particle is at the position of the piston, dissregarding the diameter of the particle. Since we are interested in a mesoscopic description where the relevant spatial variation of the distribution function occur over a mean free path of the particles, the simplification is expected to be accurate for the gases dilute enough. Kinetic equations for the gases To complete the kinetic description, we need equations for the conditional probabilities f i (r, v, t|x p ). For that purpose, and according to the definition of Eq. (24), we need equations for n p (x p , t) and f ip (r, v, x p , v p , t). Integrating Eq. (28), we get where is the mean velocity of the piston. In Eq. 
(34) we have used the conservation of the number of particles in collisions, The kinetic equation for the joint probability f 1p (r, v, x p , v p , t), resulting from the Liouville equation after a proper integration, is where the collision operators take into account the different mechanisms the distribution function f 1p can change through collisions. For the operator J 1p,11 ′ the collision is between two particles on the left of the piston, for J 1p,p1 ′ the collision is between the piston and a second particle on the left, and J 1p,p2 accounts for collisions between the piston and a particle on the right. These collision terms involve three-body distribution functions. There is still one kind of collision to consier, involving a particle with position and velocity given by the argument of f 1p and the piston, when they are in contact (x = x p ) and about to collide (v x ≥ v p ). In this case, the usual procedure is to impose a boundary condition to f 1p , which is a consequence of the conservation of probability: for v x ≥ v p . The primed variables are given by the collision rule in Eqs. (1)- (2). For the gas on the right, an analogous equation and boundary condition result. Before proceeding with the derivation of the equation for f i (r, v, t|x p ), it is worth focusing on some consequences of the boundary condition in Eq. (38). If we integrate it over velocities for v x ≥ v p , after a change of variable and taking into account that dv x dv p = dv ′ x dv ′ p , we get with the mean velocity of the gases u i (r, t) (i = 1, 2) defined as For nonzero densities, the velocity of the gases and the piston in contact coincide. Integrating Eq. (37) over the velocity of the piston, we have where we have used the fact that the collision operators conserve the number of particles. Using last equation and Eq. (34) with the temporal derivative of Eq. (24), after some manipulations, we arrive at which is not a closed equation yet. Assumptions 2 and 3. In order to close the equation we include additional approximations that go beyond the standard molecular chaos hypothesis, namely and with J 11 the collision operator of two particles on the left, which is a function of v and a functional of f 1 (r, v, t|x p ). For hard-sphere collisions, the latter approximation is where b −1 is the restitution operator, which replaces the velocities of the colliding particles by their precollisional values, and Finally, we arrive at a closed equation for f 1 (r, v, t|x p ), With similar approximation, we also obtain the kinetic equation for the gas on the right of the piston, with For the boundary conditions, we have to differentiate from particles in contact with the walls of the cylinder, denoted by ∂V , and the piston. In the first case, the conservation of probability imposes for v ·n = −v ′ ·n, wheren is a unit vector normal to ∂V . Assumption 4. For the boundary conditions at the position of the piston, different approximations are possible. We take the one resulting from Eq. (38), after integration over the velocity of the piston and assuming and These approximate boundary conditions have the advantage, if compared to others, to conserve the exact relation of Eq. (39), which is of crucial importance for the conservation of particles, as seen later. Canonical equilibrium The proposed system of equations (28),(47)-(48) with boundary conditions (51)-(52) has the canonical equilibrium as a solution, given by with To demonstrate the latter assertion, take J p1 at canonical equilibrium. 
Similarly, which has expression (56) as the unique normalized solution. The considerations for the conditional probabilities are straightforward. Since J 11 = J 22 = 0 for gaussian distributions, Eqs. (47) and (48) clearly support the equilibrium solutions. Some comments and remarks The equations for the distribution function of the piston (28) and the conditional distribution functions for the gases (47)-(48) constitute one of the main results of the present work. The chose of the conditional distribution as the function to describe the gases has been motivated by the analysis of the canonical equilibrium, since there we see the need of including spatial correlations between gases and the piston, and the equation of the piston where the conditional probabilities emerge in a natural way. The derivation of the kinetic equations used several approximations, namely the molecular chaos for inter-particle and piston-particle collisions (assumption 1), and a generalization of molecular chaos for particle-piston collisions (assumptions 2-4). The molecular chaos hypothesis neglects pre-collisional velocity correlations, as usual when dealing with the Boltzmann equations. The generalized molecular chaos hypothesis goes one step further by also removing "some" post-collisional velocity correlations, as specified by Eqs. (43)- (44) and (51)- (52). The approximate expression of Eqs. (43)- (44) are expected to be correct as far as f i (r, v, t|v p , x p ) ≃ f i (r, v, t|x p ) inside the integrals. This, in turn, is expected to be true in a general situation with x not very close to x p , since collisions between particles and the piston, as well as the collisions among particles, tend to destroy the velocity correlations. When x ≃ x p , which is the case of Eqs. (51)- (52), the removal of correlations is not that obvious. In fact, the boundary conditions (51)-(52) are strictly wrong. However, if the section of the piston S is big enough S/σ d−1 i ≫ 1, we could find a regime of dilute gases where the postcollisional velocity correlations decay to zero on a distance much smaller than the mean free path of the particles, meaning that the boundary conditions become correct in the mesoscale. An exact analysis of this considerations needs a formulation with the functions f i (r, v, t|v p , x p ), which is beyond the scope of the present work. Observe that the structure of the kinetic equations for the conditional distributions of the gases is different from the one expected from naive considerations, as terms proportional to the mean velocity of the piston appear. Although the presence of these terms can be justified by means of physical considerations, i.e they are important in order to compute the work done by the piston correctly, they can compromise some key properties of the equation. Namely, for a given physical initial condition, the kinetic equations for the conditional distribution functions should provide positive and normalized solutions for all later times. Both aspects could be indirectly demonstrated from the equation of f ip and n p or directly. As an example, we show in the sequel that the kinetic equations (28)-(47)- (48) preserve the normalization of distribution functions. Consider first the equation of the density of the piston Eq. (34). Integrating over x p ∈ (0, L), using the divergence theorem, and the fact that the piston can not leave the system, n p (0, t)u p (0, t) = n p (L, t)u p (L, t) = 0, it turns out that dx p n p (x p , t) is a conserved quantity. 
Hence, the normalization of f p is ensured, provided it is initially normalized. Consider now the gas on the left. After integration of Eq. (47) over v, we face several terms: with being the conditional density; dv with the conditional velocity defined as and dv J 11 = 0. Hence, the balance equation for the conditional density is and a similar one for the conditional density of the gas on the right of the piston n 2 (r, t|x p ). Integrating over x ∈ (0, x p ) and all possible values of r ⊥ : with We have used the condition withn a unit vector normal to boundary of the cylinder ∂V , which is a direct consequence of the kinetic boundary condition (50). The quantity u 1,x is the X component of u 1 (r, t|x p ). Now, Hence, The first term on the rhs is zero, using the kinetic boundary condition at the piston (51), which, making a change of variable on the second integral, turns if n p (x p , t) = 0. Hence, the balance equation for N 1 results From the nature of this equation we infer that if the conditional distribution function for the gas on the left is normalized to N 1 for all allowed values of x p , that is N 1 (0|x p ) = N 1 , then N 1 (t|x p ) = N 1 for t ≥ 0. Similar considerations allow us to conclude that the conditional distribution of the gas on the right of the piston is also normalized to N 2 for all times. H-theorem In this section we demonstrate the H-theorem for the system of kinetic equations derived along the previous section. It states that the H function, to be defined below, mush approach a limit where the distribution functions are of thermal equilibrium given by Eqs. (53)- (55). We first prove H is a decreasing function, then that its time derivative is zero at thermal equilibrium, and finally that it is bounded from below by its value at equilibrium. Definition of H Following a recent work [53], take the H function as with ρ N the probability density of the whole system, Γ ≡ (R, x p ; P, p p ), and ρ 0i constants that make the expression dimensionless. Removing velocity correlations, as usual, it is and the H-function turns H ≃ N 1 drdx p dpdp p ρ 1 (r, p, t|x p )ρ p (x p , p p , t) ln ρ 1 (r, p, t|x p ) ρ 01 +N 2 drdx p dpdp p ρ 2 (r, p, t|x p )ρ p (x p , p p , t) ln ρ 2 (r, p, t|x p ) ρ 02 + dx p dp p ρ p (x p , p p , t) ln ρ p (x p , p p , t) ρ 0p . In terms of the distribution functions, with the new constants f 0i having the dimensions of their respective distribution functions. Proof of d and similarly Hence, Eq. (100) is where E eq is the mean energy at thermal equilibrium, with is the same as the mean energy E for any other state, since collisions conserve energy. This way, H ≥ H 0 . Discussion and conclusions In this work, we have proposed a mesoscopic description of a system made of a cylinder including a finite number of hard spheres in d dimensions divided by the adiabatic piston. We started by computing some equilibrium properties of the system as given by the canonical distribution. The analysis showed that the fluctuations of the position of the piston make the probability distribution of the gases to be spatially inhomogenous. However, a direct calculation also showed that the conditional distributions of the gases to a position of the piston are spatially homogenous, which provide a direct characterization of the global equilibrium. 
Furthermore, it can be shown that the conditional distributions of the gases to a position and a velocity of the piston depend on the position but not on the velocity, which reflects the presence of spatial correlations as well as the absence of velocity correlations at thermal equilibrium. The derivation of the kinetic equations depends on a few assumptions. As is usual in Kinetic Theory, we have used the molecular chaos hypothesis by removing velocity correlations when two particles, or a particle and the piston, are about to collide. In addition, we have also removed some velocity correlations between particles and the piston after a collision. As already discussed, this is expected to be a good approximation if the piston undergoes many collisions in a mean free time of the surrounding particles, when the gases are dilute enough. As a result of the approximations, we ended up with a closed system of equations which solves the adiabatic piston problem, that is to say, it describes the evolution of the system towards thermal equilibrium, where the two gases have the same pressure and all temperatures, including that of the piston, are equal. As a main difference with respect to other theories one may find in the literature, the present one is not restricted to the thermodynamic limit, nor to a small particle-to-piston mass ratio, and it includes short-ranged collisions between particles. Many of the results of the present work are easily generalized along different directions. Even though the kinetic description is for a system of hard spheres, more general interactions among particles can be considered by replacing the collision operators J 11 and J 22 with the appropriate ones. Moreover, for most of the collision operators conserving the number of particles, linear momentum, and energy, the H-theorem still holds. More generally, we can also include dissipation, thermostats, vibrating walls, more pistons as in [55,56], and so on. An interesting aspect not addressed in this work is to specify the conditions under which the assumptions are expected to be correct. This would need a formulation including velocity correlations, which seem to be present for small systems even at thermal equilibrium [57]. This aspect is left for future work.
2019-03-04T21:49:43.000Z
2019-03-04T00:00:00.000
{ "year": 2019, "sha1": "d25aa2f5f19f1bb3dba6e71f76f32acbcadcfe5d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1903.01557", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d25aa2f5f19f1bb3dba6e71f76f32acbcadcfe5d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
225794645
pes2o/s2orc
v3-fos-license
Heavy Oil Hydrocarbons and Kerogen Destruction of Carbonate–Siliceous Domanic Shale Rock in Sub- and Supercritical Water This paper discusses the results of the influences of subcritical (T = 320 ◦C; P = 17 MPa) and supercritical water (T = 374 ◦C; P = 24.6 MPa) on the yield and composition of oil hydrocarbons generated from carbonaceous–siliceous Domanic shale rocks with total organic content (Corg) of 7.07%. It was revealed that the treatment of the given shale rock in suband supercritical water environments resulted in the decrease of oil content due to the intensive gas formation. The content of light hydrocarbon fractions (saturated and aromatic hydrocarbons) increased at 320 ◦C from 33.98 to 39.63%, while at 374 ◦C to 48.24%. Moreover, the content of resins decreased by almost twice. Insoluble coke-like compounds such as carbene–carboids were formed due to decomposition of kerogen after supercritical water treatment. Analysis of oil hydrocarbons with FTIR method revealed a significant number of oxygen-containing compounds, which are the hydrogenolysis products of structural fragments formed after destruction of kerogen and high-molecular components of oil. The gas chromatography–mass spectroscopy (GC–MS) method was applied to present the changes in the composition of monoand dibenzothiophenes, which indicate conversion of heavy components into lighter aromatic hydrocarbons. The specific features of transforming trace elements in rock samples, asphaltenes, and carbene–carboids were observed by using the isotopic mass-spectrometry method. Introduction In recent years, the share of unconventional hydrocarbon resources from shale deposits in the structure of world oil production has been increasing [1][2][3]. Low-permeable organic-rich Bazhen formations of West Siberia and Domanic deposits of Volga-Ural petroleum province correspond to such resources in Russia [4,5]. Domanic organic-rich shales are oil-generating source rocks [6]. A specific feature of such sedimentary rocks is the coexistence of free hydrocarbons and insoluble organic matter (OM), such as kerogen. The latter is an irregular structured "geopolymer" that generates oil and contains significant total organic content in sedimentary rocks [7,8]. The obtained experiences from the study of shale rocks in the USA show that Eagle Ford Shale Play in Mexican borders is the analogue of Sargaevskian-Tournaisian carbon-rich Domanic shale rock in Tatarstan Republic (Russia) in terms of geochemical characteristics [9]. Currently, Domanic sediments have been studied only at the laboratory scale [10][11][12][13]. However, it is crucial to develop the production technology of such hydrocarbon resources in field scale. Superheated organic solvents have been effectively applied in thermal cracking and chemical conversion of a wide range of high-molecular organic matter into volatile and soluble substances [14]. In recent years, there has been considerable interest in supercritical water (SCW) that accelerates thermochemical decomposition of fossil fuels [15,16]. Relative dielectric permittivity (e) of SCW varies between 2 and 30, depending on the pressure and temperature. Thereby, SCW may behave like favorable nonpolar hexane solvent (e = 1.8) or even like polar methanol solvent (e = 32.6) with high solubility for OM [17]. However, SCW is not only able to dissolve hydrocarbons, but it can also react with OM, its destruction products, and with rock minerals [18][19][20]. 
Due to these unique properties, supercritical fluid extraction attracted much attention by many scholars and researchers [21][22][23][24]. There are many studies which deal with the conversion of heavy oils and vacuum residues in subcritical water (sub-CW) and SCW conditions. Caniaz et al. [25] studied refinery of bitumen and domestic unconventional heavy oil upgrading in SCW. They made conclusions that SCW forms layers between asphaltenes, preventing their agglomeration and slowing down the formation of coke in the conversion products. Similar conclusions were made by researchers from Imperial College London regarding heavy oil upgrading in sub-CW and SCW [26]. They identified the effect of sub-CW and SCW on the conversion of modeled polyaromatic hydrocarbons that exist in asphaltenes and other heavy oil fractions. It has been established that SCW acts mainly on central rings of polyaromatic structures, rather than peripheral ones, initiating cracking reactions. In Reference [27], the authors investigated the transformation of vacuum residue fraction in SCW at 450 • C and established a double increase in the yield of valuable gasoline and diesel fractions and a significant decrease in the coke yield, in comparison with traditional refining methods. The cracking of the vacuum residue occurs with the intensive formation of hydrocarbon gases, light aliphatic hydrocarbons, alkylbenzenes, and alkylnaphthalenes. Other researchers studied the conversion of vacuum residue in SCW medium in order to obtain light oil [28]. Studies have shown that the most favorable condition for the use of SCW in the cracking of heavy hydrocarbons is at a temperature of 420 • C, a water density of 0.15 g/cm 3 , a water-oil ratio of 2:1, and a reaction time of 1 h. In the scientific literature, there are several works on the influence of sub-CW and SCW on the conversion of kerogen into light petroleum hydrocarbons, as well as their extraction. The hydrocarbon extraction from shale rock samples and Turkish lignite showed that SCW is the most effective method for conversion of asphaltenes and kerogen [29]. Water in supercritical state is able to penetrate the kerogen and heavy oil hydrocarbons structure and break their structural skeleton, leading to the formation of light petroleum hydrocarbons. Raising reaction temperature and pressure, or increasing reaction duration, results in more intensive decomposition of kerogen and heavy hydrocarbons. Funazukuri et al. [15] studied supercritical extraction of Chinese shale oil and observed that decomposition of polar components in SCW was more favorable than in supercritical toluene. A similar comparison of SCW and supercritical toluene was carried out by Olukcu et al. [30]. They state that the conversion degree due to transformation of Beypazari oil shale in SCW was higher than in supercritical toluene. Moreover, the content of asphaltenes and polar substances was higher in the oil after supercritical toluene treatment, in contrast to the oil after SCW treatment. In general, an analysis of the current research in the field of heavy oil and kerogen conversion in sub-CW and SCW shows good prospects for this method as an alternative way to improve the quality of heavy hydrocarbons and the efficient petroleum hydrocarbons extraction from dense low-permeability Dominic rocks. 
The aim of this study was to reveal the transformation behaviors of high-molecular components of heavy oil and insoluble kerogen from carbon-rich low-permeable carbonaceous-siliceous Domanic shale rocks in sub- and supercritical water.

Experimental Procedures

The object of the study was the Domanic rock, which was taken from 1720 m depth of the Semiluki-Mendym horizon of the Chishminskaya area, which is located on the largest Romashkino field territory (Tatarstan, Russia). The given rocks are characterized by the following mineral composition: 43% quartz, 19% calcite, 19% microcline, 12% mica, and 6% dolomite [31]. According to the data obtained by using the Rock-Eval pyrolysis method, the total organic content (Corg) in the rock sample is 7.07%. It shows that the content of free light hydrocarbons in the given rock sample is very low, and a significant part of OM corresponds to heavy components and insoluble kerogen [32]. The process was carried out by using a 500 mL high-pressure reactor with a stirrer (Parr Instruments 4560, USA) for 60 min.
The processes in the reactor were carried out at sub-CW at 320 °C and 17 MPa and SCW at 374 °C and 24.6 MPa. To achieve a water density that ensures the transition of water to a sub- and supercritical state at the experimental temperatures, the initial nitrogen pressure and the volume of required water were selected in accordance with the reference data of the National Institute of Standards and Technology [33]. In total, 100 g of crushed rock and 130 mL of distilled water were introduced into the reactor for each experiment. The reactor was purged with nitrogen for 15 min before starting each experiment, and the given initial pressure was 1 MPa. The heating rate was 11 °C/min. After cooling the reactor up to 25 °C, the reactor was connected to gas chromatography (Chromatec-Crystal 5000.2) to analyze the gas phase formed after the process, using computer data processing and recording the signal of the detector for thermal conductivity. For saturation, gases were purged through a chromatographic column for 15 min. Gas separation was carried out on a 100 m long capillary column, at the following temperature conditions: 90 °C for 4 min, and then heating 10 °C/min to 250 °C. The temperature of the evaporator was 250 °C. The carrier gas was helium, with a flow rate of 15 mL/min. The transformed experimental products were separated from the water phase and prepared for further investigations by physical and chemical instrumental analysis methods, such as gas chromatography, SARA analysis, Rock-Eval, GC-MS, FTIR, and isotope mass spectroscopy (Figure 1). The total organic content in rock before and after the autoclave experiments was determined on an elemental CHN-analyzer after dissolving carbonates from the given samples in hydrochloric acid. The transformation degree of organic matter in rock samples was evaluated via pyrolytic Rock-Eval analysis in a Pyro-GC-MS system (Frontier Lab EGA/PY-3030D, Agilent 7890B, Agilent 5977B), at the temperature range from 20 to 600 °C. The amount of thermally freed hydrocarbons in the sample in milligrams of hydrocarbons per gram of rock by temperature of 300 °C is designated as S1. Residual oil potential, i.e., the content of hydrocarbons pyrolyzed from kerogen, was estimated by peak S2.
The maximum hydrocarbon yield within peak S2 was determined by the temperature Tmax. Based on these parameters, the oil-generation potential of the rock samples, GP = S1 + S2, their productivity index, PI = S1/(S1 + S2), and the degree of kerogen conversion were evaluated. Extraction of heavy oil from the rock samples was done in a Soxhlet apparatus for 72 h, with a mixture of organic solvents composed of chloroform, toluene, and isopropanol in equal proportions. The rock extracts were separated according to SARA analysis into four fractions: saturated, aromatics, resins, and asphaltenes. Asphaltenes were precipitated from the extracts by the aliphatic solvent n-hexane at a mass ratio of 1:40. The maltene part of the extracts was further separated in a liquid-chromatography column filled with previously calcined (at 425 °C) aluminum oxide. The saturated hydrocarbons, aromatics, and resins were eluted by hexane, toluene, and a mixture of benzene and isopropyl alcohol in equal proportions, respectively.

Aromatic hydrocarbons extracted from the heavy oil before and after the autoclave experiments were studied by using a chromato-mass-spectrometric system, including a gas chromatograph (Chromatec-Crystal 5000) with a mass-selective detector (ISQ LT, Thermo Fisher Scientific) and the Xcalibur software application for processing the results. The chromatograph was equipped with a capillary column 30 m in length and 0.25 mm in diameter. The carrier gas (helium) flow rate and the injector temperature were 1 mL/min and 310 °C, respectively. The column temperature was programmed from 100 up to 300 °C at 3 °C/min and then held isothermally for the remainder of the analysis. The ion source potential was 70 W and its temperature 250 °C. Compounds were identified by using the NIST digital mass-spectral library and literature data.

The structural-group composition of the SARA fractions (saturated, aromatics, resins, asphaltenes, and carbonaceous substances) was determined by FTIR spectroscopy with a Bruker Vector 22 IR spectrometer, in the range of 4000-400 cm−1, with a resolution of 4 cm−1. To assess changes in the structural-group composition of the products of the experiments, several parameters were used, defined as the ratios of the absorption values at the maxima of the corresponding absorption bands.

Investigation of the trace element composition of rock samples, asphaltenes, and carbene-carboids was carried out on an isotope mass spectrometer with inductively coupled plasma (iCAP Qc, Thermo Fisher Scientific, Germany). The test sample weighed 100 mg. Concentrated hydrochloric, hydrofluoric, and nitric acids were added to the sample in Teflon autoclaves by dispensers. To account for the background, a mixture of acids without a sample was prepared. Hermetically sealed Teflon autoclaves were placed in the furnace of a microwave digestion system, Mars 6 (CEM Corporation, USA), where the samples were heated up to 210 °C for 30 min. After this, a boric acid solution was added to form complexes and transfer rare-earth fluorides into the solution. The Teflon autoclaves were heated to 170 °C and kept at this temperature for 30 min. After cooling, the resulting solution was transferred to a test tube and diluted with deionized water and hydrochloric acid. The resulting solution was analyzed on a mass spectrometer pre-calibrated by using multi-element standards with a concentration in the range from 1 to 100 ppm of each element.
The obtained concentration values were recalculated to the initial concentrations, taking into account the blank sample, the sample itself, and the dilution of the solution.

Results and Discussion

The results of pyrolytic Rock-Eval analysis and elemental analysis of oil-saturated rock samples before and after experiments in sub-CW and SCW conditions, as well as of rock samples after extraction of heavy oil by organic solvents, are presented in Table 1. By elemental analysis, the total organic content, Corg, in the source rock is 7.07%. According to Tissot and Welte [8], a rock sample with a total organic content of less than 3% corresponds to a class of good source rock. The initial rock sample is characterized by a high oil-generation potential (22.17 mg HC/g of rock) and hydrogen index value (313.58 mg HC/g Corg). The value of Tmax (429 °C) indicates a low thermal maturity of OM. The value of the realized generation potential S1, which is the fraction of free hydrocarbons in the rock, is rather low, at 1.75 mg/g of rock, which characterizes the low productivity index of this rock. It is proposed that the given sample is an excellent source rock which did not pass through a high catagenesis stage.

Table 1 notes: * = the same sample after heavy oil extraction. Corg = the total organic content in rock, wt%. H/Corg = atomic ratio of hydrogen to organic carbon. S1 = the amount of free hydrocarbons in the rock, mg HC/g of rock. S2 = the amount of hydrocarbons produced during the destruction of kerogen, mg HC/g of rock. Tmax = temperature at which the highest hydrocarbon yield intensity is observed within peak S2. GP = S1 + S2, the oil-generation potential, mg HC/g of rock. PI = S1/(S1 + S2), the productivity index, mg HC/g of rock. HI = S2/Corg × 100%, the hydrogen index, mg HC/g Corg. Sub-CW = subcritical water. SCW = supercritical water.

The sub-CW and SCW treatment of rock samples resulted in a transformation of organic matter that was similar to natural maturation [34]. The residual oil potential, i.e., the content of hydrocarbons pyrolyzed from kerogen, decreases from 22.17 to 17.17 mg HC/g of rock in sub-CW, and to 1.95 mg HC/g of rock in SCW. This was also justified by the increase in the Tmax value from 429 to 435 °C and the decrease in the hydrogen index value from 313.58 to 47.79 mg HC/g Corg, which leads to an increase in the productivity index from 0.06 to 0.48 mg HC/g of rock in the SCW medium. The decomposition of OM in SCW influenced the decrease in Corg as well, from 7.07 to 4.08%. The atomic ratio of H/Corg in the products of SCW treatment is higher than in the initial rock sample. This indicates a participation of water in the conversion reactions of heavy components of OM into light alkane hydrocarbons. Similar results were obtained by Han et al. from an experimental investigation of high-temperature coal tar upgraded by SCW [35].

According to the SARA analysis presented in Table 2, a common behavior in the group composition of the extracts was revealed as the temperature of the experiments was increased: the content of saturated fractions increased, while the content of resins decreased. In contrast to the initial extract, the content of aromatic hydrocarbons and asphaltenes after the treatment at sub-CW condition was increased, while the atomic H/Corg ratio decreased. In the previous studies [31,32], an influence of sub-CW on improving the extraction of asphaltenes and high-molecular n-alkanes C22-C30 from rock samples was shown.
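For reference, the derived pyrolysis indices reported in Table 1 (GP, PI, HI) follow directly from the measured S1, S2, and Corg values by the definitions given in the table notes above. The sketch below only illustrates that arithmetic; the input numbers are placeholders, not the measured data set.

```python
# Rock-Eval derived indices as defined in the Table 1 notes:
#   GP = S1 + S2            (oil-generation potential, mg HC/g rock)
#   PI = S1 / (S1 + S2)     (productivity index)
#   HI = S2 / Corg * 100    (hydrogen index, mg HC/g Corg)
# The input values below are illustrative placeholders, not the study's data.

def rock_eval_indices(s1, s2, corg_percent):
    gp = s1 + s2
    pi = s1 / gp if gp > 0 else float("nan")
    hi = s2 / corg_percent * 100.0
    return {"GP": gp, "PI": pi, "HI": hi}


if __name__ == "__main__":
    examples = {
        "initial rock (placeholder)": (1.75, 22.0, 7.0),   # S1, S2, Corg
        "after SCW (placeholder)":    (1.8,  2.0, 4.0),
    }
    for label, (s1, s2, corg) in examples.items():
        indices = rock_eval_indices(s1, s2, corg)
        print(label, {k: round(v, 2) for k, v in indices.items()})
```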
In contrast, SCW causes decomposition of resins and asphaltenes with the formation of low-molecular saturated hydrocarbons (33.91 wt.%), as well as of insoluble solid coal-like compounds such as carbene-carboids (14.49 wt.%). Formation of the latter may result from the destruction of alkyl chains of asphaltenes, as well as from kerogen decomposition [36-38].

Gas chromatography data for hydrocarbons and inorganic gases formed during the process of OM transformation are presented in Table 3. Uncondensed gases, formed from the shale rock in sub-CW and SCW conditions, are mainly composed of non-hydrocarbon gases such as H2, O2, and CO2, as well as low-molecular hydrocarbon gases C1-C4 and alkenes C2-C4. The higher the temperature of the experiment, the more hydrocarbon gases, such as CH4, C2H6, C3H8, C4H8, and n-C4H10, were released. This indicates that destructive processes proceeded via a radical chain mechanism [30]. In experiments in the sub-CW medium, neither C2H6 nor C2H4 appears in the gas composition, which is consistent with the results obtained for the extraction of Tumuji Oil Sand with sub-CW [23]. The high content of CO2 in the composition of the released gases may be due to the decomposition of OM and carbonate rock minerals in sub-CW and SCW. In Reference [39], the conduction of oxidation-reduction reactions of OM and mineral components of shale rocks in the presence of water, with production of CO and CO2 gases, was reported. The presence of O2 in the composition of the formed gases also justifies the conduction of oxidation-reduction reactions.

The GC-MS method was applied to gain information about the individual composition of the aromatic fractions (Figure 2). The concentration of methyl- and ethyl-substituted benzothiophenes (peaks 11 and 15) prevailed in the initial extracts. The concentration of 7-ethylbenzothiophene/5-ethylbenzothiophene (peak 11) significantly decreased at a temperature of 320 °C. However, the relative concentration of 2,5,7-trimethyl-benzo[b]thiophene and 7-ethyl-2-methylbenzo[b]thiophene (peaks 15 and 14) increased, and a peak of 2,5-dimethylbenzothiophene appeared (peak 8). Important changes were observed in the composition of aromatic hydrocarbons after the treatment of rock at SCW condition (374 °C and 24.6 MPa).
There were no 7-ethylbenzothiophene/5-ethylbenzothiophene and 7-ethyl-2-methylbenzo[b]thiophene (peaks 11 and 14) observed in the experimental products. However, an intensive peak corresponding to 1-methylnaphthalene (peak 5) was identified. This confirms the conduction of destructive processes with detachment of alkyl substituents and, probably, destruction of aromatic heterocyclic compounds. The specific features of the molecular-mass distributions of aromatic compounds in the experimental products are illustrated in the diagram in Figure 3.

The heteroatomic compounds 1- and 4-methyldibenzothiophene are rather stable against degradation processes [40]. Bennete et al. imply that 4-methyldibenzothiophene (4-MDBT) is less stable against degradation than 1-methyldibenzothiophene (1-MDBT) [41]. That is the reason why the parameter 1-MDBT/4-MDBT (Figure 4d) is an indirect indicator of the decomposition degree of kerogen and of the further destruction of its high-molecular fragments. For the transformed samples, the given ratio increased from 0.67 to 0.86, in contrast to the initial rock sample (Figure 4c).

The studies of the SCW technique for shale rock require the selection of a method that can provide knowledge about the converted products' structure and composition. One of the informative instrumental methods is considered to be FTIR spectroscopy, which allows for the evaluation of transformations not only in various hydrocarbons, but also in functional groups, in natural and technology-related processes [42,43]. The IR spectra of the obtained hydrocarbon groups are presented in Figure 5. Table 4 shows the FTIR structural parameters of the hydrocarbon groups as a function of the experimental conditions.

Figure 5. IR spectra of saturated and aromatic hydrocarbons, resins, asphaltenes, and carbene-carboids of Domanic rock before and after sub-CW and SCW experiments.
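The FTIR structural parameters collected in Table 4 (A-Factor, C-Factor, CH3/CH2 ratio, aromaticity, and degree of condensation) are, as described in the next paragraph, simple ratios of absorbance values read at the maxima of the corresponding bands. The sketch below only illustrates that arithmetic under one common band-assignment convention; the exact convention used by the authors may differ, and the absorbance values and function name are hypothetical.

```python
# Illustrative computation of the FTIR spectral indices used in Table 4.
# Band positions (cm-1) follow the text and common practice; the exact
# convention used by the authors may differ, and the absorbance values
# below are hypothetical, for demonstration only.

def ftir_indices(a):
    aliphatic_ch = a[2920] + a[2860]      # aliphatic C-H stretching
    aromatic_ch = a[3050]                 # aromatic C-H stretching
    aromatic_cc = a[1600]                 # aromatic C=C stretching
    carbonyl = a[1710]                    # carbonyl / oxygenated groups
    return {
        "A-Factor": aliphatic_ch / (aliphatic_ch + aromatic_cc),
        "C-Factor": carbonyl / (carbonyl + aromatic_cc),
        "CH3/CH2": a[1380] / a[1460],     # chain length / branching indicator
        "aromaticity": aromatic_ch / aliphatic_ch,
        "DOC": aromatic_ch / aromatic_cc  # degree of condensation
    }


if __name__ == "__main__":
    fake_spectrum = {3050: 0.12, 2920: 0.55, 2860: 0.40,
                     1710: 0.10, 1600: 0.25, 1460: 0.35, 1380: 0.30}
    for name, value in ftir_indices(fake_spectrum).items():
        print(f"{name}: {value:.2f}")
```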
The composition of saturated hydrocarbons, according to IR spectroscopy, includes aliphatic CH2 and CH3 structural groups at 1380-1465 cm−1. With an increase in the experimental temperature, the values of the spectral indices of the fractions of saturated hydrocarbons practically do not change. The A-Factor is an indicator determined as the ratio of the intensities of the IR absorbance bands of aliphatic CH2 and CH3 structures to those of aromatic C=C bonds [44-46]. The C-Factor represents the intensity of the absorbance band of oxygenated functional groups versus that of the aromatic ring in hydrocarbons [44]. The CH3/CH2 ratio is an indicator of the length of the aliphatic chain and of the branching degree of aliphatic fragments [47,48]. Aromaticity is related to the relative peak intensity of aromatic C-H stretching versus aliphatic C-H stretching [49]. The degree of condensation (DOC) reflects the degree of aromatic substitution as compared to ring condensation [50,51]. The degree of condensation is determined as the intensity of aromatic C-H stretching divided by the aromatic C=C stretching intensity. With an increase in the degree of ring condensation, the value of this parameter decreases.

In the spectra of aromatic hydrocarbons, absorption bands at 1600 cm−1 and 650-900 cm−1 are identified, corresponding to C=C and C-H bonds in aromatic structures. The intense band in the region of 1380 cm−1 characterizes the stretching vibrations of CH3 structures in aromatic hydrocarbons. Low-intensity bands also appear with peaks at 1710 and 1030 cm−1, which are characteristic of stretching vibrations of the C-O bonds in carbonyl groups and of S=O bonds in sulfoxide groups. With increasing temperature, the aromatic triplet in the range of 650-900 cm−1 in the IR spectra of aromatic hydrocarbons becomes more intense. Aromatic hydrocarbons after the experiment in SCW condition are characterized by a low value of DOC (0.55 versus 0.88). This indicates that C=C bonds of aromatic rings prevail over C-H bonds in aromatic structures.

In the resins' spectra before and after the experiments, an absorption band at 870-880 cm−1 is detected, which indicates the presence of 1,3-substituted aromatic rings. The band at 810-820 cm−1 reflects the presence of 1,2,3,4- or 1,4-substituted benzene fragments in the composition of the resins, while the band at 750 cm−1 represents monosubstituted structures. After the autoclave experiments, resins become more aromatic, as evidenced by an increase in the intensity of the 750 and 1600 cm−1 absorption bands. The decrease in the DOC value from 0.94 to 0.4 is due to the aromatization process in SCW. The transformed resins are distinguished not only by an increase in the content of aromatic structures, but also of aliphatic CH3 and CH2 structures, corresponding to the 2857-2957 and 1380-1465 cm−1 absorption bands. Oxidation of resins in SCW increases the value of the C-Factor, which characterizes the ratio of oxygenated functional groups to aromatic hydrocarbons, from 0.39 to 0.44.

The hexane-insoluble asphaltene fraction in the initial rock was 29.02 wt.%. According to IR spectroscopy (Figure 3), asphaltenes include polycondensed aromatic fragments containing carbonyl, carboxyl, and aliphatic functional groups.
The spectrum of asphaltenes after the sub-CW and SCW experiments differs from the spectrum of the initial asphaltenes by a decreased intensity of the 2857-2957 cm−1 bands, corresponding to aliphatic CH3 and CH2 structures. Increasing aromaticity and decreasing A-Factor values indicate the conduction of aromatization processes of asphaltenes in SCW. Oxygen-containing C-O bonds in the asphaltenes' structure are destroyed after the SCW experiment, which is indicated by a decrease in the intensity of the absorption bands of the C-O-C and S=O functional groups at 1160 and 1030 cm−1. This is also confirmed by a decrease in the C-Factor from 0.21 to 0.16. However, the content of O-H structures in the transformed asphaltenes increases (judging from the intensity of the 3342 cm−1 absorption band). A significant number of such oxygen-containing O-H structures was observed in the carbene-carboids, the formation of which may be due to the following reasons. Firstly, the kerogen of Domanic shales contains a significant amount of oxygen. In particular, oxygen-containing bonds are revealed to be dominant in Domanic shales of the Middle Volga region [52]. Kerogen contains monosaccharide units of n-alkyl chains, substituted by alcohol, ketone, and aldehyde groups [52], as well as long-chain C19-C32 carboxylic acids [53]. Hence, during decomposition of the kerogen structure, free oxygen and oxygen-containing fragments, which ended up in the asphaltene and carbene-carboid fractions, may be released. Secondly, water may participate in oxidation-reduction reactions transforming heavy hydrocarbons in the rock composition [39]. Thus, carbene-carboids were formed in SCW due to the oxidative destruction of kerogen. The specific feature of such compounds is the absence of long alkyl chains and the presence of only short CH3 structures, which is confirmed by a high value of the CH3/CH2 ratio, 0.99.

In connection with the development of the Domanic strata and their high enrichment with trace elements [54,55], it is of interest to study the distribution of trace elements both in the composition of the recovered oil and in the rocks. Metals create many problems in the extraction and processing of heavy hydrocarbons, for example, in terms of their impact on the environment and on processing, such as the poisoning of catalysts. At the same time, their positive influence can be distinguished: in particular, the possibility of by-product extraction from Domanic rocks of a number of valuable industrial and rare metals, such as V, Ni, Co, Mo, etc., and, in addition, the catalytic properties of metals in the rocks when thermal recovery methods are applied to the formation during its development. Information on the distribution of trace elements in rocks allows us to draw conclusions about the genesis of hydrocarbons and about the processes that affect the formation of their composition [56]. The following trace elements were determined by isotope mass spectrometry in the rocks, asphaltenes, and carbene-carboids: Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Li, Cd, Sb, Ba, Mo, Ga, Ge, As, and Se (Figure 6). The predominant trace elements in the composition of the rocks are Ti, Fe, Ni, and Zn. Asphaltenes contain the highest concentrations of V, Fe, Ni, and Zn. The impact of SCW on Domanic rock leads to a decrease in the concentration of V in the structure of asphaltenes from 1442 to 634 ppm and of Ni from 852 to 508 ppm, compared with asphaltenes from the original rock sample.
This is consistent with previously obtained electron paramagnetic resonance data [32], which showed a decrease in the VO2+ concentration in the converted asphaltenes after processing the rock in the SCW medium. It should be noted that, in the composition of the formed carbene-carboids, the Ni content significantly prevails over the V content, in contrast to asphaltenes. In the composition of carbene-carboids and transformed asphaltenes, as well as in rock samples, the highest concentrations occur for the trace elements V, Mn, Fe, Ni, and Zn, which is probably associated with the transition of trace elements from kerogen and rock minerals to the composition of asphaltenes and carbene-carboids in the sub-CW and SCW environment. Thus, trace elements in the rocks and experiment products can exhibit catalytic activity in the transformation of high-molecular components of heavy oil and insoluble kerogen in sub-CW and SCW.

Conclusion

Our work has led us to conclude that sub-CW and SCW provide intensive transformation of the high-molecular components of heavy oil and of the insoluble kerogen of Domanic shales into light mobile crude oil. The thermal processes of kerogen decomposition and tar-asphaltene substances' destruction occur more intensively in the SCW than in the sub-CW medium, and they lead to a decrease in the oil-generation potential of the rock from 23.7 to 3.7 mg/g and an increase in the productivity index from 0.06 to 0.48 mg/g. In the products of the experiment in SCW, the content of saturated hydrocarbons was found to increase by more than two times. In the sub-CW condition at 320 °C and 17 MPa, OM is partially decomposed, and some parts of the asphaltenes and high-molecular n-alkanes C22-C30 are extracted.
The products of the experiments differ from the initial ones in the content and composition of aromatic hydrocarbons, with an increased content of light hydrocarbons, namely 2,5-dimethylbenzothiophene (C10H10S) and 1-methylnaphthalene (C11H10), and a reduced content of higher-molecular-weight aromatic compounds, such as 7-ethyl-2-methylbenzo[b]thiophene (C11H12S), methyldibenzothiophenes (C13H10S), and alkylated C2-dibenzothiophenes (C14H12S). Intensive decomposition of heavy aromatics, resins, and asphaltenes, as well as of insoluble kerogen, in SCW is followed by the formation of insoluble coal-like compounds such as carbene-carboids. Their formation is due to the oxidative destruction of kerogen, which is confirmed by the increasing intensity of the absorption band in the FTIR spectra corresponding to O-H structures of asphaltenes and carbene-carboids after the experiments. The distribution of trace elements in rocks, asphaltenes, and carbene-carboids was shown before and after sub-CW and SCW exposure. It was established that SCW affects the decomposition of asphaltene structures containing the trace elements V and Ni. In the composition of transformed asphaltenes and formed carbene-carboids, as well as in rock samples, the highest concentrations occur for the trace elements V, Mn, Fe, Ni, and Zn, which is probably associated with the transition of trace elements from kerogen and rock minerals to the composition of asphaltenes and carbene-carboids in the sub-CW and SCW environment. The results of this study allow us to expand our knowledge of the transformation of heavy oil components and kerogen in the sub-CW and SCW environment in order to select the optimal conditions for the development of these low-permeability organic-rich Domanic deposits.

Author Contributions: Description of the results, Z.R.N. and G.P.K.; data curation, A.V.V.; experimental analysis, R.D. and A.E.C. All authors have read and agreed to the published version of the manuscript.

Funding: The authors declare no competing financial interest.
2020-07-16T09:06:15.818Z
2020-07-08T00:00:00.000
{ "year": 2020, "sha1": "a60cd3ef224611896f4386b1e7d4f3741582e5cc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9717/8/7/800/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b91c72a174a0d52fa4890cc151575632b6624f83", "s2fieldsofstudy": [ "Environmental Science", "Geology", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
269109841
pes2o/s2orc
v3-fos-license
Comparison of cytomorphology and histomorphology in myelodysplastic syndromes Gold standard for the establishment of the diagnosis of myelodysplastic syndromes (MDS) are cytomorphological features of hematopoietic cells in peripheral blood and bone marrow aspirates. There is increasing evidence that bone marrow histomorphology not only aids in the diagnosis of MDS but can provide additional prognostic information, particularly through assessment of fibrosis and cellularity. However, there is only sparse data on direct comparison between histological and cytomorphological findings within the same MDS patient cohort. Therefore, we performed such an analysis under exceptionally well-standardized conditions. We reexamined biopsy material of 128 patients from the Düsseldorf MDS registry who underwent bone marrow trephine biopsy (in addition to bone marrow aspiration) at the time of diagnosis, addressing the following items: a. Analysis of concordance of diagnoses made by histology and cytomorphology b. Analysis of additional information by histology with regard to the diagnosis and prognosis. The respective biomaterials were available at our institution and had been processed according to unchanged protocols between 1992 and 2010. Fresh histopathological sections were obtained from the tissue blocks, stained under identical conditions and re-assessed by a designated expert pathologist (C.B.) without knowledge of the previous histopathological report or the respective cytomorphological diagnosis. The latter, likewise, was uniformly made by the same expert cytomorphologist (U.G.). Histopathology of bone marrow trephine biopsies reliably captured the diagnosis of MDS. Assignment to the diagnostic WHO subgroup was not entirely concordant with cytomorphology, mainly due to incongruences between the proportion of CD34-positive cells on histopathology and the cytomorphological blast count. Histopathology provided additional diagnostic and prognostic information with high diagnostic and prognostic significance, such as fibrosis. Likewise, histopathology allowed more reliable estimation of bone marrow cellularity. Introduction Myelodysplastic syndromes are a heterogeneous group of clonal stem cell disorders, morphologically defined by dysplastic features of hematopoietic cells and increasing impairment of hematopoietic cell differentiation, recognizable by an elevated proportion of bone marrow blasts in higher-risk MDS (1)(2)(3)(4)(5)(6)(7).Disturbed maturation entails functional defects of blood cells, as well as peripheral blood cytopenias (7,8).MDS also carries the risk of transformation into acute myeloid leukemia (9).Primary myelodysplastic syndromes, which account for about 90% of MDS cases, lack an apparent cause, whereas radiotherapy or noxious agents, such as chemotherapy or organic solvents, are present in the medical history of patients with secondary or therapy related MDS.In rare cases, a familial predisposition to clonal hematopoietic disorders is recognized. Myelodysplastic syndromes are categorized by considering dysplastic features, blast count, and cytopenias.The diagnosis is usually established by cytomorphology.Despite peripheral blood cytopenia, the bone marrow in MDS is generally hyperor normocellular. 
Since the 1980s, several classifications have been established that separate morphologically discernible, prognostically relevant types of MDS. The FAB classification, published in 1982, was solely based on morphological criteria (10). The WHO classification, first published in 2001, refined the MDS subtypes and was the first to require chromosomal analyses and to include a chromosomal aberration (del5q) as a disease-defining marker. Revised versions were published in 2016 and 2022 (11-14). While diagnostic criteria for MDS mainly rely on cytomorphological features, some MDS patients may show bone marrow fibrosis, which can only be assessed on histopathology. In addition, histopathology is deemed superior to cytomorphology regarding assessment of bone marrow cellularity. Our analysis compares cytomorphological and histopathological findings and their degree of conformity in a cohort of MDS cases with respect to diagnostic accuracy and prognostic significance that is unique in terms of uniform preparation of diagnostic samples and lack of interobserver variability.

Materials and methods

We analyzed bone marrow aspirates of 152 MDS patients diagnosed between 1992 and 2010 by central cytomorphology according to the WHO classification (12,13). Bone marrow trephine biopsies obtained at the time of diagnosis were also available from these patients. Data regarding clinical features, cell counts, and the course of disease were obtained from the Duesseldorf MDS registry. We considered date of birth, gender, time of diagnosis, WHO/FAB subtype, treatment history, and outcome. Data closure/end of follow-up was July 1st, 2012, or the date of death. Only a small subset of patients was lost to follow-up. The availability of the abovementioned data set was mandatory for inclusion in the analysis. 128 patients were assessable for further analysis.

The bone marrow trephine biopsy taken at the time of diagnosis was used for preparing new histopathological sections and stains, which were assessed by a single reviewing pathologist (SB) who had no knowledge of the cytomorphological evaluation, apart from the information "patient with myelodysplastic syndrome". Bone marrow biopsies were carried out and processed according to local standards. Immunohistochemical staining with an anti-CD34 antibody was used to visualize immature hematopoietic progenitor cells. Histological slides were routinely stained with hematoxylin-eosin (HE), periodic acid Schiff reagent (PAS), Giemsa, silver impregnation according to Gomori, and iron staining (Berliner-Blau). In addition, the naphthol AS-D chloroacetate esterase reaction was used to highlight neutrophilic granulopoiesis.

The following morphological parameters were assessed by standardized procedures and were evaluated semiquantitatively (Table 1): cellularity of the specimen in comparison to an age-related control cohort, maturation and dysplasia of megakaryopoiesis, cellularity of erythropoiesis and proportion of erythroid cells relative to granulopoietic cells, degree of fibrosis, bone marrow iron content, and percentage of CD34-positive cells in relation to the overall cellularity of the bone marrow. Length, quality, and number of evaluable intertrabecular areas were also assessed.

The correlating cytomorphological findings were taken from the Duesseldorf MDS registry. Cytomorphological assessment was performed according to standard operating procedures as reported by Germing et al.
(Table 1) (15).Of note, 20 patients with the diagnosis of RAEB-T according to the FAB classification were included in the analysis. To ensure homogeneity and comparability, histopathological and cytomorphological diagnoses were always established by the same reviewers, respectively (UG for cytomorphology, SB for histopathology).Statistical analyses were performed using SPSS.Comparison of blast count by cytomorphology versus histology was analyzed by nonparametrical Wilcoxon T-Test.Categorical variables were analyzed using Chi-Square-Test.All procedures were in accordance with the current version of the Helsinki Declaration.Informed consent was obtained from all patients included in the study. Patient characteristics Of the 128 patients evaluable, 79.7% had deceased at the time of this analysis, 19.5% were documented alive at the time of data closure and 0.8% were lost to follow up. Histopathological analysis Determinants of the interpretability of histopathological specimens are length of the trephine biopsy, number of evaluable intertrabecular areas, and overall quality of the sample.53.9% of the trephine biopsies had a length between 0.6 and 1.0 cm, 26.6% had a length of >1.0 cm.Evaluability was assessed by subjective rating.93% of all specimens were rated at least satisfactory (grade 3 of 6) and were thus evaluated for all parameters.In 9 cases (7.0%) the number of evaluable intertrabecular areas was less than 5.In 93% of all cases, 5-15 evaluable intertrabecular areas could be analyzed. Medullary blast count/CD34+ cells In direct comparison, the medullary blast count was underestimated by histopathology regardless of the proportion of blasts seen by cytomorphology (Table 3, p=0.001).For instance, in patients with a cytomorphologically assessed blast count of more than 20% (RAEB-T by definition), histopathology identified less than 5% blasts in 22.7% of these cases.The same was true for a cytomorphological blast count of 10-19%, where histopathology found a normal blast count (<5%) in 43%, and a blast count of 5-9% in 29% of these cases. Patients with a hypocellular marrow according to histopathology were more likely to present with a cytomorphologically assessed blast count below 5%, whereas hypercellularity correlated with blast counts above 10% (60.7% of patients with 10-19% blasts and 63.6% with ≥20% blasts, respectively).Nevertheless, 46.9% of patients with <5% medullary blasts presented with hypercellular marrow when Dysplastic features of megakaryopoiesis Comparing dysmegakaryopoiesis according to histopathology and cytomorphology, there were 44 cases where megakaryopoiesis appeared inconspicuous on cytomorphology but showed at least mild to moderate signs of dysplasia on histopathology (Table 4, p=0.009).Conversely, among 9 cases that appeared normal on histopathology, 8 showed signs of dysplastic megakaryopoiesis on cytomorphological assessment. Histopathological assessment of cellularity showed a positive correlation with the degree of dysmegakaryopoiesis.Hypercellular marrow was associated with a greater degree of dysplastic features (Table 4, p=0.009).More pronounced signs of dysmegakaryopoiesis, as assessed by histopathology, were also found in higher-risk MDS subtypes according to WHO 2016 that are characterized by elevated blast count as well as greater cellularity. 
A high level of dysmegakaryopoiesis was less common in patients with a high degree of fibrosis (Table 4, p<0.00005). This may be due to an increased blast count and less residual normal hematopoiesis, both contributing to a diminished number of assessable megakaryocytes.

Cellularity

Histopathology is the gold standard for assessing bone marrow cellularity. When compared to the histopathology report, cytomorphology tends to overestimate cellularity (Table 5). A hypocellular marrow was diagnosed in 24.4% of cases by histopathology. Within that group, cytomorphology described a normocellular marrow in 44.8%, and even a hypercellular marrow in 48.3% of cases. Normocellularity was generally congruent when the finding of a normocellular marrow on histopathology was taken as the gold standard. Regarding hypercellularity, almost half of the cases diagnosed as hypercellular on histopathology were characterized as normocellular by cytomorphological assessment. When cellularity assessed by cytomorphology was used to find correlations, no statistically significant results were obtained, in accordance with the abovementioned results of the direct comparison of histopathological and cytomorphological cellularity assessment.

Erythropoiesis

The proportion of erythropoiesis did not correlate well between cytomorphological and histopathological review. Although histopathology showed superiority regarding overall cellularity assessment, only erythropoiesis diagnosed by cytomorphology showed a statistically significant correlation with cellularity (p=0.019). When assessed by histopathology, there was only a trend towards increased erythropoiesis in hypercellular marrows. Neither WHO subtype nor medullary blast percentage correlated with the proportion of erythropoiesis in the marrow, irrespective of assessment by histopathology or cytomorphology. The degree of dysmegakaryopoiesis, on the other hand, showed a trend towards positive correlation with the proportion of erythroid cells, when assessed by histopathology for both attributes. There was no patient with expanded erythropoiesis who did not demonstrate signs of dysmegakaryopoiesis.

WHO diagnosis

As shown in Table 6, we compared the histopathological and cytomorphological diagnoses. There was no case where histopathology did not confirm the diagnosis of MDS. 48% of MDS diagnoses were identical according to WHO type. However, in 56 cases (44%), the WHO type diagnosed by histopathology differed from the WHO type diagnosed by cytomorphology. The main reason was discordant estimation of the medullary blast count. 32 patients were diagnosed with at least 5% medullary blasts by cytomorphology (MDS-EB1, MDS-EB2 and RAEB-T), while histopathology reported a normal blast count. Overestimation of the blast count by histopathology occurred in 5 cases. In 8 cases, multi- versus unilineage dysplasia was identified as the discrepancy (MDS-SLD versus MDS-MLD, with or without ring sideroblasts). There were no cases where histopathology failed to diagnose CMML, but the correlation regarding the distribution among CMML 0, I or II was weak, reflecting the tendency of histopathology to underestimate the blast count.
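The Materials and methods section states that paired comparisons (such as blast count by cytomorphology versus histology) were analyzed with the non-parametric Wilcoxon test and categorical variables with the chi-square test in SPSS. A minimal Python sketch of the same kind of comparison is given below; the data are invented for illustration and do not reproduce the study's dataset.

```python
# Sketch of the statistical comparisons described in the Methods section,
# here with SciPy instead of SPSS. All data below are invented.
from scipy.stats import wilcoxon, chi2_contingency

# Paired blast counts (%) for the same patients, assessed two ways.
blasts_cytomorphology = [2, 4, 7, 12, 22, 6, 15, 9]
blasts_histopathology = [1, 3, 4,  6, 11, 4,  8, 5]
stat, p_paired = wilcoxon(blasts_cytomorphology, blasts_histopathology)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p_paired:.3f}")

# Cross-tabulation of a categorical finding (e.g., diagnostic category)
# assigned by the two methods; counts are invented.
contingency = [[20,  6,  2],
               [ 8, 25,  5],
               [ 3,  7, 18]]
chi2, p_cat, dof, _ = chi2_contingency(contingency)
print(f"Chi-square: chi2={chi2:.2f}, dof={dof}, p={p_cat:.3f}")
```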
Fibrosis Evaluating the degree of fibrosis in a bone marrow specimen up to this day remains the preserve of histopathology.A positive correlation was found in our cohort between bone marrow cellularity and the degree of fibrosis.A high degree of fibrosis was predominantly observed in patients with hypercellularity (89.5% of fibrotic cases were hypercellular by histopathology).Similarly, a higher number of patients with a high degree of fibrosis was found in the high-risk subgroups of WHO 2016, namely MDS-EB2 (26.3%),RAEB-T (10.5%) and CMML I/II (31.6%).The same was true when the degree of fibrosis was compared with the percentage of medullary blast count, assessed by cytomorphology (Table 7). The positive correlation between cellularity and fibrosis may appear counterintuitive, and the result should be interpreted with caution, due to the low number of patients with a high degree of fibrosis (n=20).However, a proportion of higher-risk MDS cases with elevated cellularity and blast counts may indeed have a tendency for fibrosis, which may have been underestimated so far. Influence of histopathological and cytomorphological findings on overall prognosis When the entire patient cohort was regrouped according to the blast count assessed by cytomorphology, the blast count showed a trend towards influencing median overall survival, especially in the patient groups with >5% blasts.The lack of statistical significance (p=0.128) is most likely attributable to the small size of the cohort.Regrouping based on histopathological assessment of blast count did not separate the cohort into subgroups with statistically significant different overall survival. The presence of dysmegakaryopoiesis, identified on histopathology, did not show any prognostic impact, irrespective of the degree of dysmegakaryopoiesis. Cellularity assessed by histopathology separated the cohort into three groups with different median survival times.A hypercellular marrow was associated with the worst outcome, even though statistical significance was not reached. The proportion of erythropoiesis, again assessed by histopathology, seemed to influence overall survival when the patient cohort was divided into 5 groups (0-20%, 21-40%, 41-60%, 61-80%, >80%).Patients with 20-40% erythropoiesis showed a trend for the best overall survival.On cytomorphology, this effect had not been detectable.This might reflect the superiority of histopathological assessment already seen with regard to the overall cellularity. Presence of a high degree of fibrosis as assessed by histopathology translates into an inferior median survival as the degree of fibrosis separates the cohort into different subgroups with a statistically significant prognosis.Based on the degree of fibrosis, the entire patient cohort could be divided into two groups (no or mild signs of fibrosis versus high or very high degree of fibrosis) that showed a statistically significant difference in prognosis (Figure 1).The prognostic impact of fibrosis was also visible in WHO 2016 low-risk subgroups with a blast count <10%.The importance of fibrosis is reflected by the latest WHO classification for MDS, which now includes MDS with myelofibrosis (MDS-f). 
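The survival comparison between the two fibrosis groups (no or mild fibrosis versus high or very high fibrosis, Figure 1) is a standard Kaplan-Meier/log-rank analysis. A hedged sketch using the Python lifelines package is shown below; it is not the analysis pipeline actually used by the authors, and the durations and event indicators are placeholders.

```python
# Illustrative Kaplan-Meier / log-rank comparison of overall survival
# between two fibrosis groups, using the lifelines package (not the
# software actually used in the study). All data below are placeholders.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Survival time in months and event indicator (1 = died, 0 = censored).
low_fib_t = [60, 45, 80, 32, 70, 55, 90, 24]
low_fib_e = [1, 1, 0, 1, 0, 1, 0, 1]
high_fib_t = [12, 20, 8, 30, 15, 22]
high_fib_e = [1, 1, 1, 1, 1, 0]

kmf_low = KaplanMeierFitter()
kmf_low.fit(low_fib_t, event_observed=low_fib_e, label="no or mild fibrosis")
kmf_high = KaplanMeierFitter()
kmf_high.fit(high_fib_t, event_observed=high_fib_e, label="high or very high fibrosis")

result = logrank_test(low_fib_t, high_fib_t,
                      event_observed_A=low_fib_e,
                      event_observed_B=high_fib_e)
print(f"log-rank p-value: {result.p_value:.3f}")
```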
Discussion

Although cytomorphological examination of bone marrow aspirates, focusing on dysplastic features and blast counts, represents the gold standard for MDS diagnosis, additional information can be gained through histopathological evaluation of trephine biopsies (1,2,14). We compared cytomorphological and histopathological bone marrow analyses under well-standardized conditions.

c) When comparing the MDS subtype assigned within our cohort by WHO 2016 with the most recent WHO 2022 classification, it shows that 5.5% of patients are reassigned to the newly created subgroup of MDS-f; this subgroup is constituted by patients with a former subtype of EB-1 or EB-2. Acknowledging this subtype with high prognostic significance, even in lower-blast MDS as described above, is only possible when performing histopathologic assessment. All patients with hypocellular MDS, by definition, are patients of the former low-blast subgroups MDS-SLD and -MLD. As no patient in our cohort, apart from SF3B1, was included with MDS-defining cytogenetic or molecular aberrations such as del(5q) or biallelic TP53, there was no assignment to the respective subgroups by WHO 2022. As we classified patients according to the WHO classification, there is no shift of additional cases classified as AML, as we only have cases with IB1 or IB2 and no additional AML cases as proposed in the ICC using a cut-off of more than 10%.

d) Fibrosis is an important prognostic factor in MDS that can only be assessed by histopathology. We found that fibrosis shows a positive correlation with bone marrow cellularity and the medullary blast count. The new WHO 2022 classification pays tribute to the importance of fibrosis by including MDS with myelofibrosis (MDS-f) as one of the MDS subtypes (16).

e) Dysmegakaryopoiesis seems to be another feature that is properly assessed by histopathology. We found that the degree of dysmegakaryopoiesis correlates with cellularity and unfavorable WHO categories and MDS risk groups.

We consider histopathology a valuable supplement in the diagnostic workup of MDS. Superiority to a cytomorphologically assessed MDS diagnosis could not be demonstrated, mainly due to its inability to assess subtle morphological features at the level of individual cells, except for megakaryocytes. Nevertheless, histopathology offers complementary information regarding fibrosis and cellularity that contributes substantially to prognostic assessment. The importance of histopathology is reflected in the new WHO 2022 classification, which includes MDS types (MDS-h and MDS-f) that require histopathology for proper assessment (17).

TABLE 2A Patients' characteristics according to WHO 2016 and WHO 2022.

TABLE 2B Comparison of patients' distribution between WHO 2016 and WHO 2022.

TABLE 3 Comparison of blast percentages assessed by cytomorphology with blast percentages assessed by staining of CD34 by histopathology (p<0.001).

Red marked numbers indicate either the most statistically relevant or the most strikingly differing parameters within the comparison of histo- and cytomorphology.

Red marked numbers indicate either the most statistically relevant or the most strikingly differing parameters within the comparison of histo- and cytomorphology.

Red marked numbers indicate either the most statistically relevant or the most strikingly differing parameters within the comparison of histo- and cytomorphology.

Red indicates strongly different findings between histopathology and cytomorphology, brown indicates more slightly different assessments (e.g., IB1 vs. IB2).
2024-04-13T15:01:55.157Z
2024-04-11T00:00:00.000
{ "year": 2024, "sha1": "11cc59d8c56f1b892e489b60d7be0964aaa7e7c5", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2024.1359115/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8b64aed5bacc1facb4ff94964ae734764115ac09", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
16054487
pes2o/s2orc
v3-fos-license
A lower bound for the error term in Weyl's law for certain Heisenberg manifolds, II This article is concerned with estimations from below for the remainder term in Weyl's law for the spectral counting function of certain rational (2l+1)-dimensional Heisenberg manifolds. Concentrating on the case of odd l, it continues the work done in part I which dealt with even l. Γ r is a uniform discrete subgroup of H ℓ , i.e., the Heisenberg manifold H ℓ /Γ r is compact. According to Gordon and Wilson [6], Theorem 2.4, the subgroups Γ r classify all uniform discrete subgroups of H ℓ up to automorphisms: For every uniform discrete subgroup Γ of H ℓ there exists a unique ℓ -tuple r and an automorphism of H ℓ which maps Γ to Γ r . Lattice points in a circle. The quantity yields the major contribution to N (t) for this rational Heisenberg manifolds. Its asymptotic evaluation amounts to the enumeration of the integer points (n 0 , n 1 ) ∈ (Z + 0 , Z + ) in the planar domain u 2 + u(2v + ℓ) ≤ t/(2π) , with the weights 2n ℓ 0 r 1 · . . . · r ℓ n 1 +ℓ−1 ℓ−1 as indicated. This observation may be considered as one motivation to make reference to the state-of-art with the Gaussian circle problem, the "ancestor and prototype" of all planar lattice point problems. As a second link, one may notice that M = R 2 /Z 2 , the 2-dimensional torus, is the simplest example of a Riemannian manifold with a non-trivial (2) Compare the discussion below concerning the bound (3.3) which applies to "almost all" metrics g . spectral theory: In fact (3) , the eigenvalues of the Laplacian on R 2 /Z 2 are given by 4π 2 k , where k ranges over all nonnegative integers with r(k) > 0 , r(k) denoting as usual the number of ways to write k as a sum of two squares of integers. The corresponding multiplicities are given by r(k) , hence the spectral counting function N (t) now equals the number of lattice points in an origin-centered compact circular disc of radius √ t/(2π) . For enlightening accounts on the history of the Gaussian circle problem in textbook style, the reader may consult the monographs of Krätzel [18], [19], and Huxley [11], along with the recent quite comprehensive survey article [13]. The sharpest upper bound for the lattice point discrepancy P (x) of the compact unit circular disc D 0 , linearly dilated by a large real parameter x , is nowadays due to Huxley [12] and reads It is usually conjectured that P (x) = O x 1/2+ε for every ε > 0 . This is supported by Cramér's [3] classic mean-square asymptotics with an explicit constant C > 0 . Thus, roughly speaking, P (x) ≪ x 1/2 in square-mean, but it has been known for a long time that there exist unbounded sequences of x -values for which P (x) attains "exceptionally large" values, even of either sign: By 1961, the state-of-art in this direction was that (4) and (2.4) P (x) = Ω + x 1/2 (log 2 x log 3 x) 1/4 , due to Hardy [9], resp., Gangadharan [5]. Here and throughout, log j stands for the j -fold iterated logarithm. Later on, these estimates have been improved: Corrádi and Kátai [2] obtained See also the detailed discussion in part I of this work [20]. (4) Recall the usual Ω -notation: For real functions F and G > 0 , and * denoting either + or − , F (x) = Ω * (G(x)) means that lim sup( * F (x)/G(x)) > 0 , as x → ∞ . Further, F (x) = Ω(G(x)) means that lim sup |F (x)/G(x)| > 0 . 
and Soundararajan [22] proved that The bounds (2.5) -(2.7) depend on the special multiplicative structure of the arithmetic function r(n) , and on the analytic properties of its generating Dirichlet series (Epstein zeta-function). 3. Results on the spectral counting function of Heisenberg manifolds. Returning to rational Heisenberg manifolds M = (H ℓ /Γ r , g ℓ ) as described in section 1, we give an account of what is known about the error term in (1.1), i.e., For ℓ = 1 , Petridis and Toth [21] proved that R(t) ≪ t 5/6 log t . This estimate was sharpened and generalized to arbitrary ℓ ≥ 1 by Khosravi and Petridis [16] who established R(t) ≪ t ℓ−7/41 . In a recent paper, Zhai [24] applied Huxley's "discrete Hardy-Littlewood method" [11], [12] to derive, for any ℓ ≥ 1 , Actually, it is just the special "rational" choice of the metric g ℓ which makes the error term (possibly) large. As Khosravi and Petridis [16] showed, for "almost all" metrics g , the much sharper bound where C ℓ > 0 is an explicit constant. A recent paper of Zhai [24] is concerned with estimates and asymptotics for higher power moments of R(t) . In fact, (3.3) and (3.4) may suggest the conjecture that for every ε > 0 . The results described so far show a lot of analogy to the Gaussian circle problem discussed in section 2. In the present work, it is our objective to estimate R(t) from below, in order to arrive again at a statement saying that " R(t) ≪ t ℓ−1/4 in mean-square, with an unbounded sequence of exceptionally large values t ". In fact, we are able to find for each ℓ ≥ 1 an explicit function ω ℓ (t) tending to ∞ , such that Theorem. For any fixed positive integer ℓ , let (H ℓ /Γ r , g ℓ ) be a rational (2ℓ + 1)dimensional Heisenberg manifold with metric g ℓ , as described above. Then the error term R(t) for the associated spectral counting function, defined in (3.1) , satisfies where Remarks. 1. The case of even ℓ has been treated in the first part of this work [20]. After approximating R(t) by a suitable trigonometric sum, the Dirichlet approximation theorem was applied to give all its terms the positive sign. In the present article we shall deal with the case of odd ℓ , employing a quantitative version of Kronecker's theorem instead. Technically, this will be stated in terms of uniform distribution theory -see Lemma 4 below. 2. Our results obviously are comparable to the bounds (2.3) and (2.4) for the circle problem. It seems very difficult to obtain improvements as sharp as (2.5) -(2.7), since the coefficients θ ℓ (n) (defined in (5.11) below) fail to share the useful properties of r(n) . 3. For the circle problem, the two different types of arguments (Dirichlet's theorem vs. Kronecker's) were used to establish Ω − -and Ω + -results. For Heisenberg manifolds they are needed to deal with ℓ of arbitrary parity, yielding Ω + -bounds in both cases. Then the following inequality holds true: Proof. This is one of the main results in Vaaler [23]. A very well readable exposition can also be found in the monograph by Graham and Kolesnik [7]. , and suppose that, for positive parameters X, Y, Z , we have 1 ≪ B − A ≪ X and Proof. Transformation formulas of this kind are quite common, though often with worse error terms. This very sharp version can be found as f. (8.47) in the recent monograph [14] of H. Iwaniec and E. Kowalski. Lemma 3. For a real parameter T ≥ 1 , let F T denote the Fejér kernel Then for arbitrary real numbers Q > 0 and δ , it follows that where the O -constant is independent of T and δ . Proof. 
This useful result is due to Hafner [8]. It follows from the classic Fourier transform formula Lemma 4. For an arbitrary integer s ≥ 2 , let a = (a 1 , . . . , a s ) ∈ R s so that 1, a 1 , . . . , a s are linearly independent over Z. Suppose further that there exists a function φ : R + → R + such that φ(t)/t increases monotonically and for all h ∈ Z s \ {o} , where · denotes the distance from the nearest integer. Then for any positive integers N 0 and N , the discrepancy modulo 1 D N 0 ,N (na) of the sequence (na) where c is an absolute constant, φ −1 denotes the inverse function of φ, and N is supposed to be so large that φ −1 (N ) ≥ e. Proof. This is essentially Theorem 1.80 in the monograph of Drmota and Tichy [4], p. 70, with the dependance on the dimension worked out explicitly. Proof of the Theorem. As already stated, the case of even ℓ has been treated in part I of this work [20]. Therefore, we may suppose throughout that ℓ is odd. We start from Lemma 3.1 in Zhai [24] which approximates the error term involved by a fractional part sum. Let U be a large real parameter, u ∈ [U − 1, U + 1] , and put Then according to Zhai (5) [24], Lemma 3.1, for arbitrary (6) ℓ ≥ 1 , We apply Lemma 1 in the form −ψ ≥ Σ H − Σ * H , choosing H = [U ] . Thus we get (5) In fact, Zhai in his notation tacitly assumes that r 1 = . . . = r ℓ = 1 , which means no actual loss of generality. We have supplemented the factor r 1 · . . . · r ℓ in (5.1). (6) At this stage, we write up the argument for general ℓ , in order to point out the importance of the condition that ℓ is odd later on. We split up the range 1 ≤ m ≤ u into dyadic subintervals M j =]M j+1 , M j ] , M j = u 2 −j for j = 0, . . . , J , where J is minimal such that (U − 1)2 −J−1 < 1 . We thus have to deal with exponential sums We transform them by Lemma 2, with On each interval M j the conditions of Lemma 2 are fulfilled with the parameters X = M j , By straightforward computations, as in [20], we obtain It is plain to see that the overall contribution of the error terms to ( where γ h,[U] stands for either α h, [U] or β h, [U] . Using the real and imaginary part of this result in (5.3), we arrive at and c 1 is an appropriate positive constant. The next step is to eliminate the majority of the terms of the last double sum. To this end, let T be another large parameter, with the constraint that U ≥ T 2 . Using Lemma 3, we multiply S(u, U ) by the Fejér kernel F T (u − U ) and integrate over U − 1 ≤ u ≤ U + 1 . Thus (5.7) The O -term here is in fact O(1) : see [20], f. (5.9). Now recall that (h, k) ∈ D(U ) explicitly means that Hence the summation condition on the right hand side of (5.7) can be simplified to For any h, k satisfying (5.8), write h(2k − h) = r 2 q , with r an integer and q square-free. Now suppose we can choose U so that for all square-free q ∈]1, T 2 ] . Here · denotes the distance from the nearest integer and ε 0 > 0 is a suitably small constant. Then, for r 2 q ≤ T 2 , q > 1 , But it is obvious that their contribution to the right-hand side of (5.12) is ≪ m −3/2+ε ≪ 1 .
2008-09-23T14:15:20.000Z
2008-09-23T00:00:00.000
{ "year": 2009, "sha1": "a1fda0e410103ad399fdaca160f5d7e9b9c798dc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0809.3924", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f6115bccabd44cecdc06d260723d312d8a0d666c", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
253771466
pes2o/s2orc
v3-fos-license
Nomenclature and listing of celiac disease relevant gluten T-cell epitopes restricted by HLA-DQ molecules Celiac disease is caused by an abnormal intestinal T-cell response to gluten proteins of wheat, barley and rye. Over the last few years, a number of gluten T-cell epitopes restricted by celiac disease associated HLA-DQ molecules have been characterized. In this work, we give an overview of these epitopes and suggest a comprehensive, new nomenclature. Celiac disease (CD) is caused by an abnormal intestinal immune response to proline and glutamine-rich wheat gluten proteins and to similar proteins in barley and rye (Green and Cellier 2007). Oat is generally considered safe for consumption by CD patients (Garsed and Scott 2007), although some patients appear to be sensitive to oat as well (Lundin et al. 2003). The only established treatment for the disease is a lifelong gluten exclusion diet. CD4 + T cells of CD patients (Lundin et al. 1993), but not healthy subjects (Molberg et al. 1997), recognize gluten peptides when presented by disease associated HLA-DQ molecules. This was first shown for DQ2.5 and DQ8 (Lundin et al. 1994), and recently it was demonstrated that DQ2.2 patients (Bodd et al. 2012) and a patient who carries DQ8.5 in a rare cis configuration ) also have glutenreactive T cells in the intestinal mucosa. Gluten-reactive T cells can readily be established from intestinal biopsies cultured in vitro (Camarca et al. 2009;Lundin et al. 1993Lundin et al. , 1994Tye-Din et al. 2010;Vader et al. 2002b;van de Wal et al. 1998b). T cells recognizing the same gluten epitopes are normally not detected in the peripheral blood (Anderson et al. 2000), but can be found in the blood of treated CD patients on day 6 after a 3-day oral gluten challenge (Anderson et al. 2000(Anderson et al. , 2005Ráki et al. 2007). HLA is the most important genetic factor in CD, and carriage of certain HLA alleles is a necessary, but not sufficient, factor for disease development (Sollid 2002). The other factors required for disease development are non-HLA genes, of which 39 loci have been identified so far (Trynka et al. 2011), and possibly environmental factors other than gluten. To note, mice transgenic for HLA-DQ2.5 and gluten specific T-cell receptors do not develop a CD-like enteropathy (de Kauwe et al. 2009;Du Pre et al. 2011). The reason why these mice do not develop enteropathy may relate to fundamental differences in the gut physiology between mouse and man, and to the lack of appropriate non-MHC genes in the mouse strains tested that parallel the non-HLA susceptibility genes of CD patients. The differential risk of DQ2.5 and DQ2.2 is linked with the T-cell response to gluten. It has been demonstrated that DQ2.5 binds a larger gluten peptide repertoire compared to DQ2.2 (Vader et al. 2003b). Further, gluten T-cell epitopes form stable complexes with DQ2.5, and the increased risk of DQ2.5 over DQ2.2 correlates with a different ability of the two HLA molecules to form stable complexes with many gluten peptides (Fallang et al. 2009). Characteristically, gluten-reactive T cells of CD patients recognize their antigenic peptides much better when specific glutamine residues are converted to glutamate by the enzyme transglutaminase 2 (TG2) van de Wal et al. 1998a). Deamidated gluten peptides bind with increased affinity to DQ2.5 and DQ8 (Arentz-Hansen et al. 2000;Camarca et al. 2009;Henderson et al. 2007;Kim et al. 2004;Moustakas et al. 2000;Quarsten et al. 
1999), and the rate of dissociation of deamidated gluten peptides from DQ2.5 has been shown to be substantially slower than for their native counterparts (Xia et al. 2005). The ability to form stable peptide-MHC complexes again seems to be a key factor for the initiation of the anti-gluten T-cell response. Gluten is defined as the cohesive mass that remains when dough is washed to remove starch (Shewry et al. 1992). Traditionally and strictly speaking, gluten is a name of wheat proteins only, but gluten is now increasingly used as a term to denote proline-and glutaminerich proteins of wheat, barley, rye and oat. In wheat, gluten consists of the gliadin and glutenin subcomponents. The gliadin proteins can be subdivided into α-, γand ω-gliadins, while the glutenin proteins can be subdivided into high molecular weight (HMW) and low molecular weight (LMW) subunits. Common bread wheat is a hexaploid species, and in addition some of the gluten protein encoding genes originate from duplicated loci. Thus, in a single wheat variety there exits up to several hundred different gluten proteins, many of which only differ by a few amino acids. The proline-and glutamine-rich proteins of barley, rye and oat are termed hordeins, secalins and avenins, respectively. Given the heterogeneity of the wheat gluten proteins, it is no surprise that many distinct gliadin and glutenin derived T-cell epitopes exist (Table 2). T-cell epitopes derived from either α-, γ-, and ω-gliadins as well as from HMW and LMW glutenins have been reported (Arentz-Hansen et al. 2000;Sjöström et al. 1998;Vader et al. 2002b;van de Wal et al. 1998b). T-cell epitopes in both hordeins and secalins have been described and they are highly homologous to those found in wheat (Tye-Din et al. 2010;Vader et al. 2003a). The avenins of oat are more distinct, and although oat is considered safe for CD patients (Garsed and Scott 2007), some CD patients are clinically sensitive to oat (Lundin et al. 2003). Avenin specific as well as cross-reactive responses have been described (Arentz-Hansen et al. 2004;Vader et al. 2003a). There is at present no standard nomenclature for CDrelevant gluten epitopes. Here, we propose such a nomenclature based on the following three criteria: 1. Reactivity against the epitope must have been defined by at least one specific T-cell clone. 2. The HLA-restriction element involved must have been unequivocally defined. 3. The nine-amino acid core of the epitope must have been defined either by an analysis with truncated peptides and/or HLA-binding with lysine scan of the epitope or comparable approach. Searching the literature using these criteria, we have compiled a list of epitopes (Table 2). This list includes sequences from α-gliadin, γ-gliadin, ω-gliadin, LMWand HMW-glutenins, hordeins, secalins and avenins. To note, most gluten-reactive T cells have minimal epitopes longer than those listed in Table 2. This is because MHC a In the epitope names, these short terms are used to denote the type of proteins that the epitopes derive from: 'glia-α' denotes α-gliadin, 'glia-γ' denotes γ-gliadin, 'glia-ω' denotes ω-gliadin, 'glut-L' denotes low molecular weight glutenin, 'glut-H' denotes high molecular weight glutenin, 'hor' denotes hordein, 'sec' denotes secalin and 'ave' denotes avenin b Glutamate residues (E) formed by TG2-medited deamidation which are important for recognition by T cells are shown in bold. 
Additional glutamine residues also targeted by TG2 are underlined class II restricted T-cell receptors usually are sensitive to a few residues flanking the nine-amino acid core region of the epitopes. In Table 2, some sequences that previously were defined as individual epitopes have been grouped together as members of the same family. This pertains to the DQ2.5-glia-α1a and DQ2.5-glia-α1b as well as the DQ2.5-glia-γ4a, DQ2.5glia-γ4b, DQ2.5-glia-γ4c and DQ2.5-glia-γ4d epitopes. The reason is that the sequences defining these epitopes are very similar, although there are occasionally T-cell clones that can distinguish between members of the same epitope family (Arentz-Hansen et al. 2002;Qiao et al. 2005). Most T-cell clones appear to cross-react between peptide sequences of the same family. The nine-amino acid core sequences of some of the DQ2.5 restricted epitopes are identical, but because these epitopes derive from different cereal species they are still listed as unique epitopes. This applies to the DQ2.5-glia-ω1, DQ2.5hor-1 and DQ2.5-sec-1 epitopes, as well as DQ2.5-hor-2 and DQ2.5-sec-2 epitopes. T-cell cross-reactivity between proteins of different species hence does often occur, but T cells reactive with these epitopes can also be species-specific as the T-cell receptors may be sensitive to unique residues flanking the nine-amino acid core region of the epitopes. In addition, there are epitopes, like DQ2.5-hor-3 (Tye-Din et al. 2010), that have sequences which are hordein or secalin specific and which elicit species specific T-cell responses. The majority of gluten-reactive T cells generated from DQ2.5 positive CD patients can recognize their epitopes when confronted in vitro with antigen presenting cells expressing the closely related HLA-DQ2.2 molecule (DQA1*02:01, DQB1*02:01). Yet, DQ2.2 positive CD patients do not mount a T-cell response to the same gluten epitopes, but rather have responses to gluten peptides that bind stably to DQ2.2 (Bodd et al. 2012). When defining an epitope, it is thus important to assess and categorize the epitope only in the context of the HLA molecules that are expressed by the T-cell donor. The selection of gluten T-cell epitopes is best understood in HLA-DQ2.5 positive CD patients, and is influenced by at least three factors: (a) resistance of the polypeptide sequence to proteolytic degradation, (b) specificity of TG2 and (c) HLA binding specificity. The proline-rich nature of gluten makes the gluten proteins resistant to proteolytic degradation in the gastrointestinal lumen, and long gluten peptide fragments ranging from 15 to 50 residues therefore survive in the small intestine (Shan et al. 2002). T-cell epitopes are often localized within such long fragments (Arentz-Hansen et al. 2002). The resulting proline and glutamine rich peptides are often good substrates for TG2 (Dørum et al. 2009(Dørum et al. , 2010Fleckenstein et al. 2002;Vader et al. 2002a). Proline is influencing the specificity of TG2 as the enzyme typically recognizes glutamine residues in glutamine-X-proline sequences (Fleckenstein et al. 2002;Vader et al. 2002a). T-cell epitopes in their native form are usually very good substrates for TG2. TG2, being an important factor in the selection of T-cell epitopes, is demonstrated by the fact that TG2 is selectively targeting peptides which harbor T-cell epitopes from a digest of gluten with extreme complexity (Dørum et al. 2010). Finally, determinant selection by MHC is influencing repertoire of T-cell epitopes. 
In general, gluten peptides are poor binders to HLA class II molecules with the exception of HLA-DQ molecules associated with CD (Bergseng et al. 2008). Some gluten peptides also bind HLA-DR53 (Clot et al. 1999), although so far no T cells of celiac lesions recognizing these peptides have been described. The selective force of HLA is illustrated by the observation that the DQ2.5 and DQ8 epitopes generally come from non-overlapping sequences of gluten proteins (Tollefsen et al. 2006). The glutamate introduced by TG2 is usually in position 4 (P4), P6 or P7 in HLA-DQ2.5 restricted epitopes and at position P1 and/or P9 in HLA-DQ8 restricted epitopes (Table 2). These glutamate residues serve as anchor residues important for binding of the peptides. Both HLA-DQ2.5 and DQ8 prefer negatively charged residues at these anchors sites. This positioning of deamidated glutamine residues is strongly related to the positioning of proline residues, which is particularly strict in the case of DQ2.5 epitopes, as DQ2.5 only accepts proline at certain position in the peptide binding groove (Kim et al. 2004). This results in a dominant presence of proline at P1, P6 and P8 and leads to the modification by TG2 of the glutamine residues at P4 and P6, respectively. Such positioning of proline residues is less strict in the case of the DQ8 epitopes. Although polyclonal T-cell responses to multiple T-cell epitopes are almost invariably found in CD patients, responses to the DQ2.5-glia-α1a, DQ2.5-glia-α1b, DQ2.5-glia-α2 epitopes, DQ2.5-glia-ω1, DQ2.5-glia-ω2, DQ2.5-hor-1 and DQ2-sec-1 are dominant in DQ2.5 positive patients (Arentz-Hansen et al. 2000;Camarca et al. 2009;Tollefsen et al. 2006;Tye-Din et al. 2010). In DQ8-positive patients, responses to the DQ8-glia-α1 epitope are most frequently found (Tollefsen et al. 2006;van de Wal et al. 1998b). The list of gluten epitopes recognized by T cells of CD patients presented in Table 2 will be extended as new epitopes become known in the future. A dedicated website (http://www.isscd.org/EpitopeNomenclature) will update this list as more epitopes are identified. Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
2022-11-23T15:08:33.485Z
2012-02-10T00:00:00.000
{ "year": 2012, "sha1": "7e91ca41a7ff2f866d69d6219bc8677b146e596d", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00251-012-0599-z.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "7e91ca41a7ff2f866d69d6219bc8677b146e596d", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
115585409
pes2o/s2orc
v3-fos-license
Regarding the Influence of Execution, Assembly and Functioning Errors on the Teeth Profile Modification of Spur Gear in Front Plane

The paper presents general aspects related to the influence of execution, assembly and functioning errors on the teeth profile modification of the spur gear in the front plane, in the case when the gear is fixed to the shaft in an asymmetrical position relative to the bearings. The numerical research was done with FEM.

1 General Considerations

The spur gears have an important place among all the gears used in industry. Until today, many investigations have been presented about the production of spur gears [1]. Every manufactured gear contains certain types and magnitudes of errors depending on the quality level imposed. Such errors often contribute to the loaded transmission error and affect the meshing dynamics of gears [2]. In industry, gears have important application areas in the transmission of motion. Many different manufacturing methods have been tried out other than conventional methods [3]. Some researchers [4] manufactured the gear wheel by computerizing the gear tooth profile. Ozek et al. investigated the possibility of manufacturing spur gears with CNC milling machines by the method of vertical machining [5]. Ozel [6] investigated the cutting errors in the tooth profile of spur gears manufactured by end mill according to the radial cutting method on a three-axis CNC milling machine. The results of his study showed that the cutting errors in the involute curve increase with increments of module, tooth number, pressure angle and cutting angle.

In the modern era, the demands on the operating conditions of gears (increasing efforts are being made to reduce the functional dimensions) as well as on environmental protection (noise and vibration reduction and improvement of medical-social conditions) have guided research in the field of gearing to a higher level. The realization of these requirements is possible with the help of the modification of the gear tooth profile, both in the frontal and longitudinal planes. This allows compensating the elastic deformations of the gear teeth and the execution and mounting errors, and localizing the contact patch. Nowadays, the design and manufacturing of gears is based on the computerised simulation of the meshing, using a Numerically Controlled Machine for manufacturing and a 3D Coordinate Measuring System (Machine) for control. Numerical simulation is a very effective method for simulating various processes in many domains. A multitude of problems in engineering has been solved by Finite Element Analysis [7]. Many authors used numerical simulations [7,8,9,10,11]. Nistor [8] investigated the influence of gear geometry, such as teeth number, and the deformation mechanism by 3D finite-element analysis using FORGE software, in terms of teeth forming and forming loads evolution. Based on these simulations, experimental investigations were carried out to obtain a spur gear form of good quality, using several billet dimensions.
Berri et al. [9] introduced a simulator of the rotational vibrations of a power transmission spur gear set with one cracked pinion tooth. The simulated outputs closely reproduced the experimental behaviours.

Material and Methods

Numerical research was carried out by applying FEM to determine the influence of execution, assembly and functioning errors on the correction depth of the gear tooth (flank or tooth head correction) for a gear mounted asymmetrically between bearings. The spur gear is considered to be made of alloy steel, 41MoCr10. The cylindrical gear teeth are mainly loaded in bending and contact pressure. The load was applied on the tips of the spur gear teeth to obtain the maximum elastic deformation in bending during operation (deformation which influences the depth value of the gear tooth head modification). The Finite Element Analysis algorithm started from the gear tooth profile that was subjected to profile changes (correction of the gear tooth head). The gear tooth profile was obtained using the MAIN.CPP program. The MAIN.CPP program was developed in [10]. In this software, the 3D meshed model of the gear was made (Figure 1), the boundary conditions were imposed, the forces were loaded, and the stresses and the corresponding elastic deformations were determined. The gear model was then exported as DXF files for the ALGOR finite element analysis software. The force loading was applied on the tip of the spur gear tooth as linearly distributed forces over a band of a certain width, corresponding to the contact area between the flanks, rather than as forces concentrated in points. The limitations of Algor's finite element program did not allow another representation. The elastic deformations occurring during operation under load were determined by finite element analysis for each individual gear case. Execution errors, in particular those related to the limit deviations of the pinion and gear pitches, were taken from the standards, and the assembly and execution errors were taken into account through the way the load was distributed over the tooth width of the gear.

Numerical Research with the Finite Element Method

A cylindrical spur gear was considered, with z = 30 teeth and module m = 4 mm, loaded with a 1500 N force linearly distributed over the teeth and mounted asymmetrically on a shaft supported by two bearings. In the case of the elastically deformed gear, asymmetrically fixed between bearings and loaded with linearly distributed forces, when bcal < b (the gear is lightly loaded and/or the total deviation of the tooth direction has a high value), the values of the elastic deformations of the gear teeth are shown in Table 1, and the elastically deformed gear-shaft model is shown in Figure 3. Analysing the values from Table 1, it can be observed that, for the case of the asymmetric position of the gear relative to the bearings, the maximum value was 5,606 μm, on the end portion of the gear on the side where the force that creates the torque is applied. The correction depth of the profile (Δa), determined with relation 1 [10], is 22,6 μm.
• fΣpb is the mean square deviation of the limit deviations of the pitches of the pinion and gear; it is calculated with:
For the bcal > b case, encountered when the gear is heavily loaded and/or the total deviation of the tooth direction has a low value, the gear-shaft model (designed for the asymmetric position of the gear relative to the bearings) is shown in an elastically deformed state in Figure 4.
Analysing these values, it results that the elastic deformations of the gear teeth, in the case of loading with linearly distributed forces, are lower for bcal > b. This means that, during operation, the elastic deformations are lower when the tooth direction deviation (due to execution or mounting errors) is small and/or the gear is heavily loaded.

The Load Distribution Calculation on the Tooth Width

The load distribution on the tooth width, when the force has a linear distribution, is illustrated in Figure 5, and the formulas required to calculate the distribution are presented in the following relations (3)-(9). The linear evolution of the force over the width b of the gear tooth can be described by a linear function, such as:

Fig. 5. The linear distribution of the forces, for the case b = bcal.

and equation (3) becomes: The force F, which is applied on the tooth, can be calculated by: Over the width b of the tooth, a number n of nodes is established, so (n-1) subintervals are obtained. The force corresponding to each subinterval can be calculated: The following recurrence formula was established: where k represents the subinterval number and Fk is the linearly variable force on subinterval k. If the case b > bcal or b < bcal, respectively, is considered, the linearly variable force on every subinterval is calculated as follows: where bcal = ib/(n-1) and i represents the number of subintervals on the tooth width. Table 3 presents the linearly distributed forces, calculated with the formulas presented above and applied at the nodes of the finite element mesh, within the FEM analyses performed for the considered cases.

Conclusions

The paper presents a method of determining the value of the modification parameters of the teeth profile for a cylindrical spur gear in the frontal plane. Finite element analysis was used to determine the elastic deformations during operation. The spur gear was considered to be asymmetrically positioned between the bearings, a case commonly encountered in practice. Execution and assembly errors have been highlighted by the load distribution over the tooth width of the spur gear (linearly distributed forces). Future research will be oriented towards studies establishing the following: the influence of the tooth width load distribution on the values of the parameters for gear teeth profile modification; and the influence of the gear position with respect to the roller bearings on the values of the parameters for gear teeth profile modification.
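To make the discretization procedure described in the load distribution section above concrete, the following Python sketch splits the tooth width b into (n-1) subintervals and assigns each node the resultant of a linearly varying line load whose total equals the applied force. The trapezoidal load shape, the end-intensity ratio and the numerical values (F = 1500 N, b = 40 mm, n = 11) are illustrative assumptions only; this is not a reproduction of the paper's relations (3)-(9).

# Illustrative sketch (not the paper's relations): discretize a linearly
# varying line load over the tooth width b into nodal forces for an FE model.

def nodal_forces(F_total, b, n, ratio=2.0):
    """Split a linearly varying line load of resultant F_total [N] acting over
    width b [mm] into forces at n equally spaced nodes.
    ratio = w(b)/w(0) is the assumed ratio between the load intensities at the
    two ends of the tooth (ratio = 1 gives a uniform load)."""
    # Intensity w(x) = w0 * (1 + (ratio - 1) * x / b); choose w0 so that the
    # integral of w over [0, b] equals F_total.
    w0 = 2.0 * F_total / (b * (1.0 + ratio))

    def w(x):
        return w0 * (1.0 + (ratio - 1.0) * x / b)

    dx = b / (n - 1)                       # subinterval length
    forces = []
    for k in range(n):
        # lump the load to node k by integrating w over the adjacent
        # half-intervals; the integral of a linear function over [a, c]
        # equals the average of its endpoint values times (c - a)
        x_left = max(0.0, (k - 0.5) * dx)
        x_right = min(b, (k + 0.5) * dx)
        forces.append(0.5 * (w(x_left) + w(x_right)) * (x_right - x_left))
    return forces

if __name__ == "__main__":
    Fk = nodal_forces(F_total=1500.0, b=40.0, n=11)   # assumed b and n
    print([round(f, 2) for f in Fk])
    print("sum =", round(sum(Fk), 2))                 # ~1500 N by construction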
2019-04-16T13:26:35.163Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "dadfd482f3f73cea206b898a279f31029c55b1b3", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/51/matecconf_mtem2017_01007.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "47ae0d916a7080827ffe5d84795ef5c8336c12ea", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
64656596
pes2o/s2orc
v3-fos-license
Design of a Personalized Massive Open Online Course Platform — Focusing on the massive open online course (MOOC) platform, the purpose of this study is to realize personalized adaptive learning according to the needs and abilities of each learner. To this end, the author created a personalized adaptive learning behaviour analysis model, and designed a personalized MOOC platform based on the model. Through the analysis of learning behaviours on the MOOC platform, the model digs deep into the pattern of learning behaviours, and lays the basis for personalized intervention in the learning process. The comparison experiments show that our prediction method is more accurate than the other prediction algorithms. This research sheds new light on the design of learner-specific MOOC platform. Introduction A massive open online course (MOOC) [1] is an online course aimed at unlimited participation and open access.First introduced in 2006, MOOC has developed into a novel and popular platform of distance learning in recent years [2][3][4]. The open access to content, structure and learning goals is an essential feature of early MOOCs [5], which promotes the reuse and remix of resources.With the proliferation of MOOCs, an increasingly number of new features continue to emerge.Despite the partial similarity in the learning data, the MOOC platform differs greatly from traditional classroom in behaviour collection and analysis.Unlike traditional classroom, MOOC platform can record various user operations and capture each submission from the user.Besides, the platform contains much greater details on user behaviours than traditional classroom.The gigantic amount of data makes it possible to implement the technology of big data analytics in the platform. The era of data-driven education has come, big data technology has a profound impact on education based on MOOCs.In MOOC platforms, teachers and administrators can use big data technology to inform teaching decisions, give grades, award credits and so on.And even big data analytics can predict and adjust student progress to inform both instructional and institutional choices. The big data-based learning analytics was defined as: "the measurement, collection, analysis and reporting of big data about learners…for purposes of understanding and optimizing learning and the environments in which it occurs" [6].This technology predicts the future performance of learners based on their learning progress, enables them to self-define the learning contents and select preferred courses, and notifies instructors in advance when learners need academic guidance.In this way, the learners can unlock their full learning potential and have a good command of knowledge. This paper proposes a personalized adaptive online learning analysis model for MOOCs.The model mainly analyses the big data generated by the learner operations on MOOC platform, and provides effective guidance on the teaching and learning on the platform. The remainder of this paper is organized as follows.OVERVIEW OF MOOCS briefly introduces MOOCs; LITERATURE REVIEW reviews the related studies; PERSONALIZED MOOC PLATFORM BASED ON BIG DATA ANALYTICS puts forward the PAOLA model based on big data analytics; EXPERIMENTS AND ANALYSIS carries out several experiments and analyses the experimental results; CONCLUSIONS wraps up this research with some meaningful conclusions. 
Overview of MOOCS The concept of MOOC was invented by Stephen Downes, as an online course aimed at unlimited participation and open access via the web [7].The first MOOCs emerged from the open educational resources movement in 2008.With the development of MOOCs, there appear to be two distinct types: cMOOC and Xmooc [8].Following the connectivist philosophy, cMOOCs require that teaching resources must be remixable and re-purposable.The basic methods of cMOOCs attempts to connect learners to each other to complete the learning process.By contrast, xMOOCs have a much more traditional course structure.They employ elements of the original MOOC.The instructor is the knowledge provider and problem solver, but learner interactions are usually limited to asking for assistance and advising each other on difficult points. The timeline of MOOCs is shown in Figure 1. Instructor, learner, course, resource and context are five essential factors of a MOOC system [9].The instructor needs to simplify the learning process and give guidance to the learner; the learner should study the course by logging in the MOOC platform; the course is the externalization of the resource and the context on the platform; the resource of the platform can be accessed via multiple education methods; the context represents such components of the platform as online social networks, IT solutions, communication systems, and so on.Through the integration of the above five factors, MOOCs can support the learning of a large community, allowing the students to acquire knowledge anytime, anywhere [10]. Compared to traditional distance education, MOOCs mainly have the following advantages [11].First, MOOCs help the students replace passive reception of instruc-tions with autonomous learning; second, MOOCs encourage cooperation and teamwork, which are not highlighted in traditional distance education. Over the years, MOOCs have led to numerous changes to higher education [12].They can realize team-based course design.The construction of MOOCs requires the collaboration of all faculty members, and the support from designers, software developers, teaching researchers, librarians and videographers.They can focus on teaching process.MOOCs shine a light on teaching and learning in universities.They offer a proving ground to proving ground, a viable strategy to promote active learning in traditional courses.They can give space for innovation.The supporting structures of MOOCs provide an ideal space for innovation in teaching and learning, and give birth to thoughtful and bold ideas for higher education in a digital era. Literature Review The rapid development of MOOCs creates a perfect environment for the integration of big data and teaching research.The lifecycle of the big data generated from MOOCs is illustrated in Figure 2. Some MOOC data are the same with those generated in traditional classrooms, namely, teaching resources, enrolment information, and test scores.The major difference between MOOCs and traditional classroom teaching lies in the behavioural data of learners.The MOOC platform can capture and record mouse clicks, video controls and even all submissions to the platform.The application of big data analytics to these behavioural data opens up an effective way to enhance the learning/teaching efficiency and effectiveness. In the past, learner behavioural data have been relied on to determine the influencing factors on the drop-out rate of MOOCs.For example, Thille et al. 
developed a personalized content delivery method based on the average time-on-page of learners, seeking to identify different cohorts of learners [13]. Besides, many strategies have been developed to tackle the prediction issue.Eriksson et al. proposed a method that can predict the performance and drop-out probability of learners in MOOC-based learning [14].Similarly, Hughes et al. constructed a reliable model to forecast the early drop-outs [20].Romero et al. predicted GPA with a regression analysis algorithm.Based on big data analytics [21], Polyzou et al. set up grade prediction models for specific students and courses [15].These models are applicable to pre-registration grade prediction and in-class grade prediction.Elbadrawy et al. projected the next-term grades of students after integrating new performance prediction techniques into recommender systems [16].To sum up, the final results of MOOC-based learning rely heavily on theoretical framework, research design and analysis methods. Much research has also been done on personalized learning behaviours and functions of the MOOC platform.Through scientific means, Ren et al. compared the personalized learning features of several well-known MOOC platforms [17].Marin et al. developed a personalized feedback mechanism for learners to reflect on their understanding of MOOC courses [18].Focusing on several specific issues, Zeide evaluated big data-driven learning environments, and advised educators and policymakers to consider the explicit implications of data-driven education [19]. In spite of the multi-angle research on MOOC-based learning, the previous studies fail to provide personalized adaptive learning according to the needs and abilities of each learner. Therefore, this paper explores the learning process, learner behaviours and learning rules through big data learning analysis model, so that the MOOC platform can automatically recommend reasonable learning path and learning resources to each learner. Personalized Mooc Platform Based on Big Data Analytics This section aims to design a personalized MOOC platform that provides timely and accurate feedbacks to learners.The platform should automatically adjust the learning path, provide learners with adaptive resources, and enable instructors to give personalized guidance according to the learning behaviours and needs of each learner.To this end, the collaborative filtering was introduced to push the learning information of a user to those with the same or similar interests. Personalized adaptive learning behaviour analysis model The basic data flow of our personalized adaptive learning behaviour analysis model is shown in Figure 3. 
The proposed model consists of six components. In Figure 3, (1) stands for the content presentation and delivery component, which interactively delivers personalized contents to learners and evaluates their learning performance; (2) refers to the big data repository of student learning, which stores the learner inputs and behaviours captured during their learning on the MOOC platform; (3) represents the future behaviour/performance prediction model, which relies on the learning and behavioural data from the big data repository; (4) symbolizes a function that provides learners with visible feedback based on the predicted results; (5) indicates the adaptation engine that adjusts the content presentation based on the predicted results, and delivers resources depending on learner-specific performance and interests; (6) expresses the intervention engine, which improves the learning process by instructor intervention in the automated platform.

The prediction model is obviously the centrepiece of our model. Two methods were investigated for the establishment of the model. In the first method, the model is created by a linear regression algorithm, with learner data and related features being the predictor variables. In the second method, the model is built up based on a matrix factorization algorithm, aiming to find a low-dimensional space that represents both the learner and the content.

In this research, the personalized linear regression model employs a linear combination of m learner-specific regression models. The predicted value s_ij for learner i in course j can be expressed as

s_ij = μ + b_i + b_j + a_i^T U x_ij,

where μ is the global bias value; b_i is a bias term for learner i; b_j is a bias term for course j; a_i is the m × 1 vector for learner i; U is the m × d coefficient matrix (d being the number of features); and x_ij is the feature vector associated with learner i and course j. The two bias terms reflect the average score of a learner in the past and the average score of a course in the past. The information in x_ij includes the factors related to learners and those related to courses.

In our matrix factorization algorithm, each learner i and each course j are respectively described by k-dimensional feature vectors p_i and q_j. The formula below predicts the score of learner i in course j:

s_ij = p_i^T q_j.

Design of personalized MOOC platform

Figure 4 shows a prototype of the personalized MOOC platform. In our personalized MOOC platform, learners can receive feedback on their future choices and activities during the learning process. The next activities are recommended on the learner dashboard (Figure 5). Apart from the recommended activities, the learner dashboard also compares the learning performance of a learner with that of other learners. In accordance with the comparison, the learner can adjust the learning path and content on the platform.

The instructor dashboard lists the performance of each learner (Figure 6). Based on the centralized display of learner performance of the whole class, instructors can adjust their teaching behaviours and the learning pace of each learner.

The administrator dashboard lists the detailed data on different classes (Figure 7). The information on the administrator dashboard explains the effect of a particular policy on learning performance. In view of this information, administrators can improve the teaching and learning policies.
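To illustrate the two scoring functions just described, here is a minimal Python sketch of the prediction step only (training is omitted). The array shapes, symbol names and the use of NumPy are assumptions made for the illustration; the paper does not specify its implementation.

import numpy as np

def regression_score(mu, b_learner, b_course, A, U, x, i, j):
    """Personalized linear regression: mu + b_i + b_j + a_i^T U x_ij.
    A: (num_learners, m) per-learner combination weights
    U: (m, d) shared regression models, one per row
    x: (d,) feature vector for the (learner i, course j) pair."""
    return mu + b_learner[i] + b_course[j] + A[i] @ (U @ x)

def mf_score(P, Q, i, j):
    """Matrix factorization: inner product of the k-dimensional latent vectors."""
    return P[i] @ Q[j]

# Toy usage with random parameters, only to show the shapes involved.
rng = np.random.default_rng(0)
num_learners, num_courses, m, d, k = 5, 4, 3, 6, 8
mu = 70.0
b_learner = rng.normal(0, 5, num_learners)
b_course = rng.normal(0, 5, num_courses)
A = rng.normal(size=(num_learners, m))
U = rng.normal(size=(m, d))
x_ij = rng.normal(size=d)
P = rng.normal(size=(num_learners, k))
Q = rng.normal(size=(num_courses, k))

print(regression_score(mu, b_learner, b_course, A, U, x_ij, i=2, j=1))
print(mf_score(P, Q, i=2, j=1))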
Experiments and Analysis

To verify the effect of our model, the related data were gathered from a personalized MOOC platform deployed in Qingdao University. Firstly, our matrix factorization (MF) algorithm was contrasted with several commonly used prediction algorithms. Table 1 lists the predicted results on the personalized MOOC data by these algorithms. From Table 1, it can be seen that our MF algorithm had the lowest error in prediction. For the given density of the learner-course score matrix, the MF algorithm outperforms the course-specific regression (CSpR) algorithm and the course-specific matrix factorization (CSpMF) algorithm. Figure 9 presents the predicted results on the personalized MOOC data by these algorithms. Secondly, the author tested the influence of cold start on the MF, random forest (RF) and polarimetric L-band multibeam radiometer (PLMR) algorithms. The predicted results of these algorithms for cold start (CS) and non-cold start (NCS) records are given in Table 2. Table 2 shows that the MF algorithm is the one least affected by the cold start problem.

Next, the author evaluated the performance of the personalized MOOC platform. During the construction of the platform, the MapReduce-based structure was introduced with different numbers of parallel nodes. The introduction may affect the speedup of big data analytics. Through simulation, the relationship between the number of login learners and platform speedup is shown in Figure 10. As shown in Figure 10, the advantage of MapReduce was not fully displayed at a small number of login learners. However, the speedup rose sharply as that number increased continuously. Hence, a large number of nodes is conducive to the performance of big data analytics.

Conclusions

As data collection and mining to feed learning analytics become popular, learning analytics will influence more and more of MOOC education. The data created by each learner is part of the "big data", and each learner is a producer and a consumer of big data. With the support of big data analysis, there have been many studies on content push and quality analysis of learning resources on MOOC platforms. Most of the existing studies on MOOC data analytics have focused on predicting the drop-out rate or the learner performance, and overlooked the practical use of a personalized MOOC platform. To make up for this gap, this paper develops a personalized adaptive learning behaviour analysis model, and designs a personalized MOOC platform based on the model. The model digs deep into learning behaviours, and reveals the relationship between learning behaviours and implicit data. Our work helps to grasp the essence of the learning process and implement personalized teaching. Through the big data analysis model, we can explore the learning process of learners, discover learning rules, and provide personalized adaptive learning according to the needs and capabilities of every student. In the future, the author will examine even more features of learning behaviours, and design suitable learning materials for each learner.

Figure and table captions:
Fig. 2. The lifecycle of big data generated from MOOCs
Fig. 3. The basic data flow of the personalized adaptive learning behaviour analysis model
Fig. 5. The learner dashboard showing the detailed learning information
Fig. 6. The instructor dashboard showing the performance of each learner
Fig. 7. The administrator dashboard showing the detailed data on different classes
Fig. 9. Predicted results of three algorithms
Fig. 10. The relationship between the number of login learners and platform speedup
Table 1. Predicted results of several algorithms
Table 2. Predicted results comparison of three algorithms for cold start and non-cold start records
2019-02-17T14:19:45.538Z
2018-03-30T00:00:00.000
{ "year": 2018, "sha1": "a4789937e66bcaa88c0b04bb55794d4e07f09950", "oa_license": "CCBY", "oa_url": "https://online-journals.org/index.php/i-jet/article/download/8470/4880", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cf9690d88ba61f2dcff6e8a2f505cc3671b08e07", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
246196061
pes2o/s2orc
v3-fos-license
Reproductive Health Experiences of Females Diagnosed with Young-Onset Colorectal Cancer: A Multi-Method Cross-Sectional Survey Objective: Given the increasing risk of young-onset colorectal cancer (yCRC) among adults under 50 years, it is important to understand impacts on reproductive health. Our objective was to assess experiences with reproductive health after yCRC diagnosis among females. Methods: We conducted a cross-sectional study among females, 18 years or older, who have been diagnosed yCRC and are able to communicate in English. Data were gathered using an online survey involving both quantitative (e.g., multiple choice) and qualitative (e.g., open-ended text) questions on pregnancy history, influence of yCRC on reproductive decisions, and experiences with reproductive healthcare. Results: Altogether, 101 females with yCRC participated, including 23 who had never been pregnant and 78 who had been pregnant. yCRC influenced family planning goals for one-third of participants. Furthermore, compared to participants who completed treatment, those currently undergoing treatment had higher odds of indicating their yCRC diagnosis influenced family planning goals (adjusted odds ratio 4.93; 95% confidence interval 1.29 to 18.78). Although 53 (52.5%) participants indicated having discussions regarding reproductive health with healthcare provider(s), 44 (43.6%) did not. Content analysis of open-ended survey questions identified themes on the emotional impacts, experiences with reproductive healthcare, reproductive and family planning considerations, and the related issue of sexual health impacts of yCRC. Conclusions: Gaps in care, related to limited reproductive health discussions, influence of yCRC on family planning, and experiencing lasting reproductive health impacts highlight the need for improving reproductive healthcare, particularly for females diagnosed with yCRC. Introduction Reproductive years among females range from 15 to 49 years of age [1] and in the past 50 years, the average age for a first pregnancy has increased from 24 years of age in the 1970's [2] to 29.2 years of age in 2016 [3]. Since the likelihood of cancer increases with age, females today are more likely to be diagnosed with cancer during their reproductive years. Although the incidence of cancer during pregnancy is low (0.07-0.1%), cancer is a leading cause of death among females of childbearing age (15-34 years of age) [4,5]. Colorectal cancer (CRC) has historically been considered a disease of older persons. However, growing evidence on the increasing incidence of young-onset CRC (yCRC) in individuals below 50 years of age [6][7][8][9][10] including a 2020 worldwide systematic review that estimated a pooled annual percent change in incidence (APCi) for yCRC of 1.33% (95% confidence interval (95% CI), 0.97 to 1.68) [11]. Given the increasing risk of yCRC and its estimated occurrence in two out of every 100,000 pregnancies [12], it is important to consider implications among younger adults, particularly impacts on reproductive health, which may span fertility, pregnancy, and early menopause. Prior research on yCRC and reproductive health is limited and has largely focused on clinical aspects during pregnancy and difficulties of diagnosing yCRC given that symptoms tend to be similar to those experienced in normal pregnancies (e.g., nausea, vomiting, constipation, and anemia) [4,13,14]. 
Understanding of reproductive health impacts, particularly on family planning decisions, of yCRC among women who experienced pregnancy around the time of their yCRC diagnosis (e.g., before or after) as well as those who have never been pregnant are lacking. In general, cancer patients who are unaware of post-treatment fertility options experience more conflict regarding future family planning decisions [15]. There also tends to be discrepancies between what patients and healthcare providers find to be helpful when discussing reproductive health [16]. These discrepancies between patient and healthcare provider expectations around reproductive health can impact the care received [17]. As these prior studies have not been conducted among those with yCRC, it remains unclear whether women are having discussions about the reproductive health implications of yCRC with their healthcare providers. To address these gaps, our objectives were to: (1) characterize reproductive health outcomes of women after yCRC diagnosis according to pregnancy history (e.g., never been pregnant, have been pregnant); (2) determine how yCRC diagnosis influences family planning decisions; and (3) assess experiences with reproductive healthcare, including discussions with healthcare provider(s). Participant Recruitment We invited females who are 18 years or older, have received a diagnosis of yCRC (before the age of 50 years), and are able to communicate in English. To recruit participants, we used the authors' and their affiliated institutions' social media channels (e.g., Twitter, Facebook, and Instagram) as well as drawing from a list of individuals who had participated in previous CRC-related research conducted by our research team and indicated their consent to be notified of future studies [18][19][20][21]. Individuals were directed to a study website, designed using Qualtrics, an online survey platform, which included information about the objective of the study, eligibility criteria, and what participation involves. Individuals who consented to participate were directed to the study survey; individuals who do not wish to participate were asked to close their browser. Study Design and Data Collection We conducted a cross-sectional study and administered an online health survey which included 12 pages and consisted of four sections (Supplementary Figure S1). The first section on yCRC characteristics comprised nine quantitative questions, with multiple choice or drop-down response formats: on type of cancer (e.g., 'colon', 'rectal', 'both sites'), age at diagnosis, date of diagnosis, stage, symptoms (e.g., 'blood in the stool', 'diarrhea', 'constipation'), treatments received (e.g., 'radiation', 'surgery', 'chemotherapy'), and treatment status ('completed' or 'in treatment'). The second section covered questions on pregnancy history (e.g., 'never been pregnant,' 'have been pregnant'), when the pregnancy occurred in relation to yCRC diagnosis (e.g., 'before', 'after'), estimated start of pregnancy, and whether pregnancy resulted in a live birth. This section also included questions on reproductive decisions, namely on family planning, such as whether yCRC diagnosis influenced future reproductive decisions (e.g., 'yes', 'no'), whether they considered having children after yCRC diagnosis, and, if so, family building options considered (e.g., 'childbearing', 'adoption', 'surrogacy', 'assisted reproductive technology '). 
An open-ended question provided participants the opportunity to indicate how their yCRC influenced or changed their reproductive decisions. The third section included seven questions on reproductive healthcare in relation to their yCRC diagnosis. Questions included whether they received discussion regarding reproductive health during treatment for yCRC (e.g., 'yes', 'no'), who initiated the discussion (e.g., 'myself', 'healthcare provider'), and the type(s) of healthcare provider(s) involved in the discussion (e.g., 'oncologist', 'surgeon'). This section also included questions on reproductive health topics covered (e.g., 'hormone replacement', 'menopause') and resources provided (e.g., 'pamphlets'). The final question in this section was open-ended and invited participants to share, in a textbox, anything else they wished they were told about reproductive health and yCRC. The fourth section on demographic information consisted of six questions, including country of residence, current age, marital status, level of education, ethnicity, and area of living (e.g., 'urban', 'rural'). Statistical Analysis We analyzed close-ended quantitative questions by calculating descriptive statistics to characterize participants and summarize their responses. Given potential differences in experiences according to history of pregnancy, we grouped participants accordingly (e.g., 'never been pregnant', 'have been pregnant') and compared characteristics and outcomes using Chi-square tests. In exploratory analyses, we evaluated the impact of yCRC diagnosis on reproductive decisions by creating a binary categorical outcome representing participants' responses on whether their yCRC diagnosis influenced family planning decisions ('yes', 'no') and evaluated determinants using multiple logistic regression. Potential determinants represent variables we surveyed on such as: age at yCRC diagnosis, treatment status (in treatment versus completed treatment), and education (less than college versus college or more). To support quantitative analyses, we used Microsoft Excel 2016, Qualtrics XM Stats iQ, and SPSS. Qualitative Analysis We exported all narrative responses to open-ended qualitative questions into NVivo 12 (QSR International), which was used to categorize our analysis. We applied descriptive content analysis [22], following three coding steps of initial open coding (assigning concepts to phrases and sections of the text responses), sorting and organization into categories, and construction into themes. Initial coding was conducted by the first author (LA). Subsequently, three study authors (LA, NR, and MDV) were involved in the formation of categories and the themes and collaborated to reach consensus for the final reporting. Ethical Approval and Consent We obtained ethical approval from the University of British Columbia Behavioural Research Ethics Board. Data were collected and stored securely using an online survey platform, a survey platform that is compliant with the British Columbia (BC) Freedom of Information and Protection of Privacy Act and meets institutional and jurisdictional privacy requirements. Participants provided informed consent prior to participating in the survey and their confidentiality was maintained throughout the study. Results Between 20 March 2020 and 22 April 2020, 121 individuals accessed the survey. We excluded 20 records with incomplete responses. Among 101 participants included, 23 (22.8%) had never been pregnant and 78 (77.2%) had been pregnant at least once ( Table 1). 
As shown in Table 2, characteristics of yCRC were similar between groups according to pregnancy history. For women who had never been pregnant, colon and rectal cancer were equally diagnosed (11, 47.8% for both) and the most commonly reported symptoms were blood in the stool (15, 65.2%), gas/cramps/feeling bloated (10, 43.5%), narrow stool (9, 39.1%), and weakness/fatigue (9, 39.1%). For women who had been pregnant, 47 (60.3%) were diagnosed with colon cancer with the most commonly reported symptoms being blood in the stool (53, 67.9%), gas/cramps/feeling bloated (39, 50%), and narrow stool (33, 42.3%). Of note, we observed a higher proportion of women who had never been pregnant report bowel obstruction as a symptom as compared to those who had been pregnant (34.8% vs. 11.5%, Chi-square p-value = 0.009).

Experiences with Reproductive Health after yCRC Diagnosis-Quantitative Responses

As shown in Figure 1, among participants that had been pregnant, 73 (93.6%) were pregnant before yCRC diagnosis, three (3.8%) were pregnant after yCRC diagnosis, and two (2.6%) experienced pregnancies before and after yCRC diagnosis. Of the 75 pregnancies that occurred before yCRC diagnosis, 74 (98.7%) resulted in live births. With respect to timing of yCRC diagnosis and pregnancy, six participants had their diagnosis within 12 months, six within 12-24 months, and 63 greater than 24 months from the start of pregnancy. Of the five pregnancies that occurred after yCRC diagnosis, two (40%) resulted in a live birth. With respect to timing of yCRC diagnosis and start date of pregnancy, one participant became pregnant within 12 months of their yCRC diagnosis, one within 24 months, and one at greater than 24 months.

With respect to the impacts of yCRC on reproductive decisions, one-third of participants (34, 33.7%) indicated that their yCRC diagnosis influenced their family planning goals, including nine (39.1%) of those who had never been pregnant and 25 (32.1%) of those who had been pregnant. As shown in Table 3, age at yCRC diagnosis was a predictor, particularly among participants diagnosed at between 20 and 29 years (adjusted odds ratio (aOR), 22.73; 95% confidence interval (CI), 3.53 to 146.39) and 30 to 39 years (aOR, 21.94; 95% CI, 5.59 to 86.18) as compared to those diagnosed at between 40 and 49 years. Participants currently undergoing treatment were more likely to indicate that their yCRC diagnosis influenced their reproductive decisions as compared to those who had completed treatment (aOR, 4.93; 95% CI, 1.29 to 18.78).
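The adjusted odds ratios reported above come from the multiple logistic regression described in the statistical analysis section. The sketch below shows, in Python with pandas and statsmodels, how such a model is typically fitted and how odds ratios with 95% confidence intervals are derived from the coefficients; the variable names and the toy data frame are hypothetical, and this is not the authors' analysis code (the paper states the quantitative analyses were run in Microsoft Excel, Qualtrics Stats iQ and SPSS).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame mirroring the variables described in the paper:
# outcome = whether the yCRC diagnosis influenced family planning goals.
df = pd.DataFrame({
    "influenced_fp": np.random.default_rng(1).integers(0, 2, 101),
    "age_group": np.random.default_rng(2).choice(["20-29", "30-39", "40-49"], 101),
    "in_treatment": np.random.default_rng(3).integers(0, 2, 101),
    "college_or_more": np.random.default_rng(4).integers(0, 2, 101),
})

# Multiple logistic regression with 40-49 years as the reference category.
model = smf.logit(
    "influenced_fp ~ C(age_group, Treatment(reference='40-49'))"
    " + in_treatment + college_or_more",
    data=df,
).fit(disp=False)

# Adjusted odds ratios and 95% confidence intervals from the coefficients.
params = model.params
conf = model.conf_int()
odds = pd.DataFrame({
    "aOR": np.exp(params),
    "CI 2.5%": np.exp(conf[0]),
    "CI 97.5%": np.exp(conf[1]),
})
print(odds.round(2))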
When asked whether they had considered having children after their yCRC diagnosis, one-fifth (20,19.8%) of participants indicated that this remains of interest, including options of childbearing, surrogacy, and adoption. Of note, we found that more participants who had never been pregnant than those who had been pregnant indicated that they were considering surrogacy (83.3% versus 28.6%, Chi-square p-value = 0.01) and adoption (50.0% versus 14.3%, Chi-square p-value = 0.04). With respect to having discussions regarding reproductive health, 53 (52.5%) participants indicated having discussions with healthcare provider(s) after being diagnosed with yCRC, 44 (43.6%) did not, and 2 (4.0%) were unsure if these discussions occurred ( Figure 2). Among those that had discussions, the majority (36, 67.9%) indicated that they were initiated by healthcare provider(s) and nearly a quarter (24.5%) of participants indicated they were initiated by themselves. The most frequently discussed topics among participants who had never been pregnant was embryo/egg freezing (61.5%) followed by sexual activity (46.2%) and menopause (38.5%). Among participants who had been pregnant, the most frequently discussed topics were menopause (57.5%), sexual activity (50%), and intimacy (25%). The healthcare providers that most frequently discussed reproductive health after diagnosis were medical oncologists (84.6% among participants who had never been pregnant; 50% among those who had been pregnant). Among participants who had discussions regarding reproductive health, 57.7% indicated that they were not provided with any additional resources (e.g., brochures or websites). Experiences with Reproductive Health after yCRC Diagnosis-Qualitative Responses Finally, 70 participants provided narrative responses to open-ended questions regarding reproductive health experiences in relation to their yCRC diagnosis. The average response length was 44 ± 11 words. Descriptive content analyses identified four themes. The first theme on the emotional impacts of yCRC reflects how the psychological and reproductive burden of a cancer diagnosis are closely related. This theme includes three categories of: (1) processing yCRC diagnosis; (2) worries and fears; and (3) coping strategies. The second theme, experiences with reproductive healthcare after yCRC diagnosis, encompassed whether participants had a discussion with healthcare providers or not and the resources they were offered. We noted a range of experiences including from those who (1) did not have a discussion with healthcare providers ("no discussion was ever had") as well those who (2) received a referral for reproductive healthcare or (3) had a discussion with healthcare providers. A category within this theme, (4) online resources, also captured experiences with information and support gained from these sources ("found online forums a huge help in this area"). The third theme, reproductive and family planning considerations with yCRC diagnosis, touched on reasons women chose not to have children after diagnosis and the impact yCRC diagnosis had on family planning. This theme consisted of four categories: (1) reproductive and pregnancy history; (2) impact of yCRC on family planning; (3) role of genetic testing ("my health has pushed me towards not having children"); and (4) fertility. 
The fourth theme revealed a related aspect of sexual health impacts of yCRC, capturing areas where yCRC and treatments had devastated participants, specifically, (1) vaginal side effects ("damage to vagina") and (2) intimacy and intercourse ("difficult sex"). Impacts tended to be issues that participants did not anticipate and were not discussed with healthcare providers prior to participants experiencing them. Table 4 provides a summary along with representative quotes. Discussion In this cross-sectional study, we characterized the reproductive health experiences of 101 women who were diagnosed with yCRC, including reproductive decisions and the reproductive healthcare received, taking into consideration pregnancy history where relevant. Indeed, a diagnosis of yCRC is associated with considerable reproductive health burden, with one-third of participants (34, 33.7%) indicating that their diagnosis influenced their family planning goals. Although just over half (52.5%) of participants indicated having discussions regarding reproductive health with healthcare provider(s), 43.6% did not and a further 4.0% were unsure. Providing further context to quantitative responses, qualitative responses highlighted care and information gaps ("I wish a longer discussion and basic overview of my reproductive health was done").
Evidence in recent years on the increasing risk of yCRC has mobilized efforts into not only understanding and addressing this risk but also identifying areas to support patients and improve both short-and long-term outcomes [23][24][25][26]. Indeed, treatments for yCRC, including pelvic surgery, radiotherapy, as well as chemotherapy, have detrimental effects on reproductive health outcomes [27], necessitating investigations on impacts among patients. Survey questions regarding having pregnancies and corresponding timing related to yCRC diagnosis allowed us to characterize these experiences, which had not been achieved in prior studies. Indeed, with an estimated incidence of CRC during 2 of every 100,000 pregnancies [12], yCRC is among the cancers with negative implications on pregnancy. Even so, prior research on yCRC and pregnancy is limited and has largely focused on clinical aspects during pregnancy such as treatment options, maternal and fetal outcomes, and effects of treatment on reproduction [12,28] as well as difficulties of diagnosing CRC during pregnancy given the similarity to symptoms experienced in normal pregnancy (e.g., nausea, vomiting, constipation, and anemia) [4,13,14]. A study on specific and non-specific symptoms of CRC by Rasmussen et al. found that women of 20-39 years of age experienced the following symptoms in association with CRC: abdominal pain (38.0%), blood in stool (7.7%), diarrhea (19.9%), constipation (23.5%), and tiredness (73.4%) [29]. We were also interested in surveying participant symptoms as common symptoms of yCRC are also typically experienced post-partum. Aside from confirming prior findings by Rasmussen et al. particularly among participants who had been pregnant, our study provides new insights on symptoms experienced by women diagnosed with yCRC who had never been pregnant. Particularly, we observed that bowel obstruction was reported at a higher frequency among those who had never been pregnant compared to those who had been pregnant (34.8% vs. 11.5%; Chi-square p-value = 0.009). Nonetheless, we did not observe any other differences between participants who had never been pregnant and who had been pregnant with respect to yCRC characteristics (CRC type, stage) or treatments. A concern highlighted in our study was the impact of yCRC on reproductive decisions, with a third of participants indicating that their yCRC diagnosis influenced their family planning goals, including 39.1% who had never been pregnant and 32.1% who had been pregnant. For many participants, this meant a change from wanting children to deciding not to have children. Drawing insights from qualitative analyses of open-ended survey responses allowed us to identify reasons participants chose not to have children after yCRC diagnosis. Aside from the direct impacts of yCRC and treatments ("After so much surgery and invasion of my body the thought of giving birth or being pregnant makes me feel dread rather than any sense of excitement"), genetic testing was a factor for changing their family goals for 11.4% of participants. Concerns about recurrence due to genetic predisposition prompted some participants to choose not to have children. For others, the fear of passing on genetic predisposition for CRC determined decisions not to have children. Another concern identified in our study is a large proportion of participants, 43.6%, indicating that they did not engage in a discussion about reproductive health while undergoing treatment for yCRC. 
Our qualitative analyses of responses to open-ended questions provide insights into these findings, as they were designed to allow participants to indicate what topics are most important to them and bring to light issues and concerns that they may have, such as impacts of CRC treatments on fertility and resultant early menopause. Qualitative analyses also revealed the importance of the related issue of sexual health impacts of yCRC, with participants sharing issues they did not anticipate or that were often not discussed with healthcare providers, namely vaginal side effects, intimacy, and intercourse (mentioned in 11.4%, 7.1%, and 7.1% of text responses, respectively). These concerns, as described by Schover in a 2007 report on reproductive issues after cancer, are common concerns for patients of young-onset cancers [30] and are now demonstrated in our current study for women with yCRC. It is important to note that previous studies on patient-healthcare provider discussions on reproductive health have, in general, found discrepancies between expectations. In their 2016 study of 346 young adult female cancer survivors including 27 with yCRC, Benedict et al. found that 35% of study participants did not believe they had enough support to make reproductive decisions [15]. They also found that conflict around decisions was associated with higher unmet fertility information needs. Lack of support for cancer patients' informational needs likely comes from a disconnect between patients and healthcare providers on what information is useful. Canzona et al. found that there was a divide between patients and healthcare providers in what constituted helpful communication, with patients indicating that they preferred direct recommendations, verbally acknowledged distress, and not being questioned on the importance of their concerns [16]. The strengths and limitations of our study warrant discussion. A strength of our study was the multi-method approach whereby quantitative findings were supported and/or further contextualized by qualitative findings. However, as our study was administered as an online survey, this may have prevented individuals with limited access to the internet from participating. Because participants chose whether or not to complete the survey, we have no knowledge about individuals who chose not to access the survey or who ended the survey before completing it. Although all 101 participants provided responses to the quantitative questions in the survey, 31 did not provide responses to the qualitative questions and we were not able to gather information on reasons for these non-responses. In addition, by allowing self-reporting through a survey, there may be bias and reduced accuracy in the participants' responses. On the other hand, given the sensitivity of the topic (i.e., reproductive and sexual health and family planning), it is likely that this method of gathering confidential information provided the needed privacy to allow participants to share their experiences in a way that would have been attenuated with face-to-face data collection. Among participants who had been pregnant, the majority of pregnancies occurred >24 months before yCRC diagnosis. As well, the distribution of participants' current age suggests that many received their yCRC diagnosis more than 10 years ago. Taken together, there may be potential limitations with participants' recall.
We also advise that caution be taken in the interpretation of some of the quantitative findings, particularly with respect to the influence of age at yCRC diagnosis on family planning, given that the study sample size may have resulted in imprecise estimates. Finally, although our sample consisted of predominantly white participants (90.9% of those never having been pregnant; 80.5% of those having been pregnant) with a postsecondary education or higher (100% of those never having been pregnant; 93.6% of those having been pregnant) and we limited inclusion to English-speaking participants, we identified important gaps in their reproductive and sexual health care. It is critical that further studies examine the experience of patients with historically marginalized racial, gender, and sexual identities. Conclusions In conclusion, through a multi-method approach our findings provide a better understanding of the reproductive health experiences of women diagnosed with yCRC. The influence of yCRC on future family planning decisions, along with gaps in care, particularly related to limited reproductive care discussions for a number of participants, highlights the need for improving the reproductive health standard of care for women with yCRC.
2022-01-23T16:34:29.958Z
2022-01-21T00:00:00.000
{ "year": 2022, "sha1": "02fafa06a9d33a98ebb690548b3b41a5cbcdbd4e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1718-7729/29/2/42/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dfeeb75570a0be162f724c63228505f28c6744f3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247627680
pes2o/s2orc
v3-fos-license
Revisiting the Effects of Leakage on Dependency Parsing Recent work by Søgaard (2020) showed that, treebank size aside, overlap between training and test graphs (termed leakage) explains more of the observed variation in dependency parsing performance than other explanations. In this work we revisit this claim, testing it on more models and languages. We find that it only holds for zero-shot cross-lingual settings. We then propose a more fine-grained measure of such leakage which, unlike the original measure, not only explains but also correlates with observed performance variation. Code and data are available here: https://github.com/miriamwanner/reu-nlp-project Introduction Syntactic parsing has long been one of the core natural language processing (NLP) tasks, and the proliferation of the Universal Dependencies project (UD; de Marneffe et al., 2021; Nivre et al., 2017) has allowed the development and comparison of monolingual and multilingual models under the same syntactic framework. The performance of the dependency parsers, however, varies wildly across languages, with state-of-the-art performance ranging from labeled attachment scores below 20 (e.g. for Amharic, Erzya, Komi, or Yoruba) to more than 90 (e.g. for Spanish, Polish, Russian, or Greek). As the UD treebanks follow mostly similar annotation guidelines, comparisons of the parsing performance across languages are now possible, to an extent. 2 In an effort to explain these cross-lingual performance differences, researchers have proposed treebank size (Vania et al., 2019), linguistic variation (Nivre et al., 2007), test data sentence length or average gold dependency length (McDonald and Nivre, 2011), and domain differences between training and test data (Foster et al., 2011) as potential predictors.
* Equal contribution. Work performed at GMU.
1 Code and data are available here: https://github.com/miriamwanner/reu-nlp-project
2 Different treebank creation protocols followed across languages (whose effects are hard to isolate or measure) can be a significant source of variation. Nevertheless, some of the observed variation can be possibly explained by other factors. We direct the reader to footnote 2 of (Søgaard, 2020).
Figure 1: Only labeled reductions produce different graphs for these fundamentally different sentences. Under unlabeled leakage, the two trees do leak. When taking labels into account the two trees belong to different isomorphisms and are not considered "leaky".
Recently, Søgaard (2020) proposed that the proportion of isomorphic graph structures between the training and testing data (leakage) is a stronger predictor of the parsers' performance than any of the previously listed attributes other than training treebank size. Søgaard (2020) concludes that "some languages seem easier to parse because their treebanks leak." This finding is potentially crucial for current parser evaluation on the existing treebanks, as well as for future treebank construction. It implies, for instance, that parsers are perhaps not as good as they seem, because they are tested on "leaky" test data. Perhaps one should also consider designing treebanks that do not leak between train and test, as such a test set would not have a bias toward more common phenomena. In this work, we examine this finding more closely. We extend Søgaard's definition to include labeled leakage, and study it over multiple parsers in both monolingual and cross-lingual settings.
We show that the finding does not hold up when tested against more modern parsers and more languages. We do identify, though, that leakage indeed predicts parser performance in zero-shot cross-lingual settings, and we dive deeper into this phenomenon with an extensive study focusing on Faroese and other Germanic languages. Last, we propose a modification of the leakage measure that both predicts and correlates with parser performance in such settings. Leakage and How to Measure it In this section we first define leakage based on graph isomorphisms and reproduce Søgaard's experiments. We then show that parsers make local decisions that allow them to generalize to unseen graphs, and explore additional measures of leakage, studying whether they help explain parser performance. Last, we argue that sub-trees are more meaningful units than label-free, tree-level representations. Leakage Definition Leakage can be broadly defined as the portion of test trees that have isomorphic counterparts in the train set. While dependency trees are labeled, directed graphs with labels both on the nodes and on the edges, Søgaard (2020) performed a reduction by removing labels from both nodes and edges. Given these reduced graphs, Søgaard (2020) finds the different isomorphisms that are present in the training and the test set, using the VF2 algorithm (Cordella et al., 2001). We note that the isomorphism may or may not rely on node or edge labels. In the experiments below, we perform an ablation between completely unlabeled directed graphs, node-labeled (but not edge-labeled) directed graphs, and fully labeled graphs (both node and edge labels) when computing isomorphisms. Reproducing (Søgaard, 2020) Examples of the reductions needed for computing leakage for two sentences are shown in Figure 1. Now, assume that the first sentence is in the training set and the second is part of the test set. Measuring leakage without labels implies that the first dependency tree is somehow informative for producing the tree for the second sentence, which we believe is counterintuitive. Hence, our first hypothesis is that a more informed leakage calculation is going to explain more of the performance variance. We reproduce the experiments of Søgaard (2020) comparing the three different reductions (denoted "none" for unlabeled graphs, and "edges" and "nodes+edges" for the respective labeled graphs). The experiment consists of correlating the factors φ assumed to influence syntactic dependency parser performance with the performance of the parser under study. We train a simple linear regression model 3 with treebank size and φ as input and parser performance as output. φ will correspond to our measure of treebank leakage. Mathematically, we have α·t_s + β·φ + γ, with t_s the treebank size and α, β, γ learned parameters. Following Søgaard, we will focus on explained variance and mean absolute error (MAE) from five-fold cross-validation to avoid overfitting. Unlike Søgaard, we will additionally report Spearman's ρ correlation coefficients 4 between factor and performance, which will reveal whether indeed leakage leads to better parser performance. 5 The results on the same data as Søgaard (2020) (using the best reported parser performance from the CoNLL 2018 shared task) are presented in the top three rows of Table 1. We find that unlabeled graph leakage produces positive explained variance, in line with previous work.
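As an illustration of how whole-tree leakage and the accompanying evaluation could be computed, the sketch below uses networkx's VF2-based isomorphism test over directed graphs and a scikit-learn regression. The token representation, function names, and the mapping of the keep_node_labels/keep_edge_labels flags onto the "none"/"edges"/"nodes+edges" settings are our own illustrative assumptions, not the authors' released code.

import networkx as nx
from networkx.algorithms.isomorphism import categorical_node_match, categorical_edge_match

NODE_MATCH = categorical_node_match("upos", "")
EDGE_MATCH = categorical_edge_match("deprel", "")

def to_graph(tokens, keep_node_labels=False, keep_edge_labels=False):
    """tokens: iterable of (tok_id, head_id, deprel, upos) read from CoNLL-U; head_id 0 is the root."""
    g = nx.DiGraph()
    for tok_id, head_id, deprel, upos in tokens:
        g.add_node(tok_id, upos=upos if keep_node_labels else "")
        g.add_edge(head_id, tok_id, deprel=deprel if keep_edge_labels else "")
    return g

def tree_leakage(train_trees, test_trees, **flags):
    """Fraction of test trees that have an isomorphic counterpart in the training set."""
    train_graphs = [to_graph(t, **flags) for t in train_trees]
    def leaks(tree):
        g = to_graph(tree, **flags)
        # Grouping training graphs by simple invariants (size, degree sequence) first
        # would avoid most of these pairwise checks; kept naive here for clarity.
        return any(nx.is_isomorphic(g, h, node_match=NODE_MATCH, edge_match=EDGE_MATCH)
                   for h in train_graphs)
    return sum(leaks(t) for t in test_trees) / len(test_trees)

# Relating a factor (e.g. leakage) plus treebank size to parser performance,
# reporting cross-validated explained variance, MAE, and Spearman's rho.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def evaluate_factor(treebank_sizes, factor_values, parser_scores):
    X = np.column_stack([treebank_sizes, factor_values])
    y = np.asarray(parser_scores)
    ev = cross_val_score(LinearRegression(), X, y, cv=5, scoring="explained_variance").mean()
    mae = -cross_val_score(LinearRegression(), X, y, cv=5, scoring="neg_mean_absolute_error").mean()
    rho, _ = spearmanr(factor_values, parser_scores)
    return ev, mae, rho

In practice one would group all trees into isomorphism equivalence classes once (as the original work does) rather than re-running pairwise isomorphism checks for every test tree, but the quantity being computed is the same.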
However, we have to reject our hypothesis, as a more informed leakage measurement fails to meaningfully explain the output variance, producing negative scores. In fact, the more information we use when computing the graph isomorphisms, the less the model can explain output variance! 6 To further solidify this finding, we repeat the above experiment, this time using UDify, the state-of-the-art multilingual parser of Kondratyuk and Straka (2019). 7 The result is shown in the bottom three rows of Table 1 (left Table), and they present more negative evidence for our hypothesis: there is minimal explained variance in the unlabeled version, and still negative explained variance in the labeled leakage versions. Hence, we have to, for now, reject our hypothesis: using labeled graph isomorphisms to compute leakage does not explain more downstream parser performance variation, at least when using tree-level leakage measures; we revisit this hypothesis for sub-tree leakage below. Concurrently, we need to highlight the fact that for all cases we focused on in this experiment, there was a negative (inverse) correlation between leakage and parser performance. While Søgaard (2020) was correct (for the languages/parsers they studied) to state that there is a correlation between leakage and parser performance, we believe they reached an incorrect conclusion. The metric they used (explained variance) does not reveal the direction of the correlation, just that there is a correlation. Because of this they came to the wrong conclusion that there was a positive correlation between leakage and parser performance. Our results (Table 1, left table) instead imply that as leakage increases, parser performance worsens! Clearly, something is wrong and we need to re-examine Søgaard's reasoning.
3 Exactly as Søgaard (2020) does, just on different data/settings.
4 We do not expect the correlation, if any, to be linear, hence we prefer Spearman's measure to Pearson's.
5 Note that the explained variance is basically the correlation squared. As such, it cannot reveal whether the correlation is positive or negative. Negative explained variance means that the model is a poor fit for the data (worse than just predicting the average).
6 Søgaard (2020) gives this possible explanation: "The result is perhaps not too surprising, since graph isomorphisms correlate with syntactic constructions, which in turn correlate with the occurrence of linguistic markers and tail linguistic phenomena."
7 The model is trained jointly on all UD treebanks (that have a training set), and hence in this experiment we compute leakage multilingually (i.e. we compute leakage between the complete training set and the test set of each treebank).
Sub-Trees are More Meaningful Units We turn our attention to the parsers whose performance we are trying to explain. The three parsers that Søgaard uses and UDify are graph-based ones. This means that they do not necessarily score or produce whole trees. Graph-based parsers score pairs of words, and from these scores a minimum spanning tree is selected to produce the final dependency parse. As such, we argue that whole trees as a measure of leakage are not appropriate for graph-based parsers. To drive this point forward, we perform synthetic experiments removing adjectival modifiers from nominal subjects or objects. In particular, we created training data that did not contain adjectival modifiers on subject nouns (and, separately, on object nouns).
We then tested the models on gold unmodified test data which contained such modifiers. By removing adjectival modifiers only from the subjects (similarly for objects) in the training data, we ensure two things: that test instances with adjectival modifiers on subjects are not leaky; as such, if whole-tree leakage is a proper indication for parser performance, then the parser should perform poorly in producing such constructions. Table 2 shows the results of our experiment over the German HDT treebank. 8 We found that the parsers trained on our counterfactual data, which have zero leakage for these test instances, still produce the local constructions that they have never observed during training. The parsers trained without training subject modifiers produced about half of the expected subject modifiers (similarly for object modifier experiments). Nevertheless, they were still able to generalize based on other similar constructions seen in training, correctly parsing a nonzero amount of unseen-in-training constructions. This observation is not unexpected. In the experiment above, even removing all adjectival modifiers from nouns that are subjects (hence a subtree -and consequently a whole tree containing it-has never been observed), the parser still observes adjectivenoun modifying pairs elsewhere in the sentences and is able to generalize, producing a tree that has never been observed at training. The fact that the parsers make local output decisions, along with the proven corollary that they can easily produce unseen trees, guides us to search for a leakage measure focusing on sub-trees. Sub-Tree Based Leakage We define a leakage measure where a dependency tree is first decomposed into a set of subtrees, and then each subtree reduces into the graphs defined above to compute isomorphisms. These subtrees are created for each node (word), connecting it to its parent and to all its children. See example in Figure 2. We repeat our experiments, this time using our proposed leakage measure, and present the results in the right-hand side of Table 1. As before, for Søgaard's and for the UDify combinations of models/languages the explained variance is negative. However, now more information (edge/node labels) leads to higher Spearman's ρ coefficients, implying that indeed the more test subtrees we have observed in training, the better the parser performance. In the unlabeled setting, every sub-tree created by the parser was found in the training data, which was true of most gold files as well. We interpret this observation to mean that unlabeled sub-trees are not meaningful units, a point further reinforced by the negative explained variance and correlations. At the same time, our measure still fails to explain any of the observed performance variance. Thus, we have to reach a conclusion opposite of Søgaard (2020), that in a monolingual setting the performance of modern graph-based parsers is not particularly explained by train-test leakage, however we compute that leakage. Table 3: Leakage explains zero-shot parser performance. Sub-tree leakage also correlates with it. Leakage Explains 0-Shot Performance Modern dependency parsing models trained on many languages perform well on languages unseen (zero-shot setting) during training (Muller et al., 2021;Glavaš and Vulić, 2021, inter alia). We focus again on UDify, since it performs well in zero-shot settings. 
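The per-node decomposition described above could be sketched as follows; it reuses the to_graph helper, the (tok_id, head_id, deprel, upos) token representation, and the NODE_MATCH/EDGE_MATCH matchers from the earlier sketch. Aggregating the per-sub-tree hits into a single score as the fraction of test sub-trees observed in training is our reading of the text rather than the authors' exact implementation.

def node_subtrees(tokens):
    """For every token, return the token tuples covering its arc to the parent
    plus the arcs to all of its children (one sub-tree per node, as in Figure 2)."""
    by_head = {}
    for tok in tokens:
        by_head.setdefault(tok[1], []).append(tok)
    subtrees = []
    for tok in tokens:
        tok_id = tok[0]
        subtrees.append([tok] + by_head.get(tok_id, []))
    return subtrees

def subtree_leakage(train_trees, test_trees, **flags):
    """Fraction of test sub-trees with an isomorphic counterpart among training sub-trees."""
    train_graphs = [to_graph(s, **flags) for t in train_trees for s in node_subtrees(t)]
    test_graphs = [to_graph(s, **flags) for t in test_trees for s in node_subtrees(t)]
    hits = sum(any(nx.is_isomorphic(g, h, node_match=NODE_MATCH, edge_match=EDGE_MATCH)
                   for h in train_graphs)
               for g in test_graphs)
    return hits / len(test_graphs)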
This is generally attributed to two factors: the presence of related languages in the training set, and the multilingual capabilities of the underlying representation model (here, a multilingual BERT (Devlin et al., 2019) model). Table 3 reports results with Søgaard's (wholetree) and our sub-tree leakage measures under all three settings, focusing only on 35 zero-shot test languages. 9 Now leakage 10 can indeed explain downstream parser performance. Our proposed measure explains as much variance as the original whole-tree measure and also correlates with performance. Analysis on Faroese Looking deeper into the zero-shot setting, we perform an experiment on a simplified bilingual zero-shot setting. We train parsers in five Germanic languages (German, Swedish, Danish, Norwegian, Icelandic) and test on Faroese in a zero-shot fashion. For each language, we train a model on: (a) a 'leaky' sample of the portion of training treebank, so that all training data overlapped with (some) test data, (b) a 'non-leaky' sample of trees such that there was no train-test overlap, (c) a control random sample from the training treebank, and (d) a 'diverse' training sample including a single tree from each isomorphism equivalence class. Figure 3: Zero-shot results on Faroese. Training on non-leaky and diverse data is best. The leaky portion of the test set is far easier than the rest. All of the above models are size-controlled for each language, so that the training data sizes are exactly the same. Leakage here is measured with unlabeled full-tree leakage, for simplicity. We similarly split the test set, for each language, into 'leaky' and 'non-leaky' subsets (also reporting numbers for the whole test set). For example, take German training and Faroese test sets. First all German instances leaking into Faroese are added to the "German leaky" train set and the corresponding leaked Faroese sentences are put into the "Faroese-leaky" test set. Then the remaining sentences from the German training set are added to the "German nonleaky" train set and the remaining sentences from the Faroese test set are the "Faroese nonleaky" test set. Last, we take a random sample of same size across all settings, so that training data size is not a confounding factor for our analysis. See Figure 3 and extensive results in Appendix C. For all languages, models trained on leaky data perform worse than models trained on the same amount of non-leaky or random data. For most transfer languages, in fact, training solely on nonleaky data performs better than training on other subsets! In addition, the leaky part of the testing data is clearly easier to parse in general, while the non-leaky part is more challenging. The models trained on perfectly 'diverse' treebanks generally perform just as good as those trained on non-leaky or randomly sampled data and often better on the non-leaky test set, which means they generalize better. This indicates a way to reach better cross-lingual performance without the need for large training data, as long as the training set is diverse enough. The large performance difference between models trained on leaky and non-leaky trees reveals that something is different about the parts of the treebanks that leak. We measured the diversity of the leaky, non-leaky, and randomly selected trees, defined as the number of unique trees divided by the total number of trees. We found that leaky treebanks were far less diverse and therefore contain fewer unique structures than non-leaky or randomly sampled counterparts. 
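A simple way to build the 'diverse' training sample and the diversity statistic described here is sketched below. It reuses to_graph, NODE_MATCH, and EDGE_MATCH from the earlier sketches, and the quadratic scan over class representatives is only an illustration of the idea (one tree per isomorphism equivalence class), not the authors' implementation.

import random

def isomorphism_classes(trees, **flags):
    """Group trees into equivalence classes under (reduced) graph isomorphism."""
    classes = []  # list of (representative_graph, member_trees)
    for tree in trees:
        g = to_graph(tree, **flags)
        for rep, members in classes:
            if nx.is_isomorphic(g, rep, node_match=NODE_MATCH, edge_match=EDGE_MATCH):
                members.append(tree)
                break
        else:
            classes.append((g, [tree]))
    return classes

def diverse_sample(train_trees, size, **flags):
    """One randomly chosen tree per isomorphism class, trimmed to the target sample size."""
    reps = [random.choice(members) for _, members in isomorphism_classes(train_trees, **flags)]
    random.shuffle(reps)
    return reps[:size]

def diversity(trees, **flags):
    """Number of unique (pairwise non-isomorphic) trees divided by the total number of trees."""
    return len(isomorphism_classes(trees, **flags)) / len(trees)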
Across all treebanks, leaky instances are also generally shorter (8.4 vs 21.6 avg length), shallower (2.2 vs 4.8 average tree depth), and with shorter avg dependency length (2 vs 3.3). We argue that the reasoning should be reversed: short "easy" examples are more likely to leak; it is not leakage that makes them easy! Appendix A: Graph Reduction Examples. Examples of the graph reductions are shown in Table 4.
2022-03-25T01:15:39.175Z
2022-03-24T00:00:00.000
{ "year": 2022, "sha1": "3ccff5cf6619ed96474b81337b70629f3ea62f28", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "7de699066da6634e080a7d16f76b61c90f72344a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
164800774
pes2o/s2orc
v3-fos-license
Self-monitoring of intraocular pressure using Icare HOME tonometry in clinical practice Purpose: To determine the value of self-monitoring of diurnal intraocular pressure (IOP) by Icare Home rebound tonometer in patients with glaucoma and ocular hypertension. Methods: Patients with open-angle glaucoma or ocular hypertension, controlled IOP at office visits, and at least 3 years of follow-up in the glaucoma clinic were included. Progression of glaucoma was based on medical records and defined by documented structural and/or visual field change. Patients were trained to correctly perform self-tonometry and instructed to measure diurnal IOP in a home setting for 3 days. IOP characteristics (mean, peak IOP, fluctuation of IOP as range, and SD of IOP) were documented and compared between the progressive and stable eyes. Results: Ninety-four patients (50 females) with a mean (SD) age of 57.1 (14.7) years were included. Among the 94 eyes from 94 subjects, 72 (76.6%) eyes had primary open-angle glaucoma, ten (10.6%) had pigmentary glaucoma, four (4.3%) had exfoliative glaucoma, and eight (8.5%) eyes had ocular hypertension. Thirty-six eyes showed progression and 58 eyes were stable. Patients with progression were older than those with stable disease (mean (SD) 65.8 (8.4) years vs 51.7 (15.3) years, P<0.001). The progression group had higher average IOP (mean (SD) 15.8 (4.0) mmHg vs 13.3 (3.7) mmHg, P=0.002), peak IOP (mean (SD) 21.8 (5.8) mmHg vs 18.6 (4.8) mmHg, P=0.01), and greater IOP fluctuation range (mean (SD) 11.6 (4.8) vs 9.1 (3.5) mmHg, P=0.011) compared to non-progression group. Conclusion: Self-monitoring of IOP using Icare Home tonometry provides more complete data on variability of IOP to assist in the management of glaucoma. Introduction Intraocular pressure (IOP) is an important factor in the management of glaucoma. Lowering of IOP delays the onset and progression of glaucoma and remains the main treatment to maintain visual function. IOP is usually monitored using Goldmann applanation tonometer (GAT). In the clinical setting, despite the importance of IOP assessment, the efficacy of IOPlowering treatment is usually based on a few IOP measurements annually. A substantial proportion of patients show progression in visual field defects at apparently controlled IOP. Studies have shown that IOP fluctuates during the day and over longer periods. In one of the first studies, Drance found that the range of diurnal variation in patients with glaucoma was two to three times that of normal individuals, and emphasized that a single office IOP reading may not be representative of the IOP most of the time. 1 Later on, several researchers reported that highest IOPs were measured outside office hours and that pressure peaks and higher diurnal fluctuation in patients with controlled office IOP were associated with progression of visual field. [2][3][4][5][6] In the literature, there is no consensus which IOP parameter (mean IOP, peak IOP, fluctuation of IOP) is the most important risk factor for glaucoma progression. 7,8 IOP variation over time may be divided into diurnal fluctuation measured on a daily basis, short-term fluctuation occurring over days, and long-term fluctuation occurring over months to years. Also, the definition of fluctuation varies across studies. One definition refers to the difference between highest and lowest IOP value over 24 hours or less, or over a certain period. 7,9 Many studies define IOP fluctuation as the SD in IOP over time. 
[10][11][12] Important randomized clinical trials have shown different results about long-term IOP fluctuation as a risk factor for glaucoma progression. [13][14][15] These studies have different study populations, designs, and definitions of IOP fluctuation. Twenty-four hour monitoring of IOP may provide the most accurate measurements. 16 However, it is hospital-based, inconvenient, costly, and it is questionable whether pattern of IOP remains similar over the following days, or over longer periods. 17,18 Collecting more IOP readings at home has led to development of self-tonometers and continuous pressure measurement devices. At present, continuous monitoring of IOP is not clinically useful to assess treatment response in glaucoma patients. 19 Recently, a novel model of self-tonometer, Icare Home rebound tonometer (Icare Finland Oy), has been commercially available for self-use. Compared to its predecessor, Icare One, Icare Home tonometer has EyeSmart eye recognition and EasyPos alignment feature, both of which improve comfort and ease of handling. 20 Icare Home tonometer has shown good agreement with GAT and good-to-excellent repeatability. 21,22 Topical anesthesia is not required and no adverse ocular surface changes have been noted. 23 The purpose of this study was to evaluate diurnal IOP using self-tonometer in patients with open-angle glaucoma and ocular hypertension with at least 3 years follow-up, and to further assess whether there are any differences in IOP parameters (mean IOP, peak IOP, fluctuation of IOP as range, and as SD of IOP) between progressing and stable eyes. Methods Participants Participants were patients with open-angle glaucoma or ocular hypertension attending the Glaucoma Clinic at the Department of Ophthalmology of the University Medical Centre Ljubljana, Slovenia. Patients were recruited from November 2016 until the end of June 2017. Inclusion criteria were: subjects with open-angle glaucoma or ocular hypertension at baseline, ≥3 years of follow-up and controlled IOP at regular examinations. Controlled IOP is eyespecific target IOP and was based on staging of glaucoma at baseline examination and determined from both a percentage reduction (at least 20% from baseline for ocular hypertension and early glaucoma eyes, 30% for eyes with moderate and advanced disease) and an absolute IOP threshold. 24 These examinations included slit-lamp exam, standard automated perimetry, dilated ophthalmoscopy every 6 months, and annual photography of optic nerve head. Exclusion criteria were: visual acuity ≤0.1, corneal anomalies (keratopathies, keratoconus, patients with severe dry eye disease, etc), less than 2 months after refractive surgery and those with tremor, arthritis or other disorders affecting self-handling of tonometer. Progression was defined by documented change of the optic nerve head or retinal nerve fiber layer (eg, thinning of rim, disc hemorrhage, appearance or widening of retinal nerve fiber layer defect) or confirmed significant deterioration of the visual field using EyeSuite™ progression analysis function of the Octopus perimeter. For trend analysis, the last six reliable visual fields were selected. The results of the home tonometry readings were not known to the glaucoma specialists at the time they assessed glaucoma progression. The study was approved by the National Medical Ethics Committee and adhered to the tenets of the Declaration of Helsinki. All participants signed informed consent after a complete explanation of the study. 
Procedure A certified health care professional explained the tonometer, and instructed, trained, and supervised all the participants. The subject was deemed able to obtain reliable self-measurements if the following criteria were satisfied: 1. The first of the three Icare Home readings taken by the trainer and by the subject differed by 5 mmHg or less. 2. The range of the three readings (max-min) taken by the patient was 7 mmHg or less. 3. The positioning of the tonometer was correct during self-use, as determined by the trainer. 4. They were able to take three reliable self-measurements of each eye in 30 minutes or less from the start of training. Subjects had to perform self-measurements several times under observation without any intervention by a health care professional. Then they received the Icare Home kit with the instruction to measure IOP at home from 8 am to 8 pm, every 3 hours for 3 days. The tonometer stores information for each complete measurement including the final IOP, date and time of the measurement, identification of the eye (right or left), and the quality of each measurement. The collected data for each subject were copied to the computer via USB cable and opened in the Icare LINK software. Statistical analysis Data collected for all participants included age, gender, diagnosis, progression of disease (previously defined), number of eye drops, best corrected visual acuity, refractive error, central corneal thickness, mean defect (Octopus perimeter), average IOP, peak IOP, and IOP fluctuation. Average IOP was the mean value of all self-measurements and peak IOP was the highest value of IOP taken during 3 days of self-use of the device. IOP fluctuation was defined as the difference between the highest and lowest IOP values and as the SD of diurnal IOPs measured over 3 days. IOP data from a randomly selected eye per subject were included. The randomization was done using a random number generator, with even numbers indicating the right eye and odd numbers indicating the left eye for statistical analysis. If only one eye fulfilled the inclusion criteria, the data from this eye were analyzed. Values for continuous variables are presented as mean (±SD). Results Baseline characteristics of the 94 included patients are presented in Table 1. Thirty-six eyes showed progression and 58 eyes were stable. Thirty-one out of 72 (43%) eyes progressed in the primary open-angle glaucoma group, two out of ten eyes in the pigmentary glaucoma group, three out of four eyes in the exfoliative glaucoma group, and none of the eight eyes in the ocular hypertension group. Patients with progression were significantly older, with a greater proportion of female patients. Eyes that showed progression had significantly higher average IOP, peak IOP, and higher IOP fluctuation expressed as the difference between the highest and lowest IOP value, but not as SD of IOP (Table 2). Discussion The current approach is to measure the IOP at routine clinic visits. A single IOP measurement during office visits does not characterize true IOP, and many patients progress in this setting. In these patients, 24-hour IOP monitoring or, if not feasible, a diurnal IOP curve can impact glaucoma management. In a busy clinic, diurnal phasing is time-consuming and inconvenient for the doctor and the patient. In our study we evaluated the value of self-tonometry in patients with open-angle glaucoma and ocular hypertension with controlled IOP at office visits for at least 3 years of follow-up.
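As a small worked example of the summary statistics defined in the statistical analysis above, the sketch below computes the per-eye mean, peak, range, and SD of IOP from a list of home readings; the data layout and function name are illustrative and are not part of the Icare LINK export.

import statistics

def iop_summary(readings):
    """readings: IOP values (mmHg) self-measured for one eye over the 3-day period."""
    return {
        "average_iop": statistics.mean(readings),
        "peak_iop": max(readings),
        "fluctuation_range": max(readings) - min(readings),  # highest minus lowest value
        "fluctuation_sd": statistics.stdev(readings),         # SD of the diurnal readings
    }

# Example: five time points per day (08:00 to 20:00, every 3 hours) over 3 days = 15 readings.
example_readings = [14, 16, 15, 13, 14, 17, 18, 15, 14, 13, 16, 19, 15, 14, 13]
print(iop_summary(example_readings))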
We found that patients with progression were older and had higher diurnal average IOP, peak IOP, and greater IOP fluctuation (expressed as range) than patients with stable disease. In an early study using a self-tonometer to monitor IOP over several days in patients with glaucoma and controlled office IOP, Wilensky et al 2 found that 29% of patients with visual field progression had IOP peaks, compared with 5% of patients with stable visual fields. Later, Asrani et al reported that large diurnal IOP fluctuation detected by self-measurements over 5 days was an independent risk factor for glaucoma progression in patients with controlled office IOP. 5 In studies, short-term IOP characteristics are usually evaluated using 24-hour measurements in a controlled hospital or laboratory environment. Grippo et al 25 evaluated the 24-hour pattern of IOP in untreated ocular hypertensive (OH) patients in the supine position during the night and in the sitting and supine positions during the day, and showed that OH patients who converted to glaucoma had similar diurnal-to-nocturnal changes in IOP as glaucoma patients, both of which were significantly different from controls. Fogagnolo et al 26 evaluated short- and long-term IOP in 52 patients with primary open-angle glaucoma, controlled by topical prostaglandin analogs for at least 1 year. At baseline, a 24-hour IOP curve was recorded in hospitalized patients and used to calculate short-term IOP parameters. At follow-up visits, office-hours IOP curves at three office-hours time points were obtained every 6 months for 2 years, and from these 12 IOP measurements, long-term IOP parameters were calculated. Patients with progression in the visual field during the 2-year follow-up from baseline showed an increase in the mean IOP, fluctuation of IOP and peak IOP compared to the patients without progression. In the regression analysis, the peak IOP at baseline from the 24-hour phasing was associated with glaucoma progression. Most studies evaluated IOP parameters over the long term and their impact on progression. Different IOP parameters were associated with visual field progression. Whereas some studies found that eyes with higher IOP fluctuation (SD in IOP) demonstrated greater visual field progression, 13,27,28 others reported that mean IOP, but not long-term IOP fluctuation, was associated with glaucoma progression. 7,26,29 From a retrospective cohort of 587 eyes of 587 patients, de Moraes et al 10 reported that peak IOP was a better predictor of visual field progression than mean IOP or fluctuation. To address these inconsistencies regarding IOP characteristics as potential factors for glaucoma progression, a reliable method for continuous measurement of IOP is of paramount importance. Such a device should be accurate, reliable, show good agreement with GAT, be safe, user-friendly, and comfortable. Currently, such an ideal device for continuous IOP monitoring is not available. The Sensimed Triggerfish® contact lens sensor is a device designed to provide continuous 24-hour recordings of ocular dimension changes. It does not measure IOP directly but rather curvature changes of the limbal cornea, which are related to IOP variation and are therefore considered representative of IOP changes. 30 Recently, Vitish-Sharma et al 31 demonstrated a weak correlation between the Sensimed Triggerfish contact lens sensor data output and IOP measurements taken using the Tonopen XL applanation tonometer. Self-tonometry in a home setting is a suitable way to collect IOP data.
Icare Home self-tonometer was found to be safe, reliable, reproducible, usable by the majority of patients, and demonstrated reasonable agreement with the reference standard GAT. Self-monitoring of IOP can provide more information about IOP characteristics and impact glaucoma management. Sood et al 32 reported that 24-hour IOP self-monitoring in patients with NTG with progression revealed higher IOP spikes than those identified during office hours. Following IOP phasing using the rebound selftonometer, a change in management occurred in 56% of patients. Chen et al 33 reported that diurnal IOP pattern taken by Icare One or Icare Home differed between consecutive days in 47% of patients with glaucoma, and IOP peaks outside office hours occurred in up to 16% of the study eyes. Therefore, measurements over several days provided valuable data in adjusting glaucoma treatment and can be used to complement the investigation of patients with glaucoma. When evaluating our results, there are a number of factors that need to be considered. First, patients with controlled IOP at office visits self-measured diurnal IOP over 3 days and progression was defined retrospectively from medical records. The IOP curves over 3 days may not be representative of the IOP pattern in the preceding years during which visual field and/or structural changes may have occurred. Some studies reported that both healthy subjects and glaucoma patients failed to show repeatable diurnal and circadian IOP pattern over a short period of time, 17,34,35 whereas others found no significant differences in diurnal IOP fluctuation on 2 consecutive days. 36,37 Second, blood pressure, which is associated with IOP, was not monitored. 38,39 Third, the patients' follow-up was ≥3 years, which is a short period to detect glaucoma progression. However, visual field progression in all subjects was defined by linear trend analysis using the last six reliable visual fields. It has been reported that linear trend analysis on a shorter sequence has improved the ability to detect progression compared to longer sequence, in particular when treatment effect may confound the outcome. 40 Obtaining IOP measurements at home may be more representative of true IOP than phasing in hospitalized patients. IOP readings were shown to be consistently lower during hospitalization than after discharge from hospital, which has been assigned to the absence of normal activities. 41 For the future, improvements of Icare Home tonometer to enable self-use in a supine position would help to detect nocturnal IOP elevation or patterns of IOP as potential risk factors in individuals for disease progression. Conclusion In our study, self-monitoring of IOP in patients with glaucoma progression despite apparently adequate IOP control at office visits, detected higher average mean IOP, peak IOP, and range of IOP fluctuation. This indicates that selfmonitoring contributes additional information about IOP characteristics which can be useful in supporting treatment decision, as well as in IOP monitoring following treatment change.
2019-05-26T13:35:31.946Z
2019-05-10T00:00:00.000
{ "year": 2019, "sha1": "290d1682636f0d1db80a98ec4a5757ac62ab5c98", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=49775", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "290d1682636f0d1db80a98ec4a5757ac62ab5c98", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
56008156
pes2o/s2orc
v3-fos-license
Sedimentation Processes and Useful Life of Mosul Dam Reservoir, Iraq Sedimentation is one of the most important problems that directly affect the performance of reservoirs, because it reduces storage capacity and can cause problems affecting operation. Thus, periodic assessment of the storage capacity and determination of sediment deposition patterns are important issues for the operation and management of reservoirs. In this study, bathymetric survey results and an analytical approach were used to assess the characteristics of sedimentation and estimate the useful life of Mosul Reservoir. It is located on the Tigris River in the north of Iraq. The water surface area of its reservoir is 380 km² with a designed storage capacity of 11.11 km³ at the maximum operating level (330 m a.s.l). The dam started operating in 1986. No detailed study had yet been carried out to assess its reservoir. The present study indicated that the annual reduction rates in the dead and live storage capacities of the reservoir are 0.786% and 0.276% respectively. The observed (bathymetric survey) results and an algebraic formula both indicate that the useful life of Mosul dam reservoir is about 125 years. Furthermore, the stage-storage capacity curves for future periods (prediction curves) were established using the bathymetric survey data. Introduction The decrease and scarcity of water resources in the Middle East due to increased demand have negative effects on economic development and prosperity and thus affect political stability in the region [1][2][3][4][5]. Until 1970, Iraq was an exception among the neighboring countries that suffer from water scarcity, due to the presence of the Tigris and Euphrates rivers, despite the fact that in the mid-seventies the Syrians cut the Euphrates water to impound some of their reservoirs [4]. The construction of irrigation and flood control systems in Iraq was started in the first half of the twentieth century by the Board of Development created by the Kingdom of Iraq [4]. Primarily, it was to protect Baghdad, the capital, and other major cities from flooding. The period from 1970 to 1990 was the best period of development of Iraq's water systems. The process stopped in 1990 due to the first Gulf War and UN sanctions. In 1977, the Turkish Government started to utilize the water of the Tigris and Euphrates Rivers through the South-eastern Anatolia Project (GAP). The project includes 22 multipurpose dams and 19 hydraulic power plants, which are to irrigate 17,103 km² of land, with a total storage capacity of 100 km³, which is three times more than the overall capacity of Iraqi and Syrian reservoirs [4,5]. Eight of these dams are to be constructed on the River Tigris; only three were built (two in 1997 and one in 1998). The irrigation projects in GAP will consume about 22.5 km³ of water per year after completion [3][4][5]. The total irrigated area in Iraq in the Euphrates-Tigris basin was estimated at around 40,000 km² before the Iraq-Iran war and the second Gulf War, and decreased to 27,800 km² after the second Gulf War [3,4]. The reduction of flow in the Tigris and Euphrates Rivers in Iraq is considered to be a national crisis and will have severe negative consequences on health and on environmental, industrial and economic development [4,5]. In view of the above, the Iraqi Government should work to adopt effective procedures to overcome the water shortages. Among these procedures is the assessment of the sedimentation rate in reservoirs to determine their actual storage capacities [4]. Mosul
Reservoir is one of the strategic projects and its storage capacity needs to be evaluated. The reservoir went into operation in 1986 and no detailed studies had yet been carried out to determine the characteristics of sedimentation and its useful life. In the present study, the two topographic maps of Mosul reservoir dated 1983 and 2011 in "Triangular Irregular Network" (TIN) format were used for the assessment of the sedimentation rate and for determining the reduction in the storage capacity of the live and dead storages as well as the whole Mosul reservoir during its operational period. The two surveys were used to determine the future shift in the stage-storage capacity curve of the reservoir. Furthermore, the observed results and the algebraic equations proposed by Gill [6] were used to determine the life span of Mosul reservoir. Study Area: Mosul Reservoir Mosul dam is one of the most important hydraulic structures in Iraq; it is built on the Tigris River in the north of Iraq. The dam is an earth fill dam, 113 m high and 3650 m long with its spillway, located 60 km northwest of Mosul city at latitude 36˚37'44"N and longitude 42˚49'23"E (Figure 1). The dam is multipurpose and was put into operation on July 7th, 1986 for irrigation, flood control and hydropower generation [7]. Mosul dam has a designed dead storage of 2.95 km³ and live storage of 8.16 km³, i.e. a total storage capacity of 11.11 km³. The maximum, full and dead storage levels of the reservoir are 335, 330 and 300 m a.s.l respectively. The shape of the reservoir is almost elongated and it expands close to the dam site. Its length is about 45 km, with a width ranging from 2 to 14 km at the full level and a water-spread area of 380 km² [7]. The main source of the water and sediment entering the reservoir is the River Tigris; Figure 2 shows the average monthly water inflow and outflow of the reservoir during 25 years of its operation. Ten seasonal valleys feed the reservoir, seven from the eastern side and three from the western side. These valleys contribute water and sediment during rain events, but their contribution is less than 2% [8][9][10][11]. The catchment area of the River Tigris above Mosul dam is estimated at about 54,900 km², shared by Turkey, Syria and Iraq [12,13], and the catchment area of the valleys surrounding the reservoir is about 1375 km² [14,10]. Data Availability The hydrographic survey is a direct measurement and the most accurate technique to determine the total volume of the sediment deposited in reservoirs, the sedimentation pattern, and the bottom profile of reservoirs and lakes. Recent advances in the Global Positioning System (GPS), echo sounding survey techniques and computer programs have caused a significant reduction in the effort, time and cost of collecting and analyzing survey data [15][16][17]. The 1986 and 2011 topographic maps in TIN format for the Mosul reservoir area were used to evaluate the sedimentation rate. These maps were provided by Issa et al. [18] (Figure 3). The TIN maps were used to compute the storage capacity and water-spread area for the live storage and dead storage zones using Arc/GIS software (Table 1). The reduction in storage capacity of the reservoir between the two surveys at different times represents the total volume of sediment accumulated in it [17]. Therefore, the above results were used to compute the volume of sediment deposited and the reduction in the water-spread area for the reservoir during 25 years of operation (Table 1).
Results and Discussion Reservoirs are built to achieve certain purposes, e.g. irrigation, hydropower generation, flood control, navigation, and urban water supply. Reservoir sedimentation and the consequent loss of storage capacity directly affect the future performance of reservoirs. Consequently, it is of prime importance to monitor the rate of sedimentation and the changes in the capacity of the reservoir. To obtain a realistic picture of the storage volume of water within Mosul reservoir, it is important to know the following: Useful Life of Reservoir The useful life or design life is the period during which the deposited sediment does not affect the economic feasibility and sustainability of meeting water resources demand. In general, the useful life of a reservoir is the time period in which the reservoir loses 50% of its storage capacity or its dead storage is completely filled with sediment [6,19]. In the present study, the useful life of Mosul reservoir was computed using the algebraic equations proposed by Gill [6]. The equations represent the relationship between the initial storage capacity of the reservoir, the water and sediment inflow into the reservoir and the specific weight of the deposited sediment, as shown in the following equations. For coarse grained sediment, where C_o is the initial storage capacity of the reservoir; T_L is the useful life when the initial capacity is reduced to half; I is the annual water inflow; G is the weight of the annual sediment inflow; and r' is the specific weight of the deposited sediment, which was computed using the Lane and Koelzer empirical formula presented in 1953 [15,20]. The results of the above approach are presented in Table 2. Furthermore, the bathymetric survey results (Table 1) were used for estimating the useful life based on the depleted dead storage and on a 50% loss of the initial storage capacity of the reservoir (Table 2). In such a case, the depositional conditions are assumed to be constant during the life of the dam. The observed results obtained from the bathymetric survey and the analytical approach were similar. Accordingly, it is possible to assume that the useful life of Mosul reservoir is approximately 125 years. Sedimentation Rate and Pattern According to the observed results (Table 1), the annual reduction rate in the storage capacity of the reservoir is 45.72 × 10⁶ m³·year⁻¹, which is divided into 23.2 × 10⁶ and 22.52 × 10⁶ m³·year⁻¹ for the dead and live zones respectively. This implies that the annual loss of storage capacity within the dead and live zones is 0.787% and 0.276% respectively. Furthermore, the annual loss in water-spread area of the reservoir at the dead storage elevation (300 m a.s.l) is 1.34 km² (Figure 4). Figure 4 shows that the maximum loss in water-spread area (water surface area) at the dead storage level is in the northern part of the reservoir, where the River Tigris enters the reservoir. This means that most of the sediment is deposited in that area. Such a pattern is typical of reservoirs [21]. The sedimentation in the reservoir has caused a shift in the stage-storage capacity curve. The bathymetric survey results were used to compute the sedimentation rate at different water levels of the reservoir, which was then used to predict the storage capacities at these levels for 50, 75, 100 and 125 years; all values are tabulated in Table 3. The results in the above table were used to construct the stage-storage displacement curve during operation of the Mosul reservoir (Figure 5).
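The survey-based estimate can be reproduced with the simple arithmetic below, using the design storages and the annual deposition rates quoted in the text; as in the paper, a constant deposition rate is assumed, and the Gill formula itself is not reproduced here.

# Useful life of Mosul reservoir estimated from the bathymetric-survey deposition rates.
DEAD_STORAGE = 2.95e9           # m^3, design dead storage
TOTAL_STORAGE = 11.11e9         # m^3, design total storage
DEAD_ZONE_RATE = 23.2e6         # m^3/year deposited in the dead storage zone
WHOLE_RESERVOIR_RATE = 45.72e6  # m^3/year deposited in the whole reservoir

years_to_fill_dead_storage = DEAD_STORAGE / DEAD_ZONE_RATE                # about 127 years
years_to_lose_half_capacity = 0.5 * TOTAL_STORAGE / WHOLE_RESERVOIR_RATE  # about 122 years

print(round(years_to_fill_dead_storage), round(years_to_lose_half_capacity))
# Both criteria fall close to the ~125-year useful life reported in the text.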
Summary and Conclusion
Reservoir sedimentation and the consequent loss of storage capacity directly affect water availability and project operation. In the present study, two topographic plans in TIN format from the 1986 and 2011 surveys were used for the assessment of reservoir sedimentation in the live and dead storage zones. The results showed that the annual reductions in the dead and live storage capacities were 0.787% and 0.276%, respectively. The water-spread area of the reservoir at the dead storage level reduces annually by 1.34 km² (0.79%). Furthermore, stage-storage capacity curves for future periods of 50, 75, 100 and 125 years were elaborated using the sedimentation rate at the corresponding elevations. The bathymetric results and the analytical formulas gave almost identical results (125 years) for the useful life of the reservoir.
Figure 2. Monthly mean inflow and outflow of the Mosul reservoir for 1986-2011.
Figure 4. Boundary of the water-spread area at dead storage elevation for the two surveys, calculated using the Arc/GIS program.
Figure 5. Shift of the stage-storage capacity curve during operation of the Mosul reservoir.
Table 1. Storage capacity (S.C) and water-spread area (W.S.A) of Mosul reservoir for the two surveys.
2018-12-05T16:31:50.595Z
2013-09-12T00:00:00.000
{ "year": 2013, "sha1": "07986bc67780297c22fa40a8c86ece8eae6d4d90", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=36741", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "07986bc67780297c22fa40a8c86ece8eae6d4d90", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Geology" ] }
16468543
pes2o/s2orc
v3-fos-license
Outcomes of dialysis catheters placed by the Y-TEC peritoneoscopic technique: a single-center surgical experience Background In the last few years, peritoneal dialysis (PD) catheter placement techniques and outcomes have become important because of the growing population of PD patients. Although there are a growing number of catheters placed by the minimally invasive Y-TEC peritoneoscopic technique, there are still limited data on outcomes for these catheters, especially those placed by a surgeon. We aimed to conduct a retrospective study of our experience with PD catheters placed by the Y-TEC peritoneoscopic technique in our institution. Methods We reviewed patients with peritoneoscopic PD catheter insertion over the last decade and described their complications and outcomes. In a secondary analysis, we compared the outcomes and complications of these catheters with those with open placement placed by the same surgeon. Results We had complete data on 62 patients with peritoneoscopic catheter placement during the study period. The mean age was 55 years, 48.4% were females and the most common cause of end-stage renal disease was diabetes mellitus (33%). Surgical complications were seen in only 6/62 (9.6%) and peritonitis in 16/62 (26%) of peritoneoscopic catheters. Most catheters were used after 2 months of placement, while 12.3% were used within 2 months. When compared with 93 patients with open placement of catheters as a secondary analysis, peritoneoscopic catheters were found to have a higher 2-year survival. Conclusion Our large series of peritoneoscopically placed catheters by a surgeon demonstrate low surgical complications and peritonitis rates as well as superior 2-year survival compared with open placement of catheters. Introduction Peritoneal dialysis (PD) has become a modality of choice for renal replacement therapy (RRT) for many end-stage renal disease (ESRD) patients requiring autonomy and having a busy lifestyle [1][2][3][4][5][6]. The PD catheter is the cornerstone of PD and its proper function and survival are crucial. In the last few years, PD catheter placement techniques and outcomes have been the subject of several publications in the medical literature. Meta-analyses and systematic reviews have suggested that straight catheters tend to have less migration than coiled catheters [7][8][9]. Another important issue described in the recent literature is the improved results with fewer malfunctions and longer catheter survival found when performing the procedure with a classic laparoscopic technique using the several ports approach under general anesthesia [10]. However, there are still limited data on the outcomes of catheters placed by the minimally invasive Y-TEC peritoneoscopic technique. In the present study, we review our experience with these catheters over the past decade in a large population of PD patients. We also compared outcomes for these catheters with those placed by the classic open technique. Data collection We performed a retrospective chart review of all ESRD patients who had a PD catheter inserted at Mount Sinai Hospital from 2004 to 2014. We only included the first instance of catheter placement for each patient. We performed a comprehensive chart review and abstracted demographics, comorbidities including diabetes mellitus and whether PD was the first RRT modality. We reviewed the operative notes and abstracted information on the surgical approach ( peritoneoscopic versus open) and the time for first use after insertion. 
We also collected information on complications (mechanical/infectious/other), overall catheter survival and total time of follow-up (including when the patient discontinued PD due to inability/transplant/preference). The institutional review board of our institution approved the study. Y-TEC peritoneoscopy technique Under local anesthesia with mild sedation, a 2-cm paraumbilical paramedian incision is made in the skin and subcutaneous tissue, exposing the anterior rectus fascia. A 0 Prolene purse-ring stitch is placed in the fascia. Through the center of this pursering stitch, a Varis needle is introduced and 3 L of nitrous oxide is insufflated into the peritoneal cavity. This gas is used instead of CO 2 because it is painless in the peritoneal cavity and allows for the procedure to be performed under local anesthesia. Once pneumoperitoneum is obtained, the Varis needle is removed and a Y-TEC trocar is introduced through the center of the purse ring. This trocar has a metallic cannula/peeling sheath that accepts the Y-TEC scope. Peritoneoscopy is done and the tip of the cannula/peeling sheath is directed toward the pelvis. The metallic cannula is removed, leaving the peeling sheath in place. The PD catheter, which is mounted on a rigid metallic rod, is introduced through the peeling sheath and as the catheter is being introduced, the rod is progressively removed. Before removing the peeling sheath, the distal Dacron cuff is forced into the rectal sheet. The purse-ring suture is tightened snugly around the catheter. The external side of the catheter is brought out through a small superior lateral skin opening, leaving the proximal Dacron cuff subcutaneously. Open technique Under general anesthesia, a 5-10 cm long paraumbilical paramedian incision is made in the skin and subcutaneous layers. The anterior rectus muscle fascia is opened in the same direction and extension. The muscle fibers are split, exposing the posterior fascia/peritoneal membrane. A 0 Prolene purse-ring stitch is placed in this layer and in the center of it, a small opening is made. The PD catheter is introduced through this opening, directing the tip of the catheter to the pelvic area. The distal Dacron cuff is positioned outside of the posterior fascia/peritoneal membrane and the purse-ring suture is tightly tied around the catheter. The anterior rectus fascia is closed with a continuous 0 Prolene suture. The catheter is exited through a small skin incision superior/lateral, leaving the proximal Dacron cuff in the subcutaneous layer. Statistical analysis We summarized differences in continuous variables using mean/ median values depending on their distribution and categorical variables using percentages. We utilized t-test/Wilcoxon ranksum test for continuous and χ 2 test for categorical variables to assess differences between patients who lost their catheters versus those who did not. As a secondary analysis, we analyzed the independent effect of the peritoneoscopic versus open placement technique using Cox proportional hazard modeling after adjusting for demographics and comorbidities. We censored follow-up time at death, transfer to hemodialysis (with a functioning PD catheter) or loss to follow-up. We constructed Kaplan-Meier curves to plot catheter survival over the follow-up period. We used a two-tailed P-value ≤0.05 to determine statistically significant differences. All statistical analyses were performed using STATA 12 SE (StataCorp, College Station, TX, USA). 
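The survival comparison outlined above was carried out in STATA; purely as an illustration of the same steps (Kaplan-Meier estimation plus a covariate-adjusted Cox proportional-hazards model with censoring), a sketch using Python's lifelines package is given below. The data frame, its column names and the small ridge penalty are hypothetical choices for the example, not the study data or the authors' analysis code.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical catheter-level records: follow-up in months, whether the
# catheter was lost (1) or censored (0), placement technique and a covariate.
df = pd.DataFrame({
    "months":         [3, 14, 24, 7, 30, 18, 24, 5],
    "cath_lost":      [1, 0, 0, 1, 0, 1, 0, 1],
    "open_technique": [1, 0, 0, 1, 1, 0, 0, 1],
    "diabetes":       [1, 1, 0, 0, 1, 0, 1, 0],
})

# Kaplan-Meier catheter survival, stratified by placement technique
kmf = KaplanMeierFitter()
for is_open, grp in df.groupby("open_technique"):
    kmf.fit(grp["months"], event_observed=grp["cath_lost"],
            label="open" if is_open else "peritoneoscopic")
    print(kmf.survival_function_)

# Cox proportional-hazards model; the remaining columns enter as covariates.
cph = CoxPHFitter(penalizer=0.1)   # small ridge penalty keeps the toy fit stable
cph.fit(df, duration_col="months", event_col="cath_lost")
cph.print_summary()                # hazard ratios with confidence intervals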
Baseline characteristics From February 2004 to June 2014, a total of 155 ESRD patients had their first PD catheters inserted at the Icahn School of Medicine at Mount Sinai. Table 1 summarizes patients' baseline characteristics both overall as well as stratified by the type of catheter placement. The mean age was 55 years, 51% were males and 63.2% of patients were white. Diabetes mellitus was the most common cause of ESRD in our population, followed by hypertension and chronic glomerulonephritis/HIV-associated nephropathy (HIVAN). PD was the first modality of RRT in 103 (66.5%) patients. In 130 (83.9%) patients, catheters were used after 2 months of insertion, whereas in 19 (12.3%) patients, catheters were used <2 months, mostly in the setting of urgent-start PD. Three catheters had primary nonfunction, one was never used because the patient expired 5 days after PD catheter placement and one was never used because the patient was never started on PD. With the exception of lower body mass index in patients with peritoneoscopic placement, there were no significant differences in baseline characteristics by catheter placement technique. Complications and outcomes The infectious and noninfectious complications of patients with catheters placed by Y-TEC peritoneoscopic techniques are shown in Table 2. The most common infectious complication was Experience in peritoneoscopic catheter placement | 159 peritonitis, which occurred in 26% of patients. The most common noninfectious complication was scrotal leak, occurring in 4.8% of patients. There were no complications of hernias or catheter migration seen in the Y-TEC peritoneoscopically placed catheters. During follow-up, 8 (13%) patients lost their catheters. The most common reason for catheter loss was peritonitis in three patients. Other causes included mechanical dysfunction in two patients, abdominal wall leak in one patient and pleuroperitoneal fistula in one patient. One patient had a malposition; however, a new catheter was placed immediately. When comparing survival of Y-TEC placed catheters with those placed by open technique in a secondary analysis, although this difference was not statistically significant ( Table 2). In a secondary analysis, comparing the survival of Y-TECplaced catheters with those placed by the open technique, there was no difference in hazard ratios in catheter survival after adjusting for demographics, diabetes status and the number of previous catheter placements for overall follow-up. However, at 2 years of follow-up, open placement of catheters had a higher adjusted hazard ratio for catheter loss compared with Y-TEC peritoneoscopic placement [2.53 (95% CI 0.98-6.68), P = 0.06], which was close to statistical significance (Table 3.) This is likely since a majority of catheters were lost within 24 months, with the mean time Other causes of ESRD included interstitial nephritis (n = 2), amyloidosis (n = 2), medication induced (n = 6), polycystic kidney disease (n = 5), congenital (n = 3), sickle-cell disease (n = 1), chronic rejection post-transplant (n = 1), acute kidney injury (n = 2) and unknown (n = 11). Discussion In this article we present one of the largest series of Y-TEC peritoneoscopically placed PD catheters by a surgeon. We report an overall catheter survival rate comparable with previous reports of laparoscopic or open placement [8,[11][12][13], with an average of 80% survival at 2 years. Gadallah et al. 
[12] showed that a peritoneoscopically placed catheter (using Y-TEC) had fewer complications and higher survival rates compared with those placed by an open technique. A recent meta-analysis showed that peritoneoscopically placed catheters had a better 1-year survival and also less catheter migration than those placed via an open approach [10]. We also observed catheter leak occurring in 4.9% of patients, which is much lower than previously described [11,14]. Also, our rate of mechanical complications was 11%, compared with 17.8% in a recent study by Ouyang et al. [8]. Our report shows that Y-TEC peritoneoscopically placed catheters had a low complication rate and a similar survival rate compared with those placed surgically by an open technique. With regards to the type of catheter used, a recent systematic review and meta-analysis by Hagen et al. [9] favored survival of straight versus coiled PD catheters. Since all of our patients, except one, had coiled PD catheters, we are unable to compare differences in outcome between the two types of catheters. However, the fact that we did not find differences in outcome in our catheters compared with other published experiences with a greater use of straight catheters suggests that coiled catheters might be as safe. A majority of the PD catheters in the USA are placed by surgeons and by an open or laparoscopic technique. The peritoneoscopic technique has the advantage that it is done under local sedation, as opposed to a laparoscopic approach that is done with general anesthesia. Moreover, this technique allows acute use of the catheter as opposed to laparoscopy, which requires a healing time of at least 2 weeks. This report provides evidence that peritoneoscopically placed PD catheters could be utilized as a procedure of choice among surgeons providing access for PD. In summary, this decade-long, single-center experience with peritoneoscopically placed PD catheters by a surgeon demonstrated similar catheter survival rates but lower mechanical complications rates compared with previous reported studies using open or laparoscopic placement. Also of interest, coiled PD catheters showed no significant difference in outcome compared with straight catheters. Experience in peritoneoscopic catheter placement | 161
2018-04-03T04:47:13.923Z
2015-11-14T00:00:00.000
{ "year": 2015, "sha1": "9daa94400279a7014a2c62f66bbd19a93b44c82f", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/ckj/article-pdf/9/1/158/7450703/sfv113.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9daa94400279a7014a2c62f66bbd19a93b44c82f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5721357
pes2o/s2orc
v3-fos-license
The Anti-Phytophthora Effect of Selected Potato-Associated Pseudomonas Strains: From the Laboratory to the Field Late blight, caused by the oomycete Phytophthora infestans, is the most devastating disease of potato. In organic farming, late blight is controlled by repeated applications of copper-based products, which negatively impact the environment. To find alternative solutions for late blight management, we have previously isolated a large collection of bacteria from the phyllosphere and the rhizosphere of potatoes. Here we report the antagonistic potential of these strains when co-cultivated with P. infestans as well as with other potato pathogens. We then focused on three Pseudomonas strains and compared their protective impact against late blight to that of well-known biocontrol strains in planta using a high-throughput leaf disk assay with automated picture analysis. When sprayed on the leaves of potatoes in the greenhouse, the strains were able to survive for at least 15 days. Under field conditions, populations decreased faster but all tested strains could still be retrieved after 8 days. The most active strain in vitro, P. chlororaphis R47, was also the best protectant on leaf disks from plants grown in the greenhouse experiment, but its protection potential could not be verified in the field due to unfavorable infection conditions. However, its protective effect against P. infestans in planta, its survival in the phyllosphere as well as its ability to colonize the potato rhizosphere in very high population densities, suggest a potential for field application, e.g., in the form of tuber treatment or leaf spray. INTRODUCTION Over the last decades, the need to move from intensive agriculture to a more sustainable way of food production has risen in the awareness of growers and consumers. However, crop production is threatened by a variety of abiotic and biotic factors, such as changing climate or the occurrence of disease-causing agents. In potato production, yield losses are mostly due to the oomycete Phytophthora infestans, which causes late blight and can lead, upon favorable infection conditions, to massive destruction of the crop within a few days (Fry, 2008). In conventional farming, late blight is typically controlled through repeated applications of various fungicides, whereas copper-based products are commonly used in organic farming (Cooke et al., 2011;Axel et al., 2012). In view of its accumulation in the soil and of its toxicity toward the soil fauna, copper use represents an environmental hazard and alternative solutions to control late blight are needed to ensure sustainable potato production (Dorn et al., 2007). Biocontrol organisms have been suggested as a putative alternative to chemical protectants in the protection against diseases (Velivelli et al., 2014). Such antagonistic bacteria or fungi have been proven efficient under field conditions and some of them are available as commercial products and routinely used by farmers, such as Pseudomonas chlororaphis MA 342, which protects cereals against some seed-borne fungal diseases (Johnsson et al., 1998), or Bacillus amyloliquefaciens FZB42, which acts both as a plant growth promoting and as a biocontrol agent (Chen et al., 2007). However, although numerous studies have tested the effect of microbial inoculants on late blight (reviewed in Axel et al., 2012), none has so far demonstrated a protection against late blight in the field. 
This might be at least partly due to the fast spreading of Phytophthora during humid conditions, which is mediated by the production of sporangia that, depending on temperature, can either directly infect plant tissues or release motile zoospores, which in turn infect new leaves (Fry, 2008). A successful biocontrol agent would therefore need to be able to inhibit not only the pathogen's mycelial growth but also the formation and/or the germination of its sporangia, as was recently reported for a Lysobacter strain producing cyclic dipeptides (Puopolo et al., 2014). In an attempt to find such a biocontrol agent against potato late blight, we have previously isolated bacterial strains from the phyllosphere and rhizosphere of potato, which we hypothesized to be adapted to the host plant (Hunziker et al., 2015). In this earlier work, we reported the ability of potato-associated bacteria to inhibit growth and sporulation of Phytophthora infestans through the emission of volatile organic compounds (Hunziker et al., 2015). However, the gap between results obtained in controlled laboratory conditions and true protective potential in the field can be very large (Dorn et al., 2007) and further studies are needed that include testing the strains' ability to establish in sufficient densities in the targeted plant organs (roots vs. shoots) and their efficiency in planta as well as under field conditions. The present study investigates these questions using a subset of Pseudomonas strains which showed promising protective effects in vitro. The strains' root and leaf colonization capacity as well as their effect on plant growth and development were assessed. Their protective effect in planta was then analyzed using a newly developed high through-put leaf disk assay. Finally, the strains' survival and efficiency under field conditions was monitored in a microplot experiment. Chemicals and Culture Media Luria-Bertani medium (LB) was used to cultivate bacteria. LB agar plates were prepared by dissolving 20 g L −1 of Difco LB Broth, Lennox (BD) mixed with 15 g L −1 agar (Agar Agar, ERNE surface AG). PIA medium was prepared by dissolving 45 g L −1 of Pseudomonas Isolation Agar (Fluka) in distilled water to which 20 ml L −1 of glycerol (Sigma-Aldrich) was added. Fungi and oomycetes grew on rye agar (RA), malt agar (MA), or potato dextrose agar (PDA). RA was prepared by simmering 200 g rye grains (winter rye cv. Picasso95) in tap water for ca. 1 h. The filtered liquid (1.5 mm mesh) was filled up to a volume of 1 L with tap water and 20 g L −1 agar was added. For the initial screen (Table 1), RA without glucose was used, for later experiments, RA was supplemented with 5 g L −1 glucose. MA was prepared with 15 g L −1 Difco malt extract agar (BD) and 12 g L −1 agar, and PDA contained 39 g L −1 PDA (Oxoid). One experiment was performed on water agar (WA) containing dH 2 O and 6 g L −1 agar. When needed, rifampicin (PanReac AppliChem) was added at a concentration of 50 µg mL −1 . Strains, Culture Conditions and Preparation of Inoculation Suspensions The bacterial strains used in this study are described in a previous publication (Hunziker et al., 2015). Additionally, Pseudomonas protegens CHA0 and Pseudomonas DSMZ 13134 were included as controls in most experiments. Dickeya dianthicola was obtained from S. Schaerer (Agroscope). 
For the greenhouse and field experiments, rifampicin-resistant derivatives of selected strains were obtained by streaking a high density of pure bacterial culture onto a LB plate containing rifampicin (50 µg mL −1 ). Spontaneous rifampicin resistant colonies were visible 2 days later; one colony for each strain was streaked on a fresh LB plate containing the same concentration of rifampicin. After 2 days, glycerol stock was prepared with each stable mutant strain. Bacterial strains were kept at −80 • C in 25% glycerol for long-term storage. A polyspore isolate of Phytophthora infestans sampled in 2001 in Zurich Affoltern (provided by H. Krebs, Agroscope) was used for all experiments. This isolate was grown on RA in the dark at 18 • C, and was regularly transferred to potato tuber slices for host passages. The fungi Rhizoctonia solani, Helminthosporium solani, obtained from P. Frei (Agroscope), were grown on PDA and MA respectively. Fusarium oxysporum was recovered from a contaminated P. infestans host passage in 2013 and was grown on RA. Long-time storage of all fungi was done in 25% glycerol at −80 • C. All potato pathogens were continuously grown on agar media and agar plugs (ø 5 mm) were transferred to fresh medium plates when borders of the previous plate were reached. Plates were stored in the dark at ca 20 • C. To obtain P. infestans sporangia suspensions, mycelium was detached from overgrown agar plates and suspended in tap water. The suspension was filtered through autoclaved gauze and the number of sporangia adjusted to the desired final concentration using a Thoma chamber (Marienfeld Superior, Germany). The suspension was stored at 5 • C in the dark until use. Unless otherwise specified, bacterial suspensions were prepared by harvesting overnight LB agar cultures and resuspending the cells in 0.9% NaCl. For the microplot field application, densities were adjusted by adding tap water. In Vitro Screening of Bacterial Strains for Activity Against Different Potato Pathogens Thirty two bacterial strains showing antagonistic potential in a preliminary screening (Hunziker et al., 2015) were tested in Plates were incubated at 20 • C in the dark and the pathogen growth area was assessed by picture analysis (after 3 days for R. solani, 7 days for D. dianthicola and F. oxysporum, 14 days for P. infestans and 28 days for H. solani) using the image processing program ImageJ (Schneider et al., 2012). This experiment was carried out in four replicates per bacterial strain (five control plates). The average growth of the pathogens in presence of the different strains was compared with that of the pathogens grown in absence of the strains (control) and a percentage of control growth was calculated. Greenhouse Experiment Rifampicin-resistant derivatives of selected Pseudomonas strains (R47, R76, S35, CHA0, Pseudomonas DSMZ 13134, see section above for the generation of these strains) were inoculated onto potato tuber sprouts and tested for their effect on plant development (cv. Charlotte and cv. Victoria) and for their survival in the rhizosphere. For the sprout inoculation experiment, bacterial cells were suspended in 0.9% NaCl and adjusted to OD 570 = 1. The sprouts were moistened by spraying them with a solution containing 0.9% NaCl and subsequently 1.5 mL of the bacterial cell suspension was pipetted next to the sprout. To prevent a wash out of the inoculated cells, pots were not watered during the first 24 h following inoculation. 
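In the in vitro screen described earlier in this section, pathogen growth areas were measured from plate photographs in ImageJ and expressed as a percentage of the untreated control. The Python sketch below is only an illustration of that kind of measurement, not the authors' ImageJ workflow: the thresholding rule, file name and replicate values are hypothetical, and a real analysis would need plate masking and calibration.

import numpy as np
from skimage import io, color

def growth_area_fraction(image_path, deviation=0.15):
    # Fraction of plate pixels whose grey level deviates from the background
    # (median grey) by more than `deviation` - a crude stand-in for the
    # growth-area measurement performed in ImageJ.
    grey = color.rgb2gray(io.imread(image_path)[..., :3])
    background = np.median(grey)
    return float((np.abs(grey - background) > deviation).mean())

def percent_of_control(treated_areas, control_areas):
    # Mean pathogen growth with a strain, expressed as % of control growth.
    mean_treated = sum(treated_areas) / len(treated_areas)
    mean_control = sum(control_areas) / len(control_areas)
    return 100.0 * mean_treated / mean_control

# Hypothetical area fractions: four plates with a strain, five control plates
treated = [0.05, 0.04, 0.06, 0.05]
control = [0.42, 0.45, 0.40, 0.44, 0.43]
print(f"{percent_of_control(treated, control):.1f} % of control growth")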
Additionally bacteria suspensions of the same strains were sprayed onto potato leaves (∼15 mL per plant), in order to assess the survival of the bacterial population for a period of 15 days. Inoculation of potato sprouts was done 1 day before potting, by the application of 1.5 mL unwashed bacterial cells suspended in 0.9% NaCl and adjusted to OD 570 = 1 (0.9% NaCl was used for the control). Each treatment was replicated six times. BBCH stage and plant height (distance between the soil surface and the apical meristem) were measured once a week between the period from sprouting until flowering (4-33 days after planting). The survival of the sprout-inoculated Pseudomonas strains in the rhizosphere was assessed 11 weeks after potting. To this end, the washed root system of two replicate plants was cut in small pieces and suspended in 15 mL 0.9% NaCl. After a sonication step of 5 min, the suspension was tenfold serially diluted and 100 µL of each dilution was plated on a PIA plate supplemented with rifampicin (50 µg L −1 ). After 3 days of incubation at 20 • C in the dark, colony forming units (CFUs) were counted from the most appropriate dilution. The survival of bacteria sprayed on potato leaves (OD 570 = 1) was investigated within a period of 2 weeks at the days 1, 3, 8, and 15. From each treatment three 5 mm diameter leaf disks were cut and suspended in 200 µL 0.9% NaCl. The plant tissue was homogenized using a small plastic pestle and after sonication and tenfold serial dilution (see above), 5 µL of each dilution was spotted twice on a LB plate containing rifampicin, which was then lifted to let the drop fall and spread the CFUs. After 3 days of incubation at 20 • C, CFUs were counted from the most appropriate dilution. Establishment of a High Through-put Potato Leaf Disk Assay with Automated Picture Analysis to Monitor P. infestans Infection In order to determine the appropriate sporangia concentration as well as the application strategy aiming at optimum infection pressure (necrosis development and sporangiophore appearance on the leaf surface), sporangia suspensions in different concentrations (6.25 . 10 4 , 1.25 . 10 5 , 2.5 . 10 5 , 5 . 10 5 sporangia mL −1 ) were applied in a 10 µL drop on the upper or the lower side of potato leaf disks (ø 17 mm), cut from potato plants cv. Victoria 39 days after planting. Leaf disks were placed on a previously watered filter paper in a standard Petri dish and inoculated with the respective sporangia suspension. The Petri dishes were placed in a lightproof plastic box and incubated at 18 • C for a period of 8 days. When the leaf disks showed first infection symptoms (after 3 days), daily pictures (dimensions 5184 × 3456) were taken with a reflex camera (Canon EOS 700D) and the increase of necrotic plant tissue (days 3-7) and sporangiophore cover (day 8) was analyzed by the automated picture analysis macroinstructions developed for this purpose in the freeware program ImageJ (see Supplementary Material). Testing the Protective Potential of Bacteria Applied on Leaf Disk Against P. infestans Using this newly developed leaf disk method with automated picture analysis, the effect of selected bacterial strains (Pseudomonas strains R47, R76, S35, CHA0, DSMZ 13134) on disease progression was monitored. To this end, bacteria and sporangia suspensions were mixed and applied on the lower side of leaf disks (cv. Victoria, 18 replicates). The final sporangia concentration was 1.25 . 
10 5 mL −1 and bacteria were applied at two population densities: OD 570 = 0.3 and 3 (corresponding approximately to 2 . 10 8 and 2 . 10 7 cells/ml). The experimental set up was the same as described above and after the application of 10 µL of the combined suspensions, the leaf disks were incubated for 8 days at 18 • C. The necrotic leaf tissue and the sporangiophores were measured with the automated picture analysis macroinstructions (see Supplementary Material) 4 days after inoculation and 8 days after inoculation respectively. A separate set of leaf disks inoculated with the mixed suspension was used to take microscope pictures of the sporangia, which were exposed to the bacteria at a concentration of OD 570 = 3 (corresponding approximately to 2 . 10 9 cells/ml). Pictures were taken 4 days after inoculation, when the necrotic area of the control treatment reached 60% of the leaf disk area. Sporangia Germination The sporangia germination in mixed sporangia-bacteria suspension was analyzed, when sporangia were exposed to the strains in population densities of OD 570 = 3, OD 570 = 0.3 and OD 570 = 0.03 (0.9% NaCl was used as control). Fifteen micro liter of the mixed suspension was applied on a 0.6% WA plug placed on microscope glass slides. The sporangia germination behavior was assessed after 3 days of incubation at 18 • C in the dark. Treatments and controls were replicated four times and randomly placed on the glass slides. Additional control plugs (n = 20) were incubated separately from treated plugs to verify whether the control plugs incubated on the same glass slides as the treated ones would be influenced in any way. The number of germinating sporangia per plug was calculated as percent of germinated sporangia relative to the total number of sporangia (23-106 per plug depending on sporangia density). This percentage was then compared to the germination percentage of the control. Microplot Experiment In order to determine the protection potential of bacterial strains under field conditions, a microplot experiment was carried out in Zurich Affoltern, Switzerland. Each microplot consisted of one row of five potato plants. Per treatment, four replicates were allocated to four blocks in which they were randomly distributed. Each block was surrounded on all sides by a single row of border plants of the cultivar Panda (low susceptibility to late blight). The blocks with borders were separated from one another by a single -Steenblock and Forrer, 2005). The treatment interval ranged between 6 days and 2 weeks. In total, six treatments were carried out. For the suspensions, overnight bacterial cultures were suspended in water and supplemented with 0.1% Nu-Film 17 R (Andermatt Biocontrol), a wetting agent intended to enhance adhesion of the cells to the leaf surface. The concentration of the suspensions was adjusted to OD 570 = 1. The suspensions were sprayed from above and below on the plants' leaves, each plant receiving approximately 50 mL of suspension per application. After the last application, which occurred 106 days after planting, the survival of sprayed bacteria was assessed one and 8 days after this last spraying. To this end, three leaf discs (ø 17 mm) were cut and suspended in 2 mL 0.9% NaCl. The leaf tissue was further homogenized using a Polytron PT300 homogeniser (Kinematica AG), with which the samples were shredded by 6000 rpm during approximately 30 s each. After 5 min in a sonication bath, samples were tenfold serially diluted. 
Five micro liter of each dilution was spotted twice on a PIA plate containing rifampicin and incubated at 20 • C in the dark. CFUs were counted from the most appropriate dilution after 3 days incubation. In order to assess the protective potential of the strains, a leaf disk experiment was performed 1 day after spraying as described above. Sporangiophore cover was assessed 6 days after inoculation of the leaf disk. Statistical Evaluation In the initial screen (Table 1), a Student's t-test was used to compare the growth inhibition of each bacterial strain to the negative control. The evaluation of subsequent experiments was done with GraphPad Prism version 5.01 (GraphPad Software, San Diego, CA, USA), performing one-way ANOVAs with Tukey's post hoc tests. Growth Inhibition of Pathogens by Potato-associated Bacterial Strains in Dual Cultures Thirty two potato-associated bacterial strains previously identified as potentially active based on their emission of volatiles (Hunziker et al., 2015) were tested for their effects in direct competition assays with five different potato pathogens: the oomycete Phytophthora infestans, the fungi Helminthosporium solani, Rhizoctonia solani, Fusarium oxysporum and the bacterium Dickeya dianthicola. Among those 32 strains, five Pseudomonas were able to significantly inhibit the growth of each target organism, although R. solani and F. oxysporum were inhibited to a much lesser extent than P. infestans and H. solani (Table 1). This differential reaction of the target organisms to the bacterial strains was further illustrated, e.g., by the fact that R. solani was more inhibited in its growth by Pseudomonas strains than by the other strains, while Bacillus strains R73 and R54 were able to drastically reduce the growth of both P. infestans and H. solani and the Streptomyces strain S01 was the best inhibitor of F. oxysporum (60% of its control growth) ( Table 1). Within the genus Pseudomonas, which in general inhibited P. infestans more strongly than other targets, the strains affiliated P. frederiksbergensis (e.g., S04, S06, S19, S24, R74) all impacted H. solani more than P. infestans. In contrast, the two Arthrobacter strains R61 and R60 drastically reduced the growth of P. infestans but barely affected the other target organisms. Based on these results, we selected three promising strains, which showed significant in vitro growth inhibition of all pathogens, for further analysis: P. chlororaphis R47, the strain with the highest in vitro inhibition of P. infestans, as well as P. fluorescens R76 and P. marginalis S35, which showed comparable in vitro inhibition but were isolated from different plant parts (rhizosphere for R76 and phyllosphere for S35). In the following experiments, two control Pseudomonas strains were included for comparison purposes, P. protegens CHA0 (Voisard et al., 1989) and Pseudomonas sp. DSMZ 13134. Effects of Sprout-inoculated Bacterial Strains on Growth and Development of Potato Plants As a first step toward evaluating the potential of these three selected strains for practical application, it was assessed whether they showed any phytotoxic effect when applied onto the potato tubers. The bacterial strains were inoculated on the tuber sprouts of two different potato cultivars, Victoria (moderately susceptible to late blight) and Charlotte (highly susceptible to late blight), which were monitored for 33 days after planting (Supplementary Figure S1). 
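As an aside on the dilution plating used in the survival assays above (three ø 17 mm leaf disks homogenised in 2 mL, 5 µL spots of a tenfold dilution series), the colony counts convert back to a leaf-surface density as sketched below; the example count and dilution step are invented.

import math

def cfu_per_cm2(colonies, dilution_step, spot_volume_ml,
                suspension_volume_ml, n_disks, disk_diameter_mm):
    # CFU per cm^2 of leaf, from a spot count at one tenfold dilution step.
    cfu_per_ml = colonies / spot_volume_ml * 10 ** dilution_step
    total_cfu = cfu_per_ml * suspension_volume_ml
    disk_area_cm2 = math.pi * (disk_diameter_mm / 20.0) ** 2   # radius in cm
    return total_cfu / (n_disks * disk_area_cm2)

# Hypothetical count: 32 colonies in a 5 uL spot of the 10^-3 dilution
print(cfu_per_cm2(colonies=32, dilution_step=3, spot_volume_ml=0.005,
                  suspension_volume_ml=2.0, n_disks=3, disk_diameter_mm=17))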
No significant difference in overall growth between the plants developing from inoculated and non-inoculated sprouts could be observed, thus excluding a phytotoxic effect of the strains (Figure 1). No growth promotion was observed either, but some strains led to a more constant growth, i.e., to less variability between individual plants, for instance P. fluorescens R76 in the cultivar Charlotte and P. protegens CHA0 in both cultivars (Figure 1). Survival of the Strains in the Phyllosphere and Rhizosphere of Potato Plants The ability of the three selected strains to survive in the phyllosphere and in the rhizosphere was assessed in a greenhouse experiment, using rifampicin-resistant derivatives of the strains. The survival of bacteria in the phyllosphere was monitored at different intervals within a 15 day period after spraying the biocontrol agents onto the leaves of potato plants. After a strong decrease in population abundance within the first few days, the levels stayed almost constant during the second week and remained at ca. 100 CFUs per cm 2 (Figure 2). All bacterial strains survived in the two tested cultivars for the entire tested period. For the survival in the rhizosphere, washed roots from 11 week-old sprout-inoculated potato plants were used. The abundance of the retrieved inoculated bacteria is shown in a semi-quantitative way in Table 2. In general, the number of bacterial CFUs from the root system of the cultivar Victoria was higher than that retrieved from the root system of Charlotte. All strains were able to establish in the rhizosphere and to compete with the natural potato microbiome, thus demonstrating their rhizosphere competence. The strain originally retrieved from the phyllosphere (S35) showed similar to higher colonization densities as the closely related P. fluorescens strain isolated from the rhizosphere (R76) ( Table 2). However, highest colonization capacity was observed for P. chlororaphis R47: this strain was not only found in very high abundance in the pots where it had been inoculated, but it was also retrieved in significant amounts from neighboring pots (pots had been randomly placed in the greenhouse but in trays where they were connected through the bottom upon watering events), suggesting very high rhizosphere competence. It cannot be excluded that due to this invasion of P. chlororaphis R47, the other strains were restrained in their rhizosphere colonization and that the data presented in Table 2 therefore underestimate their true colonization potential. Evaluating the In Planta Protection Potential of the Strains Using a Leaf Disk Method As a next step toward the evaluation of the bacterial strains' in planta protection potential, a high through-put leaf disk setup was developed (see Material and Methods for details). Applying the Phytophthora sporangia solution onto the bottom side of the leaf disk resulted in quicker disease progression than when sporangia were inoculated on the upper side (Supplementary Figure S2). Moreover, the drop containing the sporangia stayed in place until the end of the experiment, whereas when inoculated onto the upper side of the leaf disk, it often erratically spread and led to multiple infection starting points. To analyze the data in an automated and quantitative manner, a macroinstruction was developed in the freeware ImageJ to measure the necrosis development (days 3-7) and the formation of sporangiophores (day 8). Using a 10 µl drop of a 1.25 . 
10 5 mL −1 sporangia solution led to clearly distinguishable necrosis development curves and a significant covering of the leaf disk surface by sporangiophores after 8 days (Figure 3). Using the established leaf disk method, the effect of the three selected bacterial strains were investigated in planta in the greenhouse using Victoria as potato cultivar. Victoria, rather than Charlotte, was chosen for the greenhouse and field experiments due to its lesser susceptibility to late blight. Leaf disks from sprout-inoculated potato plants did not show greater tolerance to P. infestans than disks from non-inoculated control plants, suggesting that the inoculated strains did not induce resistance (data not shown). When bacteria were applied on the leaves at high concentration, all strains but P. marginalis S35 inhibited the formation of Phytophthora-induced necrotic lesions ( Figure 4A). However, when a tenfold dilution of the inoculum was used, only the P. protegens CHA0 strain reduced necrotic area significantly, while the others did not. P. fluorescens R76, which strongly inhibited the formation of necrosis (8% of the leaf disk surface vs. 48% for the untreated control), was unable to reduce the formation of sporangiophores ( Figure 4B). In contrast, P. chlororaphis R47 and P. protegens CHA0 significantly inhibited sporangiophore production in both inoculum densities tested. A very strong concentrationdependency was observed for Pseudomonas sp. DSMZ 13134, which conferred excellent protection when applied in high concentrations, but was inefficient or even favoring infection when applying a lower concentration. Autoclaved Pseudomonas sp. DSMZ 13134 cells did not confer any protection, suggesting that living bacteria are required for plant protection against P. infestans. Inhibition of Sporangia Germination by the Bacterial Strains The first step in Phytophthora's infection process is the germination of sporangia and/or zoospores. We therefore assessed whether the bacterial strains used in the leaf disk experiments were able to inhibit this critical step. The overall germination rate in control treatments was about 35%. When applied at high densities, all strains induced a significant FIGURE 2 | Survival of bacteria in the phyllosphere. The survival of sprayed bacteria on the leaf surface was measured over a period of 15 days in the cultivars Charlotte (A) and Victoria (B). Their abundance at defined intervals is shown in CFUs cm −2 (SEM, n = 6). One-way ANOVA revealed significant differences between the strains after 3 days [F(4) = 3.92, P = 0.013 for cv. Charlotte and F(4) = 5.01, P = 0.004 for cv. Victoria], but none after 8 days and only a marginal one for cv. Charlotte at 15 days [F(4) = 2.77, P = 0.049]. reduction in the percentage of germinated sporangia ( Figure 5A). The same observation could be made with intermediate densities, except for the control strain P. protegens CHA0, which was not significantly different from the untreated control. Low densities of bacteria (OD = 0.03) were ineffective in reducing sporangia germination (Figure 5A). In addition to the reduced germination rate, morphological abnormalities such as hyphal swelling and shorter germination tubes could be observed upon treatment with the bacterial strains ( Figure 5B). No correlation was observed between the in vitro effects of the strains on sporangia germination and the protection potential observed on leaf disks. 
However, when microscopic observations were done on the sporangia drop applied onto the leaf disks, it seemed that the strain S35, which was unable to protect leaf disks against P. infestans, also did not inhibit sporangia germination as drastically as the other strains (Supplementary Figure S3). Survival and Protective Potential of the Strains in Field Conditions To assess whether our selected strains would be able to protect potato against P. infestans under field conditions, we carried out a microplot experiment where potato plants were regularly sprayed with a suspension of the bacterial strains. After the last application, both survival and protection potential were assessed. One day after spraying the bacterial suspensions on the plants, all strains were still present in relatively high abundance (10 5 -10 6 cm −2 ) (Figure 6), but their population density dropped within the next days: after 8 days, only few cells per square centimeter of leaf could be retrieved. The microplot experiment was carried out to monitor the protective effects of the strains in field conditions, i.e., with natural P. infestans infection. Since this natural infection was prevented by very hot and dry weather conditions during July and August, we tested the protective effect of the strains with our leaf disk experimental setup. This revealed that the leaf disk method was, at least in our experiments, better suited for the greenhouse screen, since the infection was much less efficient in field-grown plants (Figure 7). The infected and the non-infected controls did not differ significantly from each other, which was mostly due to a lesser infection rate of the infected controls (see also Figure 4). Therefore, even though significant protective effects were observed after treatment with the strain P. moraviensis S35 as well as the two control strains Pseudomonas sp. DSMZ 13134 and P. protegens CHA0, these results should be interpreted with caution. DISCUSSION Using microorganisms as biocontrol agents seems an appealing strategy for sustainable crop protection: in the last decades, much effort has been made to isolate, characterize and use microbial strains to this end, yet the bacterial antagonists available on the market are so far only few (Velivelli et al., 2014). Indeed, many criteria have to be fulfilled for such a biocontrol agent to find its way to the farmer (Köhl et al., 2011). The first step usually taken to select for candidate biocontrol agents is an in vitro screening in the laboratory, such as the one we carried out against potato pathogens in the present study ( Table 1). Compared to the first screen of the potato-associated strains reported in (Hunziker et al., 2015), which focused on volatiles, the present evaluation of the strains' potential activity against a broad range of potato pathogens yielded slightly different results: while the Pseudomonas strains classified as the best producers of antifungal volatiles (Hunziker et al., 2015) also were very active in the present study, other, non-Pseudomonas strains such as the two Bacillus strains R73 and R54 or the Streptomyces S01 were much more inhibitory to the potato pathogens when their diffusible substances, rather than only their volatiles, came into play. Moreover, the selective inhibition of Helminthosporium solani by strains affiliated with the species P. frederiksbergensis FIGURE 3 | Disease progression on potato leaf disks. 
Pictures were taken daily between 2 and 8 days after inoculation and processed to quantify necrosis area (days 2-7) and sporangiophore production (day 8). Averages and standard errors are shown (n = 18). Pictures were taken with illumination from the top for necrosis measurement and with illumination from the side for sporangiophore measurement. One representative picture and its processed counterpart are shown per day. Frontiers in Microbiology | www.frontiersin.org seem to originate from diffusible substances, as these strains did not produce volatile compounds inhibiting H. solani (Hunziker et al., 2015). However, the results obtained in such in vitro experimental setups only represent a metabolic potential of the strains grown on rather rich laboratory media, and no guarantee can be offered that the strains that are active in vitro will also be in planta. The reasons for this first screen in the laboratory are mainly time and space constraints. Therefore, developing a space-and time-efficient screening procedure, which would allow testing the biocontrol agents at a very early stage already on plant material rather than on artificial laboratory media would potentially yield better-suited candidate strains for further investigations. This is why we developed a high through-put leaf disk method allows the monitoring over time and the automated quantification of disease progression by picture analysis. The main difficulty in developing a reliable macroinstruction to quantify the sporangiophore production came from the white background originating from leaf veins and trichomes. Nevertheless, provided the macroinstructions are carefully adapted to each new experimental setup, the automated leaf disk quantification method represents a big step forward for the selection of promising biocontrol strains, since it provides the means to obtain in a time-and space-efficient manner quantitative data on the in planta disease protection potential of the strains of interest. While this assay was in our hands well-suited for greenhouse-grown plants, it was not suitable to evaluate disease progression on material from field-grown plants, since the control plants (of the same cultivar, Victoria) developed little infection (less than 20% of leaf surface infected, compared with over 30% for leaf disks from greenhouse plants, Figures 4 and 7). Field-grown plants are expected to show a basic level of resistance to diseases due to the multifarious biotic and abiotic stimuli they encounter in nature (Walters, 2009). Moreover, plants grown in the greenhouse show less abundant and less diverse microbial colonization than field-grown plants (data not shown), which might also explain their higher susceptibility to P. infestans infection, as the role of the plant microbiome in disease protection and induction of resistance is becoming increasingly clear (Bakker et al., 2013;Pieterse et al., 2014). The leaf disk method and the subsequent automated image analysis developed in the present study is therefore not meant to replace whole-plant analysis and field trials, but represents an efficient tool to replace the in vitro screening and to select for antagonists that are able to inhibit the pathogen when both organisms grow on the host plant rather than on rich laboratory media. 
Since the co-inoculation of both antagonist and pathogen would not allow to see induced effects, the setup might be adapted by either applying the antagonists 1 day before the pathogen or by spraying whole plants with the antagonists and thereafter cutting leaf disks, infecting them with P. infestans and monitoring disease progression. In our case, the strain that showed the highest in vitro activity turned out to also be the most efficient when tested on leaf disks from greenhouse-grown plants (Table 1, Figure 4), although the very few strains selected in this study do not allow any generalization of this observation. This Pseudomonas strain was affiliated to the P. chlororaphis species according to its rpoD sequence (Hunziker et al., 2015), a species to which the active ingredient of the product Cerall R also belongs (Johnsson et al., 1998;Velivelli et al., 2014). This affiliation was confirmed by phylogenetic analysis based on four housekeeping genes (16S rRNA,gyrB,rpoB,rpoD), which placed this strain in a cluster comprised of P. chlororaphis and P. protegens strains (De Vrieze et al., 2015). Both species are part of the larger P. fluorescens group and recent phylogenetic studies indicate a close proximity between P. chlororaphis and P. protegens (Gomila et al., 2015). Strains belonging to both species include well-known biocontrol agents against pathogenic fungi (Haas and Defago, 2005) but also against insects (Kupferschmied et al., 2013;Ruffner et al., 2013). Preliminary inspection of the genomic potential of P. chlororaphis R47 revealed that this strain shows a similar toolset of antibiotics as other P. chlororaphis strains according to a recent study comparing Pseudomonas strains (Loper et al., 2012): the genome of P. chlororaphis R47 encodes the synthesis of the antibiotics hydrogen cyanide, phenazines, pyrollnitrin and 2-hexyl-5-propyl-alkylresorcinol (HPR) (data not shown), which might be involved in its anti-Phytophthora activity. Moreover, siderophore (pyoverdine, achromobactin) production is also encoded in the genome and might contribute to the strains' antifungal activity and, perhaps more importantly, to its remarkable rhizosphere competence (Ghirardi et al., 2012). Indeed, P. chlororaphis R47 strain seemed to largely surpass the other strains tested in terms of rhizosphere colonization ( Table 2), an important feature considering the practical advantages (feasibility and cost-efficiency) of tuber treatment compared with leaf spraying. However, to successfully inhibit late blight at the shoot level, the biocontrol agent should be able to either induce resistance or to systematically colonize the upper parts of the plants. Although induction of resistance in potato has been shown for other Pseudomonas strains (Arseneault et al., 2014;Pieterse et al., 2014), leaf disks from P. chlororaphis R47-tuber inoculated plants did not show increased resistance to late blight (data not shown), suggesting that this strain was not able to induce long-lasting resistance to late blight after tuber inoculation. Preliminary data suggest that P. chlororaphis R47 might be able to move from the tuber to the upper parts of the plants, which would be an important feature for late blight protection. Such an endophytic colonization has been shown for other plant-growth promoting and biocontrol Pseudomonas, such as P. poae in sugar beet (Zachow et al., 2015) or P. putida in potato (Andreote et al., 2009). 
For non-endophytic microbes, the challenge of a successful application as leaf spray is particularly high in view of the harsh conditions that prevail in the phyllosphere, such as UV irradiation, as well as extreme variations in temperature and humidity. In our greenhouse experiments, all tested bacterial strains were able to persist for 2 weeks, but in the field, their abundance underwent a more rapid decrease, although all inoculated strains could be retrieved after 8 days in our microplot experiment (Figure 6). In addition to the abiotic stresses prevailing in field conditions, the higher complexity of the microbiome in field-grown plants is also likely to reduce the ability of introduced bacteria to establish in leaves as well as in roots, due to the higher competition they are facing. The colonization ability of the strains might thus depend on the residing microbiome as well as on the plant variety: in our root colonization assay, consistently more bacteria could be retrieved from the rhizosphere of cv. Victoria than from that of cv. Charlotte ( Table 2). As in many studies preceding the current one, going from the greenhouse to the field proved a challenging step. However, we selected only few strains that had been pre-screened and characterized based on in vitro experiments. We hope that the leaf disk-based automated picture analysis of disease progression developed in the frame of this study will enable to skip this first time-consuming step of in vitro tests and to directly test the antagonists' protective potential on plant material from different potato cultivars differing in their susceptibility to late blight. In a second step, the selected strains should be tested for their ability to establish sufficient population densities on field-grown plants that harbor their own, complex native microbiome, since colonization of these plants might be more challenging than that of greenhouse-grown plants harboring less complex microbial communities. Beneficial or antagonistic interactions between the inoculated strain and the resident microbiome, whose composition will change according to biotic and abiotic factors, might be important factors that support or prevent successful establishment of a biocontrol agent. Those strains showing in planta anti-oomycete activity in a broad range of cultivars, as well as the ability to establish on already colonized plants would then represent better candidates for the timeconsuming field trials than those selected solely based on in vitro tests in Petri dishes. Finally, future research will tell whether the best use of microbial control agents to limit late blight in tomorrow's potato production will reside in application of single, highly potent strains, or of a combination of strains with different and therefore complementary abilities, or in a more global approach involving microbiome engineering and microbiome-driven selection (Mueller and Sachs, 2015). AUTHOR CONTRIBUTIONS AG, LW, and AB designed the research, AG, MV, DB, RG, and NB performed experiments, AG, MV, and AB analyzed the data, LW and AG wrote the MS with help from MV, AB, NB, TM, and RG.
2016-05-12T22:15:10.714Z
2015-11-27T00:00:00.000
{ "year": 2015, "sha1": "e853055348d86cf71833ea385babde9f2a5006e0", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2015.01309/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e853055348d86cf71833ea385babde9f2a5006e0", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
6273246
pes2o/s2orc
v3-fos-license
Computational Complexity of Iterated Maps on the Interval (Extended Abstract) The exact computation of orbits of discrete dynamical systems on the interval is considered. Therefore, a multiple-precision floating point approach based on error analysis is chosen and a general algorithm is presented. The correctness of the algorithm is shown and the computational complexity is analyzed. As a main result, the computational complexity measure considered here is related to the Ljapunow exponent of the dynamical system under consideration. Introduction Consider a discrete dynamical system (D, f ) on some compact interval D ⊆ R, called the phase space, given by a function f : D → D, a recursion relation x n+1 = f (x n ) and an initial value x 0 ∈ D. The sequence (x n ) n of iterates is called the orbit of the dynamical system in phase space corresponding to the initial value x 0 . If such a dynamical system is implemented, that is a computer program is written for calculating a finite initial segment of the orbit for given x 0 , care has to be taken in choosing the appropriate data structure for representing real numbers. Traditionally, IEEE 754 double floating point numbers [10] are used. However, if the dynamical system shows chaotic behavior, a problem arises. The finite and constant length of the mantissa of a double variable causes round off errors which are magnified after each iteration step. Only after a few iterations, the error is so big that the computed values are actually useless [12]. To put things right, a rigorous method for computations with real numbers has to be used. In [2], this issue is addressed for the logistic map which is also consiedred as a starting point in the next section. There, the exact real arithmetic in the form of centered intervals with bounded error terms is used as described in [3]. However, the notation used in [2,3] is an algebraic one based on arbitrary large integers. On the other hand, the aim of the present paper is to keep the notation as close as possible to the standard in scientific computing but being precise in the sense of exact real arithmetic. This has as a consequence first that the basic data type is not an integer as considered in [2,3], but a floating point number with a definite mantissa length. Second, the type of error considered here is the relative error as is standard in floating point arithmetic -in contrast to the absolute error considered in [2,3]. In practice, a multiple precision floating point library providing floating point numbers with arbitrary high mantissa length have to be used. In the following, it is analyzed how the needed mantissa length behaves in multiple-precision computations of iterates of discrete dynamical systems. The mantissa length needed for floating point numbers such that any computed point of the orbit has a specified and guaranteed accuracy is examined. Therefore, a precise mathematical framework for floating point computations has to be established. The main result shows that the ratio of mantissa length to iteration length in the limit of iteration length to infinity is related to the Ljapunow exponent. Comparing, in [2] only the logistic map is considered explicitly and the connection to the Ljapunow exponent is not stated, but observed numerically. In the present paper, this connection is shown mathematically for a general discrete dynamical system (D, f ). This result also gives some advice for economically designing exact algorithms simulating one-dimensional discrete dynamical systems. 
Roundoff Error, Error Propagation and Dynamic Behavior

In this section, the discrete dynamical system (D, f_µ) with D = [0, 1] and f_µ(x) = µx(1 − x) for some control parameter µ ∈ (0, 4] is investigated. In the literature, the recursion relation x_{n+1} = f_µ(x_n) is called the logistic equation [5]. When implementing the logistic equation on a real computer and demanding exact values for the orbit (x_n)_n, the analysis of roundoff errors and of error propagation requires some care. This is due to the fact that for some values of µ the dynamics is highly chaotic and therefore inaccuracies are magnified exponentially in time [6,9]. In the following, for a given initial value x_0, the true orbit is denoted by (x_n)_n, whereas the actually computed orbit, suffering from roundoff errors and error propagation, is denoted by (x̂_n)_n. Note that even x̂_0 may differ from x_0, since the conversion to a floating point number may cause the very first roundoff error. One goal of this section is to give a rigorous estimation of the total error in dependence of the iteration step n.

Calculating the orbit (x̂_n)_n, two types of error are present: first, error propagation due to the iteration scheme, and second, the roundoff error caused by the calculation of f_µ. Now, let x̂_n for some n ∈ N be given. Then the true error after one iteration step is x̂_{n+1} − x_{n+1}. Since in reality not f_µ(x̂_n) is calculated but some erroneous approximation f̂_µ(x̂_n), the true error can be written as x̂_{n+1} − x_{n+1} = (f_µ(x̂_n) − f_µ(x_n)) + (f̂_µ(x̂_n) − f_µ(x̂_n)). (1) Hence, the true error can be written as a sum of two terms. The first term describes solely the error propagation, while the second term gives exactly the newly produced error due to the approximate calculation of f_µ.

To handle the exact values of both errors computationally, interval arithmetic can be used [1]. Interval arithmetic can be seen in the setting here as a special case of the computational model of TTE [16], which gives a precise notion for describing computations over the real numbers. Another strongly related model, which in some sense reflects the situation here more adequately, is the Feasible Real RAM model [4]. For the sake of simplicity, however, an interval setting is used here. For any time step n, let the phase point x_n together with its error be represented by two floating point numbers x_n^l and x_n^u (x_n^u ≥ x_n^l) with given mantissa length m_n, forming an interval [x_n^l, x_n^u]. The interval is an enclosure of the real value x_n, that is x_n^l ≤ x_n ≤ x_n^u. It is straightforward to transform the interval to a floating point value x̂_n of mantissa length m_n by setting x̂_n := gl((x_n^l + x_n^u)/2), (2) where gl(.) performs the rounding to the nearest floating point number. The absolute error e_n := |x̂_n − x_n| of x̂_n can be estimated via the interval length d_n := x_n^u − x_n^l by e_n ≤ d_n/2 + r_n, (3) where r_n is an error introduced by the rounding operation gl(.) in Equation 2. An upper bound on r_n will be discussed later; for now it suffices to say that in general it is small compared to d_n.

For doing an error analysis of the logistic equation analytically, some idealizing assumptions are made. First, the value of µ is assumed to be given with such a high precision that no interval representation is needed. Second, only the error propagation caused by the initial error due to rounding x_0 to some floating point number of mantissa length m is considered. Third, the value of r_n in Equation 3 is neglected.
The recursion relation then reads in natural interval extension [x_{n+1}^l, x_{n+1}^u] = µ [x_n^l, x_n^u] (1 − [x_n^l, x_n^u]), with the interval length d_n given by the recursion relation d_{n+1} = µ d_n (4) and the obvious solution d_n = µ^n d_0. Finally, the absolute error e_n of x̂_n according to Equation 2 can be bounded from above by e_n ≤ µ^n d_0. Note that the above analysis only holds if the natural interval extension for f_µ is derived from the formula µx(1 − x); for other, algebraically equivalent formulations the mathematical analysis is more difficult. However, the problems described in the following also appear, but in some different form.

The aim now is to calculate, for given N ∈ N, p ∈ Z and mantissa length m, the orbit up to time N with relative error 10^−p. That is, for (x̂_n)_{0≤n≤N} it should hold that e_n = |x̂_n − x_n| ≤ 10^−p x_n ≤ 10^−p. (5) The ideal assumptions require the somewhat unreal setting that the mantissa length has to be set to some finite, but big enough, value m for representing x_0 and a virtually infinite value m_∞ for doing the iteration. Finally, some upper bound on d_0 is needed. The value of d_0 is given as the roundoff error of representing x_0 as a floating point number of mantissa length m. For that, the well known estimate d_0 ≤ 2^−m x_0 ≤ 2^−m (6) can be used. Combining (5), (4) and (6) gives as a sufficient condition µ^n · 2^−m ≤ 10^−p for n = 0, . . . , N. (7) The minimal m fulfilling the precision requirement (5) on the relative error of x_n, which depends on x_0, N and p, is denoted by m_min(x_0, N, p). So, the sufficient condition (7) gives an upper bound m_min(x_0, N, p) ≤ ⌈N · max(0, ld(µ)) + p · ld(10)⌉, (8) where ld(.) is the logarithm to base 2.

At that stage, a central quantity of this work is introduced, which is a kind of complexity measure. The loss of significance rate σ(x, p), which may depend on the initial value x = x_0 and the precision p, is defined by σ(x, p) := lim_{N→∞} m_min(x, N, p)/N. This quantity describes the limiting amount of significant mantissa length being lost at each iteration step. Significant means here the part of the places being exact. A general treatment of this complexity measure is given in the next section. Roughly speaking, ⌈σ(x_0, p)N + p · ld(10)⌉ is the mantissa length for any floating point number needed in an algorithm doing the iteration starting with x_0 and calculating up to x_N, if the output should be precise up to p decimal places. Formula 8 gives an upper bound for the loss of significance rate of σ(x, p) ≤ max(0, ld(µ)).

It is interesting to see whether the upper bound calculated analytically, which needed strong idealizations, is in the region of the real value. So, the logistic equation was implemented using a multiple-precision interval library. For that purpose, the interval library MPFI [15], based on the multiple-precision floating point number library MPFR [8], both written in C, was used. For each control parameter µ ranging from 0.005 to 4 with a step size of 0.005, the orbit for initial condition 0.22 was calculated up to N = 2000. For each µ, the minimum mantissa length m_min needed to guarantee e_n ≤ 10^−6 x_n for n = 0, . . . , N was searched. Then, σ_est := m_min/N was calculated. The result shows that σ_est exceeds the analytical bound max(0, ld(µ)) only slightly. So, the ideal assumptions made above seem to be valid. In [12], the logistic equation was also investigated for µ = 3.75 using the exact real arithmetic package iRRAM based on the Feasible Real RAM model [4]. In that paper, the precision needed to guarantee the exactness of the first 6 decimal places is also reported, up to N = 100000. The values are in full agreement with the simulation results performed here. Hence, for µ > 1, the interval length d_n increases exponentially in time n.
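As a quick plausibility check of the sufficient condition above, the following sketch (illustrative only; it merely evaluates the idealized bound and does not perform the MPFI/MPFR interval iteration) computes the mantissa length suggested by µ^N · 2^−m ≤ 10^−p together with the analytical bound max(0, ld(µ)) on the loss of significance rate.

import math

def sufficient_mantissa(mu, N, p):
    """Mantissa length m (bits) satisfying mu**N * 2**-m <= 10**-p for n = 0..N."""
    return math.ceil(max(0.0, N * math.log2(mu)) + p * math.log2(10))

def sigma_upper_bound(mu):
    """Analytical upper bound max(0, ld(mu)) on the loss of significance rate."""
    return max(0.0, math.log2(mu))

m = sufficient_mantissa(3.75, 2000, 6)
print(m, m / 2000, sigma_upper_bound(3.75))   # about 3834 bits, roughly 1.92 bits per step

Dividing the resulting mantissa length by N reproduces, up to the small p-dependent offset, the analytical rate max(0, ld(µ)) quoted above.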
This result should be interpreted in terms of the dynamical behavior of the logistic equation, so at this point it is worth having an analytical look at the behavior of the dynamical system. Despite the fact that these results are well known [9,7], they are reviewed here for the sake of self-containment. First, the equation possesses in the range D = [0, 1] exactly one fixed point x_o = 0 if µ ∈ (0, 1] and exactly two fixed points, x_o = 0 and x*(µ) = 1 − 1/µ, if µ ∈ (1, 4], and a bifurcation occurs at that value, µ = 1, of the control parameter. If µ ∈ (1, 3), x_o becomes unstable and the newly occurring fixed point x*(µ) is stable. Finally, lim_{n→∞} f_µ^n(x) = x*(µ) for µ ∈ (1, 3) and lim_{n→∞} f_µ^n(x) = x_o if µ ≤ 1 holds for all x ∈ (0, 1). If µ ∈ (0, 1), this is a direct consequence of the contraction mapping principle. If µ = 1, observe that f_1(x) < x holds for all x ∈ (0, 1). Hence, any sequence (f_1^n(x))_n, x ∈ (0, 1), is strictly decreasing and bounded from below, so it converges to the only fixed point x_o. For the case µ ∈ (1, 3), the interested reader is referred to the literature: [7], Proposition 5.3. At µ = 3 a second bifurcation occurs and for µ > 3 the system goes into a region of periodic behavior with period doubling bifurcations. Finally, for some µ < 4, chaotic behavior is reached. This analysis shows that in the parameter range µ ∈ (0, 3), the orbit converges to the stable fixed point for any initial value x_0 ∈ (0, 1). Furthermore, there exists some closed interval I ⊆ D, which depends on µ, containing the stable fixed point such that f_µ(I) ⊆ I holds and f_µ is a contraction on I.

The interval computation using a natural interval extension of the recursion formula µx(1 − x), on the other hand, is not very compatible with this picture. While for µ ∈ (0, 1) the results are in agreement with the dynamical analysis, the calculations for µ ∈ (1, 3) are not handled very well by interval arithmetic, since the interval approach would suggest an exponential divergence of initially nearby orbits which is not true in reality. The reason is that the natural interval approach implicitly, due to the dependency problem, takes account only of the global behavior of f_µ in the form of the global Lipschitz constant max{|f'_µ(x)| : x ∈ D} = µ. However, the local Lipschitz constant max{|f'_µ(x)| : x ∈ [x_n^l, x_n^u]} governs the real error propagation at time step n and also describes the dynamic behavior. This notion can be made precise and finally leads to a more efficient algorithm for computing orbits.

Let us return to Equation 1. The true error is the sum of the error propagation (first term) according to the iteration and the roundoff error due to the computation of f_µ (second term). The first term of Equation 1 can be handled using the mean value theorem: |f_µ(x̂_n) − f_µ(x_n)| = |f'_µ(y_n)| · |x̂_n − x_n| with y_n ∈ [x̂_n − e_n, x̂_n + e_n]. This gives directly the bound |f_µ(x̂_n) − f_µ(x_n)| ≤ L(x̂_n, e_n) · e_n, where L(x̂, e) := max{|f'_µ(y)| : y ∈ [x̂ − e, x̂ + e]}. The second term can be estimated the following way. As discussed in [17], the roundoff error produced in calculating f_µ can be estimated by |f̂_µ(x̂) − f_µ(x̂)| ≤ 1.06 · K · 2^−m · |f_µ(x̂)|, where K is the number of rounding operations performed in computing f̂_µ and m is the mantissa length of x̂. In the case considered here, K = 4 follows since there are 3 arithmetic operations and the rounding of µ. It is further crucial to mention that the factor 1.06 is only valid if K ≤ 0.1 · 2^m holds, so that the mantissa length must not be chosen too small. Using the facts that f_µ(x) ≤ µ/4 holds and that f_µ(x) < x if µ ≤ 1, the unknown value |f_µ(x̂)| can be eliminated.
This calculation shows that there exists a recursive equation for an upper bound ē_n on e_n for all n: ē_{n+1} = L(x̂_n, ē_n) · ē_n + 1.06 · K · 2^−m · µ/4. (9) The idea is now not to calculate intervals, but pairs of values x̂_n and corresponding guaranteed error bounds ē_n. The difference to the interval concept is not to compute the errors implicitly, so that only global behavior can be taken into account, but to compute them explicitly and independently of the values of interest. It should be mentioned that the approach described here is compatible with an interval approach using special centered forms, namely mean value forms [14]. However, the approach here explicitly provides values and errors and describes an automated error analysis, whereas an interval approach primarily does not disclose any error. Furthermore, the iRRAM package also permits a more elaborate way of computing the iteration, based on a similar algorithm as described above [13]. The rounded values x̂_n are calculated as usual in floating point arithmetic, except that multiple-precision floats are used. The guaranteed error bounds are also calculated using floating point according to (9), where interval arithmetic is used for calculating L. Only standard precision is needed for calculating the error bounds.

Implementing this improved algorithm using MPFR and MPFI, the setting as given in the interval case produces the following result. In the parameter range µ ∈ (0, 3), the dynamic behavior is reflected very well. Furthermore, in the range µ ∈ [3, 4], the curve suggests a relation between the loss of significance rate and the Ljapunow exponent λ(x) for the logistic map (for a curve of the Ljapunow exponent of the logistic map see [5]): σ(x) = max(0, λ(x))/ln(2) if the limit exists. Here the Ljapunow exponent is the Birkhoff average λ(x) = lim_{n→∞} (1/n) Σ_{i=0}^{n−1} ln |f'(x_i)| along the orbit starting at x (Equation 10); it may depend on x. However, the following properties hold: (a) If (D, f) has an invariant measure ρ, then the limit in Equation 10 exists ρ-almost everywhere. These properties are a direct consequence of the Birkhoff ergodic theorem, see [11], Theorem 4.1.2 and Corollary 4.1.9.

The General Algorithm and its Complexity

Let D be a compact real interval and f : D → D a self mapping. In the following, f is assumed to be continuous on D and continuously differentiable on the interior of D, and f' is assumed to be bounded. Furthermore, f and f' are assumed to be computationally feasible. The precise definition of "computationally feasible" is given below. In this section, a general algorithm for computing the iteration is presented. To be more precise, for given N ∈ N and p ∈ Z, this algorithm computes a finite part of the orbit, (x̂_n)_{0≤n≤N}, exact in the sense that the relative error at each point x̂_n does not exceed 10^−p. The correctness of the algorithm and its computational feasibility are shown. Finally, its complexity is examined.

Syntax, Semantics and the Algorithm

The set of all computationally accessible real numbers is the set of floating point numbers of arbitrary mantissa length, denoted by R̂. In the following, by a floating point number any real number is meant which can be expressed exactly in normalized scientific notation with a finite mantissa. Hence, the set R̂ ⊆ R of all floating point numbers is countably infinite and therefore a natural basis for standard computability considerations. Let x̂ ∈ R̂ be some floating point number; then x̂ has as an essential property its mantissa length, denoted by x̂.m. Here, a partial function f̂ : R̂ → R̂ is called computable if f̂ is computable as a string function over some finite alphabet where the floating point numbers are interpreted as finite strings.
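Before continuing with the formal development, here is a minimal sketch of the value-plus-error-bound iteration of Equation 9 for the logistic map, together with a Birkhoff-average estimate of the Ljapunow exponent. It is illustrative only: plain Python floats stand in for multiple-precision numbers, the local Lipschitz bound L uses the endpoints of [x̂ − ē, x̂ + ē] (valid because f'_µ is linear), and the roundoff term follows the 1.06 · K · 2^−m · µ/4 estimate with K = 4.

import math

def local_lipschitz(mu, x, e):
    """Upper bound on |f_mu'(y)| = |mu*(1 - 2y)| over [x - e, x + e] (f' is linear)."""
    return mu * max(abs(1 - 2 * (x - e)), abs(1 - 2 * (x + e)))

def iterate_with_error(mu, x0, n, m):
    """Carry the value together with a guaranteed absolute error bound.
    m is the assumed mantissa length in bits; in a real implementation the
    value itself would be a multiple-precision float of that length."""
    x = x0
    e = 2.0 ** (-m)                              # initial rounding error of x0
    rnd = 1.06 * 4 * 2.0 ** (-m) * mu / 4.0      # per-step roundoff bound (K = 4)
    log_sum = 0.0
    for _ in range(n):
        e = local_lipschitz(mu, x, e) * e + rnd  # error-bound recursion, cf. (9)
        log_sum += math.log(abs(mu * (1 - 2 * x)))
        x = mu * x * (1 - x)
    return x, e, log_sum / n                     # value, error bound, Ljapunow estimate

x, e, lam = iterate_with_error(3.75, 0.22, n=500, m=500)
print(e, lam / math.log(2))   # final error bound and loss rate in bits per step

The estimated rate λ/ln(2) is markedly smaller than the global bound ld(µ), which is precisely the practical advantage of tracking local error bounds instead of naive interval widths.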
Finally, computability over integers, computability of functions with mixed arguments and computable predicates are defined in a standard way. The algorithm with the above described specification reads Finally a remark on optimization. The algorithm is not optimized in the performance. Otherwise, in Line 10 something like m ← 2m should be used. Here, the aim is to find the minimal m to guarantee some given upper bound on the relative error of x n . Feasibility and Correctness It is clear, that the rounding function gl is computationally feasible. So lets begin with the predicate prec. Note that the definition of the predicate this way also gives true in the singular case wherex = 0 and e = 0 and hence x = 0. An algorithm for computingf is by assumption possible. To derive an algorithm for computing er f on the absolute error, return to Equations 1 and 9. Proposition 3.2. Assume thatf (x) computes f (x) up to a correctly rounded last bit in mantissa according to rounding convention. Then there exists a constant K > 0 such that the absolute error of f (x) of the computation [ f ]([x]) is bounded from above by x + e])|) and m is the mantissa length ofx:x.m. Furthermore, this bound is computable. Proof. Using Equation 1 and following the calculations leading to equation 9, follows. Since an upper bound on L(x, e) can be computed using global optimization techniques, e.g. with interval arithmetic, the above described bound is computable. To summarize, the mathematical iteration (11) is performed in the algorithm by iterating a valuex n approximating x n with an upper bound on its absolute error e n according tô where L(x n , e n ) is computable upper bound on L(x n , e n ) as described in the preceding proposition. This is Line 9 in the inner for-loop of the algorithm which is executed with successively increasing mantissa length m, controlled by the outer do-while-loop. Finally, it has to be shown that this outer loop eventually terminates. Therefore, two more propositions are needed. Hence, for n fixed, lim m→∞ ([x n ] m ).err = 0 follows. These two propositions finish the correctness proof of the algorithm. They show that, if x n = 0 for n = 0, . . . , N, the outer loop eventually terminates for any p ∈ Z. Computational Complexity After having presented the preliminary work, the main issue of the paper is addressed -the computational complexity of the presented algorithm. The complexity measure of interest here is the loss of significance rate already introduced informally in the last section. Here is the formal definition. Definition 3.1. The minimal mantissa length, for which the described algorithm eventually halts is denoted by m min (x 0 , N, p), where x 0 , N and p are the corresponding input parameters. Then, the loss of significance rate σ :R ∩ D × Z → R is defined by However, to achieve bounds on the loss of significance rate, a technical difficulty has to be circumvented. Therefore, one more assumption on the dynamical system (D, f ), additional to the ones already mentioned in the beginning of this section, has to be made. It was already seen in the last subsection that x n = 0 makes difficulties such that it cannot be proven that the algorithm eventually halts. However, the restriction 0 ∈ D is no loss of generality. If all other conditions are fulfilled except that D contains zero, consider the The treatment has now come to a stage that the main statements of this paper can be formulated. A lower and an upper bound for the loss of significance rate is given. 
Furthermore, these bounds are strongly related to the Ljapunow exponent λ(x) defined in the previous section. A necessary condition for the algorithm to terminate is therefore a corresponding lower bound on the mantissa length. Following the definitions of the loss of significance rate and the Ljapunow exponent, σ(x_0, p) ≥ λ(x_0)/ln(2) follows. The proof is similar to the proof of Theorem 3.1 and can be found in the full version of this article [18].

Conclusions

In this paper, two main issues are addressed. First, it is shown that a mathematically precise treatment of multiple-precision floating point computability is not hard to do. Furthermore, this treatment is done in a manner which is familiar to people working in the field of numerical analysis or scientific computing, as well as to theoretical computer scientists. Moreover, the formalism does not only allow exact answers concerning the existence of a computationally feasible algorithm, but also allows a treatment of its complexity. As a consequence, the described algorithm is formulated not only in an exact and guaranteed way, but also enables a motivated reader to produce a real implementation and gives a practical performance analysis. Second, the results show that the Ljapunow exponent, a central quantity in dynamical systems theory, also finds its way into complexity theory, a branch of theoretical computer science. In dynamical systems theory, the Ljapunow exponent describes the rate of divergence of initially infinitesimally nearby points. For two points having a small but finite initial separation, the Ljapunow exponent is relevant only on short time scales [6]. The reason is that, due to the boundedness of D, any two different orbits cannot separate arbitrarily far. However, the loss of significance rate shows that on long time scales the Ljapunow exponent has not only an asymptotic significance but also a concrete practical one.
2010-06-02T07:30:55.000Z
2010-03-31T00:00:00.000
{ "year": 2010, "sha1": "121ba615e5ba7ba96dc10d8e7c3edcc13191ed45", "oa_license": "CCBYNCND", "oa_url": "https://arxiv.org/pdf/1006.0404", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "185f515a51fd21c11c801b1a09c4f311cc86667a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
51784376
pes2o/s2orc
v3-fos-license
An update on the uncertainties of water vapor measurements using Cryogenic Frostpoint Hygrometers . Long time series of observations of essential climate variables in the troposphere and stratosphere are often impacted by inconsistencies in instrumentation and ambiguities in the interpretation of the data. To reduce these problems of long term data series all measurements should include an estimate of their uncertainty and a description of their sources. 10 Here we present an update of the uncertainties for tropospheric and stratospheric water vapor observations using the Cryogenic Frostpoint Hygrometer (CFH). The largest source of measurement uncertainty is the controller stability, which is discussed here in detail. We describe a method to quantify this uncertainty for each profile based on the measurements. We also show the importance of a manufacturer independent ground check, which is an essential tool to continuously monitor the uncertainty introduced by instrument variability. A small bias, which has previously been indicated in lower tropospheric 15 measurements, is described here in detail and has been rectified. Under good conditions the total from all sources of uncertainty of frostpoint or dewpoint measurements using the CFH can be better than 0.2 K. Systematic errors, which are most likely to impact long term climate series are verified to be less than 0.1 K. Implementations of the frostpoint (dewpoint) technique vary significantly and most limitations are a result of the characteristics of the individual implementation (Vömel and Jeannet, 2013). Therefore, not all frostpoint or dewpoint hygrometers are equivalent, and some understanding of the technical realization is needed to properly interpret the reported frostpoint temperature and to be able to estimate the measurement uncertainty. 5 Here we focus on the Cryogenic Frostpoint Hygrometer (CFH), which has been described in detail elsewhere (Vömel et al., 2007a). This instrument uses the chilled-mirror principle, in which a small mirror is exposed to the ambient air and whose temperature is controlled such that liquid water or ice condenses on the mirror. An optical detector senses this condensate and a digital controller regulates the temperature of the mirror in order to maintain a constant reflectivity of the condensate 10 on the mirror surface. To the extent that the reflectivity is constant, the condensate on the mirror is assumed to be in equilibrium with the gas phase. These instruments have been used in a large number of studies of upper tropospheric and stratospheric water vapor (e.g. Vömel et al., 2007b;Hasebe et al., 2007;Fujiwara et al., 2010;Selkirk et al., 2010;Shibata et al., 2010). Although they are 15 recognized by many as reference instruments, we refer to the rigorous definition of what constitutes a reference observation given recently by Immler et al. (2010). This paper defines the measurement philosophy of the GCOS Reference Upper Air Network (GRUAN), which aims at providing traceable and well characterized observations of essential climate variables within the troposphere and stratosphere, which may also be applied to other types of atmospheric observations. 20 The present paper discusses the measurement uncertainties of the CFH within the framework laid out by Immler et al. (2010). The work leading up to this paper has resulted in some instrument improvements over the work presented by Vömel et al. (2007a) which are discussed here. 
The sequence of processing and data quality control steps from raw data to final data product is described in Appendix A. 25 Throughout the paper uncertainties are expressed as one standard deviation, following the recommendations given by the Guide to the expression of uncertainty in measurement (JCGM/WG1, 2008). Mirror temperature controller Frostpoint instruments do not measure instantaneous water vapor concentrations, but rather average frostpoint temperatures 30 over (short) time periods, during which the mean mirror temperature can be assumed to be in near equilibrium with the condensed phase. Typically a proportional-integral-derivative (PID) controller is used to actively regulate the bulk Atmos. Meas. Tech. Discuss., doi:10.5194/amt-2016-44, 2016 Manuscript under review for journal Atmos. Meas. Tech. Published: 8 March 2016 c Author(s) 2016. CC-BY 3.0 License. reflectivity of the condensate layer, because the stability of the condensate layer is the prerequisite of equilibrium. Mirror temperature readings are taken at frequencies higher than the equilibration time, i.e. typically once per second or faster, but individual temperature readings are a poor measure for the equilibrium temperature between gas phase and condensed phase on the mirror. Figure 1 shows one-second resolution raw data from two instruments on the same balloon launched at Lindenberg, Germany. This dual sounding was used to test different controller settings in the second instrument and 5 demonstrates the impact of controller settings on the stability of the controller. To interpret these data properly, the dynamic response of the instruments must be considered and appropriate averaging intervals should be used to smooth out the controller oscillations. For high quality instruments, the averaging intervals will be short compared to the rate of change of ambient water vapor. 10 Balloon borne instruments encounter a large range of water vapor concentrations, which may span up to 5 orders of magnitude in partial pressure between the surface and the stratosphere and changes over two orders of magnitude may happen within a few tens of seconds in some soundings. Therefore, instruments need to respond quickly. A slow response may lead to delays in the detection of sharp atmospheric layers and to an underestimation of changes in atmospheric water vapor. Slow-responding instruments may also be susceptible to measurement artifacts and external disturbances in the 15 measurement, which may be impossible to distinguish from true atmospheric water vapor signals. On the other hand, fastresponding instruments tend to show oscillatory behavior, which complicates the interpretation of the raw signals (see e.g. Figure 1). In laboratory settings, mirror temperature oscillations may be as small as a few millikelvin, while under atmospheric conditions oscillations in poorly behaving instrument may be as large as a few kelvin. 20 The response of a PID controller to changes in its process variable, here the mirror reflectivity, depends on the choice of coefficients, which control the strength of the individual feedback process. Choosing the proper coefficients is called tuning and is the most difficult part of PID controller implementation. The tuning of the CFH is in favor of fast response, which may lead to oscillations that are sometimes slightly larger than desirable, in particular in the presence of sunlight. 
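The reflectivity regulation just described is, as stated above, typically realized as a PID feedback loop. The following generic sketch (purely illustrative; the gains, update rate and variable names are assumptions and not the CFH firmware) shows the textbook discrete PID update whose output would drive the mirror heating or cooling so that the detected reflectivity tracks its setpoint.

class PID:
    """Textbook discrete PID controller for a generic setpoint-tracking loop."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example with made-up gains: hold the normalized reflectivity near 0.8.
controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
drive = controller.update(setpoint=0.8, measured=0.75)

Choosing the three gains is exactly the tuning trade-off described above: larger gains give a faster response at the cost of stronger oscillations.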
On the other hand, the presence of some oscillations indicates that the instrument responds quickly to changes in water vapor. These 25 oscillations may be smoothed out using appropriate filters, as described below. Instruments showing no oscillations are likely to suffer from slow controller response in the detection of atmospheric water vapor. If no other measurements are available, it may be difficult to quantify this lag. Therefore, the presence of slight oscillations is often preferred. 30 The uncertainty due to controller stability has been considered the largest source of measurement uncertainty in balloon borne frostpoint hygrometers (Vömel et al., 2007a) and was estimated at 0.5 K based on experience. In this section we revisit this subject and quantify the controller stability based on measurements. The quantified controller stability is one of the Atmos. Meas. Tech. Discuss., doi:10.5194/amt-2016-44, 2016 Manuscript under review for journal Atmos. Meas. Tech. To quantify the controller stability we use the amplitude of frostpoint oscillations around the mean frostpoint temperature over a time interval. To estimate the oscillation amplitude, the frostpoint profile is first smoothed using a Gaussian filter. The 5 width of the Gaussian filter varies with altitude and depends on the ambient water vapor partial pressure and the performance of the instrument: where the filter coefficients (2) 10 and Here χ j is the original frostpoint temperature or mixing ratio measurement at time-step j and χ ̅ i is the value of the filtered time series at time-step i. τ i is the width of the Gaussian kernel at time-step t i and provides a measure for the time interval 15 over which the data are being smoothed. τ i varies with frostpoint temperature and is provided as part of the data. The amplitude of the oscillations which have been smoothed out is a direct measure of the residual uncertainty of the smoothed profile and a quantification of the controller-induced measurement uncertainty. This uncertainty is expressed by a weighted standard error of the mean (Gatz andSmith, 1995 andEndlich et al., 1986). Due to the action of the PID controller and the thermal lag of the mirror and condensate dynamics frostpoint observations cannot be considered randomly distributed and show some degree of autocorrelation. This autocorrelation leads to an underestimate of the uncertainty using Equation 4. For simplicity we assume a first order auto-regressive model, for which 25 the underestimation due to the autoregression can be expressed as The autoregression parameter ϱ can be estimated by the time lag λ of the instrument as ϱ = e − 1 λ and depends on the detector 30 characteristics, the PID settings as well as the frost coverage. Comparisons with the Fast In situ Stratospheric Hygrometer Atmos. Meas. Tech. Discuss., doi:10.5194/amt-2016-44, 2016 Manuscript under review for journal Atmos. Meas. Tech. (FISH) during the AquaVIT-1 laboratory intercomparison (Fahey et al., 2014) indicate that the time response of the CFH at very low frost point temperatures is faster than 20 s. Here we use soundings with pronounced stratospheric features to study time lag in the CFH by comparing ascent and descent profiles. A profile at Lindenberg launched on 20 May 2014 showing two prominent features in the stratosphere at 22 km and 23.5 km was sampled both on ascent and descent ( Figure 2). 
The time difference between the sampling of these features on ascent and descent is small (about 10 min), since the balloon burst 5 at 24 km altitude. Furthermore, the wind shear in this altitude region was small, so that nearly the same air mass was sampled both on ascent and descent. The parachute opened immediately after balloon burst and slowed the instrument to less than 25 m/s. Time lag impacts measurements both on ascent and descent; however, due to the faster velocity this time lag is more noticeable on parachute descent. Applying a time lag correction following Miloshevich et al. (2004) to the entire profile data using a stratospheric time constant of λ = 10 s brings the features during ascent and descent into agreement. The 10 same result was obtained using data from another sounding launched on 22 June 2011 at Lindenberg (not shown). Therefore, we may use this stratospheric time lag to estimate the autocorrelation coefficient and the frostpoint uncertainty given in Equation 5. Note that in the troposphere the time lag is estimated to be less, although we currently don't have a good way to quantify it. We estimate it as λ ≤ 1 s near the surface, which is less than the temporal resolution of the measurements. 15 Since the uncertainty estimate expressed in Equation 5 is based on the variance of the data over a particular time period, it represents the uncertainty of χ ̅ i over this time period and not of individual one-second data points. Therefore, uncertainty due to controller response and vertical resolution are directly related. The statistical uncertainty can be further reduced by widening the kernel, however at the expense of a reduced vertical resolution. If further vertical averaging is required it is essential to know over which time period the uncertainty should be applied and how the uncertainties of neighboring layers 20 can be averaged. The width of the averaging kernel is chosen manually for a best compromise between uncertainty and resolution and typically varies from 3 s in the lower troposphere to 30 s in the middle stratosphere. For well-behaved instruments the uncertainty due to controller oscillations may be significantly less than the estimate by Vömel et al. (2007a). However, for 25 fast-changing features in the troposphere and or poorly behaving instruments the uncertainty due to controller stability may be larger. Providing this vertically resolved uncertainty estimate together with the data will help the user in the interpretation of the data. The mixing ratio difference between the smoothed profiles is shown in Figure 4 together with the combined uncertainties which have been added in quadrature. Throughout most of the stratosphere the smoothed profiles agree to within 0.25 ppmv. The combined uncertainty is slightly larger than this, and is dominated by the controller oscillations of instrument 2L2506. This plot shows that the mean difference between these two profiles is significantly better than 5% for most parts of the 5 profile and that the total uncertainty is roughly of that magnitude. Only at 15.5 km is the difference between the profiles significantly larger than 5%, a result of the poorer behavior of instrument 2L2506. However, since the uncertainty of the measurements of this instrument at that altitude is quite large, both instruments are considered in agreement within their combined uncertainty. 
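The smoothing and uncertainty bookkeeping described above can be sketched as follows. This is an illustration under stated assumptions, not the operational CFH processing: a Gaussian kernel of width tau is assumed for the weights, the effective sample size and weighted standard error follow the usual weighted-mean formulas, and the autocorrelation correction uses the standard first-order autoregressive inflation factor with rho = exp(-1/lambda) estimated from the instrument time lag.

import numpy as np

def smoothed_point(t, chi, t0, tau, lag):
    """Smoothed value and uncertainty at time t0 from 1 Hz frostpoint data chi(t).
    tau: Gaussian kernel width (s); lag: instrument time constant (s)."""
    w = np.exp(-0.5 * ((t - t0) / tau) ** 2)          # Gaussian weights
    w /= w.sum()
    mean = np.sum(w * chi)                            # smoothed frostpoint
    var = np.sum(w * (chi - mean) ** 2)               # weighted variance
    n_eff = 1.0 / np.sum(w ** 2)                      # effective number of samples
    sem = np.sqrt(var / n_eff)                        # weighted standard error
    rho = np.exp(-1.0 / lag)                          # first-order autocorrelation
    return mean, sem * np.sqrt((1.0 + rho) / (1.0 - rho))

# Illustrative data: one minute of oscillating stratospheric frostpoint readings.
t = np.arange(60.0)
chi = -80.0 + 0.3 * np.sin(t / 2.0) + 0.05 * np.random.default_rng(0).normal(size=60)
print(smoothed_point(t, chi, t0=30.0, tau=10.0, lag=10.0))

Widening the kernel reduces the statistical uncertainty of the smoothed value but degrades the vertical resolution, which is the trade-off discussed above.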
10 Several of the instruments during the 2010 WMO radiosonde intercomparison campaign at Yangjiang, China, (Nash et al., 2011) suffered from poor controller stability. The data processing of that campaign provided for the first time a simple estimate of the controller stability, which has been refined here. Using the experiences of this campaign the tuning of the PID controller and the noise characteristics of the detector have been significantly improved leading to much better performance throughout the entire troposphere and the stratosphere in almost all conditions. Current instruments achieve controller 15 uncertainty as low as 0.1 K (e.g. Figure 1, left panel). For simplicity the profiles shown here have been limited to the stratospheric part of the sounding, though all arguments given here apply to the tropospheric part of the profile as well. Data files generated with this processing provide in addition to the actual measurement, the vertically resolved kernel width (i.e. vertical resolution) and the vertically resolved uncertainty 20 estimate. Therefore, researchers using CFH measurements should also pay attention to the uncertainty estimate and the vertical resolution. Calibration Frostpoint and dewpoint hygrometers rely on the accurate calibration of the mirror temperature measurement. They are not calibrated against water vapor standards and are considered water vapor standards in themselves. The uncertainty of the 25 frostpoint temperature due to calibration and other instrument specific factors is considered to be small with a total of about 0.1 K (Vömel et al., 2007a). Here we present an update on the long term calibration stability. Five thermistors have been recalibrated in more than 40 different calibration runs, which allows identification of possible drifts in the reference thermometer and calibration setup. Figure 5 shows the calibration deviation of the mean of these thermistors at four select temperatures. Over the entire record, the variation is less than 0.02 K, which for stratospheric water vapor measurements is 30 equivalent to a mixing ratio variation of about 0.4%. Most importantly, there is no significant drift in this calibration record. In addition to the accuracy of the thermistor calibration, the accuracy of the resistance measurement circuitry has been tested for 508 CFH instruments by measuring a high precision 10 kΩ reference resistor with the circuitry of each instrument prior Atmos. Meas. Tech. Discuss., doi: 10.5194/amt-2016-44, 2016 Manuscript under review for journal Atmos. Meas. Tech. to installation into the instrument. This 10 kΩ resistance is equivalent to a temperature of -35.84°C using average instrument calibration coefficients. At this equivalent temperature the instruments showed a negligible bias of -0.006 K +/-0.006 K. The standard deviation shows the repeatability of the resistance measurement among all instruments. The repeatability of the resistance measurements at 100 kΩ (corresponding to a temperature of -77.56°C) is less than 0.005 K and at 1kΩ (22.7°C) it is 0.08 K. At any of these standard resistances that were used to test the CFH measurement electronics, no drift could be 5 detected. Seven dual CFH soundings conducted at Lindenberg, one of which is shown in Figure 1, are investigated in greater detail to verify the total calibration uncertainty. The main goal of these soundings was to test new developments or modifications while at the same time maintaining the consistency of the observations. 
For each dual sounding the differences of the two profiles have been averaged over the entire troposphere, over the entire stratosphere and over the entire profile. Using large 10 layer averages removes all random effects due to controller stability and other random processes in any particular profile. In the first of these dual soundings a manufacturing defect was found in one instrument, which led to a very large difference between the two instruments. This sounding has therefore been removed from the statistics. Within the remaining six dual soundings the standard deviation of the difference is 0.11 K in the troposphere and 0.09 K in the stratosphere. The standard deviation of the difference over the entire profile is 0.08 K. 15 These six dual soundings imply that on average any two randomly-picked instruments are expected to agree with each other to within better than 0.11 K. This difference is only slightly larger than the expected agreement of about 0.1 K and is in part due to the fact that the second instrument in each pair was used to test modifications rather than to test a second identical instrument. Laboratory tests 20 In 2007 an intensive laboratory campaign took place at the Aerosols Interaction and Dynamics in the Atmosphere (AIDA) test chamber at the Karlsruhe Institute of Technology (Fahey et al., 2014). During this campaign stratospheric and upper tropospheric water vapor concentrations were used to test a large number of water vapor instruments. The experiments conducted during the first week of the campaign were averaged to remove any random effect and short term deviations, which is comparable to the stratospheric and tropospheric averages of the dual soundings. The mixing ratios measured by the 25 CFH were within 10% of the campaign reference (approximately within 0.5 K frostpoint temperature) for all static levels during that experiment and within 4% (approximately within 0.2 K frostpoint temperature) for most levels. Although this campaign was not intended as a comparison against an absolute reference, this result may be taken as indication that a calibration uncertainty estimate of better than 0.2 K is a reasonable estimate. The calibration uncertainty term is assumed to be a fixed value over the entire profile. It therefore needs to be added in 30 square to the vertically resolved controller stability. However, this must be done after any vertical or temporal averaging of the controller stability has been done, since this term cannot be reduced by increasing the averaging. The calibration uncertainty is the lower limit for the overall measurement uncertainty that current production technology can achieve. Atmos. Meas. Tech. Discuss., doi:10.5194/amt-2016-44, 2016 Manuscript under review for journal Atmos. Meas. Tech. Published: 8 March 2016 c Author(s) 2016. CC-BY 3.0 License. Miloshevich et al. (2009) indicated that the CFH may have some unexplained bias in the lower troposphere. Furthermore Nash et al. (2011) indicated poor control and possible bias in some instruments. These biases and poor control impacted some instruments and have motivated further instrument improvements. The most important of these is a modification in the production procedure related to installation of the thermistor into the mirror. The location of thermistor in the mirror and its 5 installation has been largely the same since the original design of the NOAA frostpoint hygrometer by Mastenbrook. 
Measurement bias Although the location of the mirror temperature measurement has previously been found not to be critical (Vömel et al., 2007a), we found that small variations in the manufacturing process may have had a noticeable influence on the accuracy of the temperature measurement. To eliminate the possibility of this bias the assembly procedure of the thermistor has been changed. This change was introduced in the production starting with 2L2901. 10 The impact of this change can be illustrated in the frequency of supersaturation over liquid water observed in the lower troposphere. Supersaturation over liquid water is expected to be small in the real atmosphere and measurements showing significant supersaturation over liquid water are most likely related to high biases in the measurement of water vapor or cold biases in the measurement of air temperature. From a set of 1022 available CFH soundings, we extracted the maximum values of relative humidity over liquid (RH) in the lower troposphere measured in each profile. Figure 7 shows the 15 distribution of this peak value grouped by CFH version. Here we used all available data, including profiles that had been flagged as suspicious in the original data processing and ignored in previous scientific analyses. The lower panel shows the distribution of peak values of RH for all instruments up to serial number 2L28xx. This includes all instruments interfaced to the Vaisala RS80 radiosonde, the InterMet iMet-1 radiosonde and the Meisei RS06G radiosonde. The upper panel shows the distribution for all available data from instruments starting with serial number 2L2901. Fifteen percent of instruments from 20 prior to the production change show supersaturation over liquid water larger than 5%, which is a direct indication of a possible bias in these instruments. This bias is the same that had been noted already by Miloshevich et al. (2009). In the top panel, which shows the observations from all instruments after the manufacturing change, only 1.6 % of the soundings show a significant bias. Therefore, we can conclude that the bias that had impacted some instruments in the past has been effectively eliminated in all instruments starting with serial number 2L2901. 25 Suspected bias in the older instruments can be corrected using an empirical model, which depends on measured frostpoint temperature and pressure: where Fp ′ is the corrected frostpoint temperature, Fp is the originally measured frostpoint temperature, P is the ambient pressure, and k is an empirical correction constant for the entire profile, which has to be estimated based on the level of 30 supersaturation observed in the lower troposphere. The change in frostpoint temperature impacts lower tropospheric observations more strongly than upper tropospheric measurements. Figure 8 shows the distribution of the correction as a function of altitude for all soundings, where a bias is suspected. The most common correction in the lower troposphere is Atmos. Meas. Tech. Discuss., doi:10.5194/amt-2016-44, 2016 Manuscript under review for journal Atmos. Meas. Tech. Published: 8 March 2016 c Author(s) 2016. CC-BY 3.0 License. around 0.4 K, dropping to about 0.2 K in the stratosphere. This correction is within the previously published total uncertainty estimate of 0.51 K; however, since this is a systematic error, it will impact large data sets, trend estimates, or comparisons with satellite instruments. 
Although most of the impacted instruments show a bias within the previously published uncertainty estimates, a few instruments show significantly larger biases. All of these were noted in the initial data processing and were either rejected or empirically corrected. The right hand panel of Figure 8 shows the weighted average 5 correction profile, which is strongly influenced by large outliers. The weighted mean correction for these soundings varies between 0.45 K in the lower troposphere and 0.3 Kin the stratosphere. This change is equivalent to a change in mixing ratio of about 3% in the lower troposphere and about 5% in the stratosphere. Since only about 15 % of all instruments are suspected to be impacted by this bias, the impact on studies using a large number of soundings such as satellite comparisons is considered small. 10 Frost layer morphology It has been pointed out in the literature (e.g. Fujiwara et al., 2003;Vömel et al., 2003), that it is essential to know the phase of the condensate on the mirror of a frostpoint hygrometer to be able to convert the mirror temperature to a water vapor partial pressure. Above 0°C it is safe to assume that the condensate is liquid and below a condensate temperature of about -35°C it is safe to assume that it is ice. In the mirror temperature range between 0°C and -35°C the condensate on the mirror 15 may be liquid or ice or mixture of liquid drops and ice crystals. If the phase is not well known, the instrument cannot produce reliable reference data. The CFH addresses this issue by forcing the condensate to freeze at a condensate temperature of -15°C (Vömel et al., 2007a). Above this temperature the condensate is almost always supercooled water. Visual inspection of each profile and the associated engineering data is required to guarantee a unique identification of the condensate phase. 20 Different condensate layer morphologies may introduce unexpected behavior depending on the details of the PID implementation. Figure 10 shows four images of the condensate layer on the CFH mirror which has a diameter of about 7 mm. These images were obtained during the AquaVIT-1 laboratory intercomparison with a specially-built laboratory version of the CFH (see Fahey et al., 2014) that allowed visual observation of the mirror in parallel with the detector. The top left image shows a layer of supercooled liquid water droplets at T = -28.0°C that had been generated purposely. The instrument 25 is able to maintain this condensate layer with good control and therefore measures the dewpoint temperature. This image is typical for dewpoint measurements. The top right image shows a fine frost condensate layer at T = -87.9°C. This is a typical frost layer image for most properly prepared instruments. The morphology of this condensate layer is reasonably stable for extended periods of time. That is, once it has been formed, its structure does not significantly change throughout the time of a typical sounding. The image on the bottom left shows a coarse frost layer with patches of small liquid droplets at T = -30 26.8°C. Although the PID controller is able to maintain a stable reflectivity, the mirror temperature is not a measurement of the amount of water vapor, since both liquid droplets and small ice crystals are present on the mirror at the same time. All liquid droplets must freeze before the mirror temperature represents the frostpoint temperature instead of the dewpoint temperature. The force-freezing algorithm avoids this ambiguity in a sounding instrument. 
The image on the bottom right shows a coarse frost layer at T = -88.9°C. Here the PID controller is unstable and not able to properly control the amount of condensate. This condition may be caused by a number of factors, one of which is the cleanliness of the mirror. Instrument operators normally clean the mirror prior to launch to avoid this issue and to guarantee that the frost layer morphology is consistent between all soundings. Improperly cleaned mirrors may lead to very different condensate layer morphologies and 5 thereby to unexpected instrument behaviors. These conditions give rise to substantially increased uncertainties in the measurements, which however are difficult to quantify. These images highlight the fact that the proper instrument preparation and operation may strongly impact the instrumental uncertainty. Therefore, the instrument preparation and set-up needs to be captured in the metadata to evaluate the quality of the sounding data after the observations have been completed. 10 Contamination and descent observations Water vapor observations on balloon ascent always carry the risk that the measurements are contaminated by the outgassing of water vapor from the balloon envelope, the parachute, the load line or the intake tubes of the instrument. This risk is particularly elevated when the sounding passes through liquid water clouds where supercooled water may freeze on any surface. Most soundings show some degree of contamination in the highest parts of the stratospheric profile. With the design 15 of the CFH contamination is almost never encountered in the troposphere, but in extreme cases may impact the data soon after passing the tropopause. Therefore, all soundings have to be visually screened and contamination has to be flagged manually. Outgassing by the balloon envelope and the parachute can be minimized by using unwinders of at least 50-m length. Selfcontamination by icing inside the inlet tubes cannot be avoided unless heated inlet tubes are being used, which so far has not 20 yet been done on small sounding balloons. However, it can be minimized by using short polished stainless-steel inlet tubes such as those used by the CFH. CFH soundings rely largely on ascent measurements, allowing the use of ascent pressure, temperature and humidity data from the parallel radiosonde, which are considered more reliable during ascent. Trace gas interferences 25 The frostpoint principle is based on the two-phase system of condensed and gas phase water. There have been speculations that the presence of other trace gases may interfere with this basic assumption and change the equilibrium temperature. One of the atmospheric trace gases known to change the condensation temperature of ice in polar stratospheric clouds is nitric acid (Crutzen and Arnold, 1986). Nitric acid may co-condense together with water vapor to form solid nitric acid trihydrate, for which the frostpoint is a few kelvin above the frostpoint of pure water ice. Therefore the possibility exists that nitric acid 30 co-condenses in the condensate layer on the mirror of a frostpoint hygrometer. Laboratory studies with different concentrations of nitric acid have shown that at atmospheric concentrations small amounts of nitric acid are incorporated into Atmos. Meas. Tech. Discuss., doi:10.5194/amt-2016-44, 2016 Manuscript under review for journal Atmos. Meas. Tech. 
the ice layer of a frost point hygrometer; however, these amounts are insufficient to change the equilibrium temperature between ice and gas phase water vapor (Thornberry et al., 2012). Laboratory tests using high concentrations of carbon dioxide also have not shown indication of any significant interference with the ice frostpoint temperature. Therefore, trace gas interferences are currently excluded as systematic error sources. Discussion of uncertainties 5 The uncertainty due to controller stability is considered random and correlated over the time of the smoothing within a single profile. For long time series, this term can be considered uncorrelated as well as random and can be treated accordingly. Here it is assumed that the oscillations are symmetric around the true frost point temperature. However, for strongly oscillating instruments, this may not be the case due to the difference in frost formation and evaporation, in particular at cold temperatures. Therefore, for strongly oscillating instruments the uncertainty due to controller stability may mask a 10 systematic error, which cannot be quantified at this point. However, the statistical uncertainty of these soundings is usually so large that the measurements become less significant and should not be used in time series analyses. Likewise, for instruments with negligible uncertainty due to controller stability, a systematic error due to controller lag may be present which cannot be quantified. For tropospheric and stratospheric observations the CFH is a fast-responding instrument and lag issues are not suspected. 15 For stratospheric observations the agreement of measurements between ascent and descent can be taken as indication that possible systematic errors are less than the difference between the ascent and descent. Observations where large differences exist between ascent and descent data that are not due to a fast descent and not due to contamination may be due to frostlayer morphology issues. These soundings require an increase in the uncertainty estimate, which is at least as large as the difference between the ascent and descent measurements over the affected altitude regions. 20 The uncertainty due to calibration is that of the thermistor calibration and the measurement electronics. Both are considered small. Instrumental and production variability have caused larger deviations in the past and should therefore always be checked prior to launch. A pre-launch check (see following section) is recommended, which should consist of at least a onepoint measurement compared to a known reference after the cryogen has been added to the instrument. Ground check 25 Upper air observations using disposable instrumentation rely heavily on the stability of the manufacturing process, which is usually not in the control of the personnel using the equipment. Even though manufacturers of high-quality equipment pay great attention to the stability of their product, factors outside their control may impact the quality of their sounding equipment. The purpose of a ground check is to provide evidence that each instrument behaves as expected prior to launch and that instrumental issues that may be detectable prior to launch are indeed detected. The ground check procedure is most 30 important for long term observations, where an independent verification of the instrument calibration is essential, even if it Atmos. Meas. Tech. Discuss., doi:10.5194/amt-2016-44, 2016 Manuscript under review for journal Atmos. Meas. Tech. 
cannot be done over the full range of parameters to be measured. It assures that at least a one-point traceability to a reference can be established to verify the long-term stability of the instrumentation. The ground check for the CFH consists of a measurement of relative humidity inside the inlet tube prior to launch using an integrated temperature and polymer humidity sensor (Ahlborn FHAD462). A small propeller generates highly turbulent air motion inside the sensor housing to assure proper ventilation across the mirror. The polymer humidity sensor is regularly 5 recalibrated using a NaCl salt solution at 75.3% and a MgCl salt solution at 32.8% RH which avoids any long term drifts of this sensor. The relative humidity measurement is converted to dewpoint temperature using the air temperature measurement inside the inlet tube which serves as the reference to the CFH dewpoint measurement. This ground check was introduced at Lindenberg in 2014 and has been used in a total of 31 soundings so far. Figure 9 shows the time series of the differences between the CFH dewpoint measurements and the reference measurement 10 inside the tube just prior to launch. The mean dewpoint difference for all 31 prelaunch checks is DP CFH − DP ref ̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅ = 0.02 ± 0.10 K. The scatter of this comparison is due to the uncertainty in the CFH as well as in the reference measurements. Over this time period there is no apparent drift and the CFH agrees with this reference to better than 0.1 K. Under typical launch conditions an uncertainty of 0.1 K in dewpoint temperature corresponds to an uncertainty of about 0.7% in relative humidity, which is comparable to the uncertainty of the recalibration of the RH sensor used in these tests. Therefore, the uncertainty of 15 the CFH water vapor measurements at the surface is likely better than expressed by this standard deviation. Although the data record of ground checks is short, it demonstrates the stability of this test over this time period and the ability to identify potential issues, had any arisen. For long-term climate observations using frequently changing instruments this stability test prior to launch is essential to gain confidence in the data set (Immler et al., 2010). Although we did not start this test when the older instruments described in section 0 were still used, we are confident that the issues discussed there 20 could have been identified much sooner using this test. Other issues may emerge in the future as technology develops and instrumentation changes. Having a continuous set of reference measurements such as this short set will be essential in defending the stability of these measurements. Summary The uncertainties of observations are one of the limiting factors in determining changes in atmospheric composition. 25 Random errors are less likely to impact long-term trends; however, changes in systematic errors may impact long-term trends to the extent of the change. The oscillations around the mean of the frostpoint temperature, which are due to the action of the PID controller, provide a direct measure of the random uncertainty of the frostpoint temperature. It is important to note that the uncertainty refers to the uncertainty of the mean frostpoint temperature and not to the instantaneous measurement. It represents the uncertainty of a mean value in a vertical layer of the atmosphere. The extent of this vertical layer is provided 30 by the time window that was used to smooth the data and is provided in the data files. Atmos. Meas. Tech. 
This calibration is currently stable within 0.02 K. Parallel observations of two CFH instruments on the same balloon have so far verified an agreement to within 0.11 K throughout the entire profile, which is in part due to the fact that the second instrument has been used to test new modifications. A manufacturer-independent ground check was introduced in 2014. This ground check demonstrates that the instruments behave prior to launch as expected and that the systematic error is less than 0.1 K. This systematic error limit of 0.1 K is currently the limiting uncertainty term for long-term data series. A systematic bias, which had been noted in previous studies of lower tropospheric water vapor measurements, has been identified in some instruments and was traced back to manufacturing variability. A change in the production process has eliminated this variability and thereby effectively removed this bias from the current production starting with serial number 2901. Even though the bias is smaller than the previously stated uncertainty, a correction algorithm can correct all affected profiles. Since this is a systematic error, it should be removed from instruments suspected to be impacted. This bias is another example of the need for a manufacturer-independent ground check in reference observations using disposable sounding instrumentation. This experience highlights the need for such checks in all other sounding observations if these are to be used for long-term climate observations.
Appendix: Data processing, filtering, smoothing
Raw data are transmitted once per second. The raw data consist of the mirror temperature, the reflectivity signal from the optical detector, the battery voltage and the detector temperature. Other housekeeping data may be transmitted as well, but are not considered essential. The CFH is programmed to perform one clearing pulse and one freezing pulse, during which the condensate on the mirror freezes. These pulses last several seconds and their disturbance of the PID controller may impact the frost point measurement for a few tens of seconds. These data are flagged as bad data and should not be used in further processing. The forced freezing is then examined more closely and a comparison with the parallel radiosonde verifies that the condensate changes phase during this freezing cycle. A condensate phase flag is set to indicate whether the condensate on the mirror is liquid or ice. This distinction is essential since the partial pressure of water is calculated using the measured mirror temperature and the vapor pressure equation corresponding to the condensate phase. During winter, or in conditions where the dewpoint temperature at the surface is significantly below 0°C, the condensate may already be frozen prior to launch, in which case the freezing cycle does not lead to a change in the condensate phase. These situations are visually identified and properly flagged. The stratospheric water vapor profile is examined and the region is identified where contamination from outgassing is suspected. These data are flagged and must be ignored in future processing. If descent data are being used, then data during which the PID controller recovers after the beginning of the descent have to be flagged manually as well.
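The flagging of clearing and freezing pulses described in this appendix can be pictured as masking each pulse together with a fixed PID recovery window. A minimal sketch follows; the 60 s window and the way pulse times are supplied are assumptions for illustration, not the operational procedure.

```python
import numpy as np

def flag_pulse_recovery(pulse_sample_indices, n_samples, recovery_seconds=60):
    """Return a boolean mask (True = bad) covering each pulse and the
    assumed recovery window of the PID controller after it.

    Data are transmitted once per second, so sample index == seconds.
    """
    bad = np.zeros(n_samples, dtype=bool)
    for i in pulse_sample_indices:
        bad[i : min(i + recovery_seconds, n_samples)] = True
    return bad

# Example: a 3000-s flight record with a clearing pulse at t = 120 s
# and a forced freezing pulse at t = 400 s (hypothetical times).
mask = flag_pulse_recovery([120, 400], n_samples=3000, recovery_seconds=60)
print(mask.sum(), "samples flagged as bad")
```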
For both contamination and PID recovery, experience is used as a guide. Generally data are flagged conservatively to avoid misinterpretation of low-quality data. The data are then screened for unexpected issues. Although rare, data problems may arise from unusual balloon behavior, interference from other instruments or unknown instrumental issues, and any suspect data are flagged. All data are smoothed using the low-pass filtering described in Section 2.1. The smoothing algorithm also provides the uncertainty estimate for the controller stability as well as the time resolution for the uncertainty estimate. Layer averages can be calculated using only ascent data. Data from the parallel radiosonde and possibly ozone sonde are valid only on ascent and uncertainty estimates for these measurements can only be provided for ascent measurements.
Figure caption (fragment): ... using the CFH instrument through serial number 2L28xx. The upper panel shows the current instrument series, which started with 2L29xx. Peak values above 100% are an indication of a bias either in the temperature measurement or in the water vapor measurement. While some significant measurements of supersaturation were taken using instruments through 2L28xx, no significant supersaturation has been observed with instruments starting with 2L29xx.
Figure 3 caption (fragment): Also shown is the combined uncertainty. To provide a guide for the relative difference, a 5% relative difference in mixing ratio is indicated by the thin dotted lines.
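A minimal sketch of how a smoothing step can also deliver the controller-stability uncertainty is given below. It uses a simple running mean as a stand-in for the actual low-pass filter of Section 2.1, and the 30 s window and synthetic profile are illustrative assumptions.

```python
import numpy as np

def smooth_with_uncertainty(frostpoint_k, window=30):
    """Running mean of the 1-s frostpoint record plus the standard error of
    the mean in each window, used as the controller-stability uncertainty."""
    kernel = np.ones(window) / window
    mean = np.convolve(frostpoint_k, kernel, mode="same")
    var = np.convolve(frostpoint_k**2, kernel, mode="same") - mean**2
    sem = np.sqrt(np.clip(var, 0.0, None) / window)
    return mean, sem

# Synthetic example: a slowly drifting frostpoint with PID oscillations.
t = np.arange(0, 600)                        # seconds
fp = 250.0 - 0.01 * t + 0.2 * np.sin(t / 5)  # Kelvin
fp_smooth, fp_unc = smooth_with_uncertainty(fp, window=30)
print(f"typical uncertainty of the layer mean: {fp_unc.mean():.3f} K")
```

The window length used here also sets the vertical extent of the layer to which the uncertainty refers, as noted in the Summary.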
2018-07-21T02:42:54.463Z
2016-08-16T00:00:00.000
{ "year": 2016, "sha1": "12cce932b08f45bc4749ff54632455fae7cfe035", "oa_license": "CCBY", "oa_url": "https://www.atmos-meas-tech.net/9/3755/2016/amt-9-3755-2016.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d332631a1bef484cdb72cc4f0191e9ce714dfd4c", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
244402050
pes2o/s2orc
v3-fos-license
The effect and safety of ropinirole in the treatment of Parkinson disease Abstract Background: It is necessary to conduct a meta-analysis of the clinical randomized controlled trials (RCTs) on ropinirole in the treatment of Parkinson disease (PD), to explore the effects and safety of ropinirole, and to provide a theoretical basis for clinically safe and rational drug use. Methods: RCTs on the effectiveness and safety of ropinirole in the treatment of PD were searched. We searched Dutch medical literature database, Pubmed, Cochrane Library, China National Knowledge Infrastructure, Wanfang Knowledge Service Platform up to December 15, 2020. The Cochrane risk bias assessment tool was used to evaluate the quality of the included literature, and the RevMan5.3 software was used for meta-analysis. Results: A total of 12 RCTs with 3341 patients were included. The changes of Parkinson Disease Rating Scale Part II score (mean difference = –2.23, 95% confidence interval [CI] –2.82 to –1.64) and Parkinson Disease Rating Scale Part III scores (mean difference = –4.93, 95%CI –5.25 to –4.61) in the ropinirole group was significantly lower than that in the control group. The incidence of dizziness (odd risk [OR] = 1.85, 95%CI 1.50–2.28), nausea (OR = 2.17, 95%CI 1.81–2.59), vomiting (OR = 2.73, 95%CI 1.47–5.09), and lethargy (OR = 2.19, 95%CI 1.39–3.44) in the ropinirole group was significantly higher than that in the control group (all P < .05), and there were no significant differences in the incidence of headache (OR = 1.14, 95%CI 0.79–1.65) and insomnia (OR = 1.06, 95%CI 0.72–1.55) were found between 2 groups (all P > .05). Conclusions: Ropinirole can help improve the ability of daily living and exercise function of PD patients, but it will increase the incidence of related adverse reactions, which needs to be further confirmed by subsequent large-scale, high-quality RCTs. Introduction Parkinson disease (PD) is a common neurodegenerative disease in middle-aged and elderly people. [1] Its symptoms include typical motor symptoms and non-motor symptoms. At present, early and mid-term PD is still dominated by drug therapy. [2] Dopamine agonist (DA) has been widely used in the early monotherapy of PD and the combination therapy with levodopa in the middle and late stages. [3] Although there is no more recognized evidence that one type of DA is better than another type of DA, ergot DA is no longer used as the first-line treatment for PD due to its fibrotic side effects. [4] However, nonergot DA continues to be used as the first-line treatment for PD. [5] At present, new long-acting non-ergot DA preparations such as ropinirole have been developed, and their effectiveness and safety have been extensively studied to guide the clinical drug use and treatment of PD. In the past, dopamine receptor agonists were mostly partial agonists of the receptor. At present, the non-ergot receptor agonists pramipexole and piribedil hydrochloride are widely used at home and abroad. [6] Ropinirole as a new generation of nonergot alkaloid selective dopamine D2/D3 receptor agonists, it was first marketed in the UK in 1996 and was approved by the Food and Drug Administration for PD treatment in 1998. It has a unique pharmacological effect and a long half-life, it can last for a long time on dopamine receptors, and it is beneficial to reduce the number of medications and drug dosage. [7] Understanding the effectiveness and safety of ropinirole in the treatment of PD has important guiding significance for clinical medication. 
Previous studies [8,9] have focused on the role of ropinirole in PD, yet the results remained inconsistent. Therefore, we aimed to conduct a meta-analysis to investigate the effect and safety of ropinirole in the treatment of PD, to provide insights into the clinical treatment of PD.
Ethical consideration
Ethical approval and patient informed consent were not necessary since our study was a meta-analysis and systematic review.
Literature search
We searched the Dutch medical literature database (Embase), the U.S. National Library of Medicine Medical Literature Retrieval System (Pubmed), the Cochrane Library, China National Knowledge Infrastructure (CNKI), and the Wanfang Knowledge Service Platform for studies on the effect and safety of ropinirole in the treatment of PD. At the same time, we manually searched related documents and references. The search deadline was December 15, 2020. The database search term used was: ("Ropinirole" OR "non-ergot dopamine agonist" OR "NEDA") AND ("Parkinson's Disease" OR "PD"). Two authors independently conducted the literature search and screening.
Inclusion and exclusion criteria
The inclusion criteria of this meta-analysis were as follows: participants were not restricted by gender, age, or nationality, and the diagnosis of PD met the relevant PD diagnostic criteria; the interventions compared ropinirole and control treatments; the study design was a randomized controlled trial (RCT). Exclusion criteria: non-RCT research design; patients with a history of brain stereotactic surgery in their medical history, or patients with serious underlying diseases and mental disorders; the study sample was unclear or the relevant outcome data were incomplete.
Quality evaluation
Two evaluators independently completed the data extraction and quality evaluation, and then cross-checked their results. If the opinions were inconsistent, they discussed them with a third evaluator. The Cochrane collaboration's tool [10] for assessing risk of bias was used for quality evaluation, and each item was rated as "low bias", "unclear", or "high bias". "Low bias" means that there is no risk of bias, indicated by a green area on the Cochrane evaluation scale; "unclear" means that the evaluator cannot judge whether there is a bias, indicated by a yellow area; "high bias" indicates that there is a risk of bias, indicated by a red area.
Data extraction
We extracted the number of cases, gender ratio, average age, Hoehn-Yahr stage, treatment dose, and course of treatment in each RCT. The extracted outcome indicators included: the change from baseline in the total activity of daily living score in the Parkinson Disease Rating Scale Part II (UPDRS II); the change from baseline in the total motor function test score in Part III (UPDRS III); and the incidence of adverse events after treatment with ropinirole, such as dizziness, nausea, vomiting, drowsiness, insomnia, hallucinations, and dyskinesia.
Statistical analysis
We used RevMan5.3 statistical software for the meta-analysis. Continuous variables were analyzed using the mean difference (MD) and binary variables using the odds ratio (OR), with the 95% confidence interval (CI) representing each effect size. The heterogeneity of the data was tested with the I² statistic. In this study, the random-effects model was used to calculate the total results.
According to the possible heterogeneity factors, subgroup analysis and sensitivity analysis were performed to clarify the reasons for the heterogeneity. P < .05 indicated that the difference between groups was statistically significant.
Study selection
Through the initial database search, 142 potential documents were initially obtained. With reference to the inclusion and exclusion criteria, 96 articles were excluded by reading the title and abstract. We retrieved and read the full text of the remaining 46 articles, and further excluded 34 articles after full-text reading, for reasons including non-RCT study design and different intervention methods. Three associated RCTs [11][12][13] were excluded because they compared ropinirole with other drugs, which failed to meet the inclusion criteria of this meta-analysis. A total of 12 RCTs [14][15][16][17][18][19][20][21][22][23][24][25] with 3341 patients were finally included, comprising 1855 patients in the ropinirole group and 1486 patients in the control group. The flow chart of study selection is presented in Figure 1.
Quality evaluation of included studies
We evaluated the quality of all the included literature according to the research quality evaluation criteria recommended by Cochrane Handbook 5.1.0. All included studies were RCTs. Four studies [14,16,19,21] only mentioned randomization without specifying the specific methods; the remaining studies [15,17,18,20,[22][23][24][25] all described specific methods. None of the studies described allocation concealment, and none of the studies described blinding. In terms of the completeness of the outcome data, no studies had missing data. None of the 12 RCTs showed selective outcome reporting or other sources of bias. The quality evaluation of the included studies is shown in Figures 2 and 3.
Table 1. The characteristics of included patients.
3.4. Meta-analyses
3.4.1. Changes of UPDRS II score. Three RCTs [19,20,25] reported the changes of UPDRS II scores before and after treatment with ropinirole or control in PD patients. The heterogeneity test indicated that the synthesized results of the various studies had moderate heterogeneity (P = .06, I² = 64%). Meta-analysis results showed that the changes of UPDRS II score in the ropinirole group were significantly lower than those in the control group (MD = -2.23, 95%CI -2.82 to -1.64) (see Fig. 4A).
3.4.2. Changes of UPDRS III score. Four RCTs [19,20,24,25] reported the changes of UPDRS III scores before and after treatment with ropinirole or control in PD patients. The heterogeneity test indicated that the synthesized results of the various studies had no heterogeneity (P = .56, I² = 0%). Meta-analysis results showed that the changes of UPDRS III score in the ropinirole group were significantly lower than those in the control group (MD = -4.93, 95%CI -5.25 to -4.61) (see Fig. 4B).
3.4.3. The incidence of movement disorders. Eight RCTs [15,[18][19][20][21][22]24,25] reported the incidence of movement disorders with ropinirole or control in PD patients. The heterogeneity test indicated that the synthesized results of the various studies had no heterogeneity (P = .58, I² = 0%). Meta-analysis results showed that the incidence of movement disorders in the ropinirole group was significantly higher than that in the control group (OR = 4.08, 95%CI 2.74–6.08) (see Fig. 4C).
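For readers who want to reproduce this kind of pooling outside RevMan, the sketch below implements a DerSimonian–Laird random-effects estimate together with Cochran's Q and the I² statistic. The per-study mean differences and standard errors are invented for illustration and are not the UPDRS data of the included trials.

```python
import numpy as np

def random_effects_pool(effects, ses):
    """DerSimonian-Laird random-effects pooled estimate, 95% CI, and I^2."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2                           # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)     # Cochran's Q
    k = len(effects)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)         # between-study variance
    w_star = 1.0 / (ses**2 + tau2)             # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled), i2

# Hypothetical UPDRS II mean differences (ropinirole - control) and their SEs.
md, se = [-1.8, -2.6, -2.4], [0.5, 0.4, 0.6]
pooled, ci, i2 = random_effects_pool(md, se)
print(f"pooled MD = {pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.0f}%")
```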
The incidence of dizziness (OR = 1.85, 95%CI 1.50–2.28), nausea (OR = 2.17, 95%CI 1.81–2.59), vomiting (OR = 2.73, 95%CI 1.47–5.09), and lethargy (OR = 2.19, 95%CI 1.39–3.44) in the ropinirole group was significantly higher than that in the control group (all P < .05), and no significant differences in the incidence of headache (OR = 1.14, 95%CI 0.79–1.65) and insomnia (OR = 1.06, 95%CI 0.72–1.55) were found between the 2 groups (all P > .05).
Discussions
Drug treatment can improve the symptoms of PD and improve the quality of life of patients. [26] At present, compound levodopa, dopamine receptor agonists, monoamine oxidase B inhibitors, catecholamine-O-methyltransferase inhibitors, etc. are common drugs for the treatment of PD. [27] Dopamine receptor agonists can directly act on postsynaptic dopamine receptors to improve symptoms. [28] Ropinirole is a new type of dopamine D2 receptor agonist. [29] A number of clinical studies [30][31][32] have discussed its therapeutic effect and safety, but the results are not consistent. A previous meta-analysis [33] included 12 RCTs on ropinirole published before 2010 and demonstrated a higher incidence of adverse events of ropinirole, such as somnolence and dyskinesia, in addition to the dizziness, nausea, and vomiting observed in this study; this may be associated with the fact that the adverse effects of ropinirole have been reduced with the development of biopharmaceutical technology. That study mainly focused on the adverse effects of ropinirole, whereas we have focused on both the therapeutic effects and the safety of ropinirole in the treatment of PD. The results of this meta-analysis show that ropinirole has a significant effect in improving PD motor function and the ability of daily living, but its risk of dizziness, nausea, vomiting, and lethargy is also significantly higher. Ropinirole is a non-ergot dopamine receptor agonist that is selective for D2 and D3 dopamine receptors. [34] It has negligible affinity for a wide range of central non-dopaminergic receptors, including α- and β-adrenergic receptors, serotonin receptor types 1 and 2, benzodiazepine receptors, and GABA receptors. [35] In PD patients with motor fluctuations, ropinirole, as an adjunct to L-DA, has been shown in early trials to improve the symptoms of PD. [36] It has been reported that the use of ropinirole as an adjuvant therapy can also significantly reduce the dosage of L-DA. [37] The UPDRS is a scale that evaluates the severity of PD. It combines the subjective and objective perspectives of patients for a more detailed assessment from various aspects, such as different motor symptoms, non-motor symptoms, and motor complications. The safety of ropinirole in the treatment of PD deserves further consideration. Dopamine receptor agonists have been used as anti-PD drugs since 1974, and they offer several theoretical advantages over levodopa therapy. [38] Firstly, they directly stimulate dopaminergic receptors in the postsynaptic striatum, without having to pass through the degenerating pool of nigrostriatal neurons or be converted to dopamine by the reduced striatal terminals. [39] Secondly, they can be designed to preferentially stimulate a specific subset of dopamine receptors. [40] Thirdly, they have a longer half-life than levodopa and do not compete with dietary amino acids to enter the circulation and brain. Dopamine receptor agonists, as adjuvants to levodopa, have played an established role in the treatment of PD. [41] However, they are not as widely used as expected from their pharmacological characteristics, which may be related to the difficulty of managing patients with combined therapy.
Table 2. The meta-analyses on the related complications between 2 groups (columns: variables, number of included RCTs, heterogeneity I², OR, 95%CI, P).
The results of this safety analysis showed that the incidence of adverse events in the ropinirole group was higher than that in the control group. The incidence of adverse reactions, including dizziness, nausea, vomiting, lethargy, hallucinations, dyskinesias, and fatigue, was significantly higher than that of the control group. In previous reports, [42,43] the incidence of dizziness with ropinirole was 6% to 40%, and it was related to the dosage. Dizziness was a common neurological adverse reaction in the ropinirole group in this study. At present, there are no reports on the mechanism of dizziness after ropinirole treatment. Studies [44,45] have reported that the incidence of insomnia is 6% to 26%. The mechanism of sleep disorders may be related to adverse dopaminergic reactions. In animal models, D2 receptors have a dual role: low doses stimulate presynaptic receptors to produce a sedative effect, and high doses stimulate postsynaptic receptors to promote wakefulness; accordingly, low-dose dopamine can cause sleepiness in PD patients, and high-dose dopamine can cause insomnia. [46] In addition, studies have reported that orthostatic hypotension is very common in PD patients. It has been reported that the incidence in inpatients with PD is 43% to 58%, and the incidence in PD patients in the community is 47%. [47] Compound levodopa and dopamine receptor agonists can cause orthostatic hypotension. [33] Studies [48,49] have found that dopamine receptor agonists may cause an insufficient increase in norepinephrine secretion when posture changes, and thereby cause orthostatic hypotension. In addition, some research results [50,51] suggest that ropinirole has a certain value as a substitute for patients with severe headache, insomnia, orthostatic hypotension, constipation, and other symptoms caused by long-term levodopa, and further clinical research is needed in this regard. This study still has certain limitations that must be considered. Firstly, the number of reports retrieved in this study was small and the sample size was not large, which does not fully represent the efficacy and safety of ropinirole in the treatment of PD. In addition, the incidence of movement disorders was heterogeneous across studies, so the effects and side effects differed; more studies on the safety of ropinirole are needed in the future. Secondly, due to the incomplete data of some reports, many useful data could not be extracted; most of the data were from European and American countries and there were few data from Asian countries, which may cause bias in the outcome. Besides, we only included RCTs comparing ropinirole vs placebo in this meta-analysis. More network meta-analyses are needed in the future to evaluate standard ropinirole vs placebo, long-acting ropinirole vs placebo, and standard ropinirole vs specific comparator drugs; moreover, large-scale, high-quality RCTs with a long-term follow-up period are needed for further verification of the role of ropinirole in PD treatment.
Conclusions
As a non-ergot dopamine D2/D3 receptor agonist, ropinirole has been proven to be effective as a monotherapy and as an adjuvant to L-dopa to reduce the symptoms of PD. Ropinirole has shown effective symptom relief in the treatment of patients with PD and is usually well tolerated. Patients treated with ropinirole had a significant improvement in motor function, as determined by the UPDRS score.
However, ropinirole may also be associated with several complications. Therefore, further studies are needed to evaluate the adverse reactions and the tolerability of long-term ropinirole use in PD patients.
2021-11-21T06:07:43.669Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "a68f4c70f496891fb07165cfb88df8e6843accb4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1097/md.0000000000027653", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a68f4c70f496891fb07165cfb88df8e6843accb4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
261822551
pes2o/s2orc
v3-fos-license
Implications of Weak Gravity Conjecture for de Sitter Decay by Flux Discharge We examine implications of the weak gravity conjecture for the mechanisms for discharging cosmological constant via membrane nucleations. Once screening fluxes and membranes which source them enter, and weak gravity bounds are enforced, a generic de Sitter space \underline{must} be unstable. We show that when all the flux terms which screen and discharge the cosmological constant are dominated by quadratic and higher order terms, the bounds from weak gravity conjecture and naturalness lead toward anthropic outcomes. In contrast, when the flux sectors are dominated by linear flux terms, anthropics may be avoided, and the cosmological constant may naturally decay toward smallest possible values. Introduction In the recent sequence of papers [1][2][3][4][5] we have initiated a novel program for addressing the cosmological constant problem and de Sitter space decay by using discharge of 4-form fluxes which screen, and then relax the total cosmological constant.As a result de Sitter space in this approach is intrinsically unstable.It decays into a fractal-like patchwork of regions of ever smaller constant curvature.In its simplest incarnation, our approach is a generalization of the idea pioneered in [6,7] about discharging cosmological constant by nonperturbative membrane nucleation processes, and also benefits from results of [8][9][10][11][12]. Let us briefly summarize the main features of the 'mechanics' of flux discharge here.Regardless of the details of the flux sector, when the cosmological constant is very large, larger than a certain scale Λ * set by the membrane quantum numbers, the discharge proceeds by nucleation of membranes whose size is comparable to the environment curvature radiusi.e. 
the horizon size.During this "boiling" stage the discharge is essentially unsuppressed.One could try a quick estimate S bounce ≃ − ΛoutΛ in , which is O(1) at the cutoff [1][2][3], and which resembles the Hawking-Moss instanton action [13], which supports that nucleation rates are not suppressed.A more incisive analysis of S bounce actually shows that Λ → ∞ and Λ = 0 are branch points of S bounce as opposed to poles, and so the limits are more delicate.We will show here that when Λ → ∞, S bounce → 0, and so in this regime the decay rate is which indicates barrier-less tunneling in the Euclidean theory.This could happen in the limits of the well known analyses of [14,15], for very fast bubble nucleations [16][17][18].In the thin wall limit of tunneling between a false and a true vacuum for a scalar field, the wall tension measures the barrier area, which controls the tunneling rate.Hence a negligible barrier yields an unsuppressed nucleation rate.Even more accurately, the rapid decay during this stage is moderated by a small bounce action which increases toward S bounce ≃ + 12π 2 M 4 Pl Λout as the initial cosmological constant decreases.The resulting decay rate disfavors the largest possible values of the cosmological constant, and favors the smallest ones, as the terminal outcome, because the more curved backgrounds are more unstable.To be relevant, this regime must involve the cosmological constant values below the cutoff; otherwise it is excluded from the effective theory description.When this holds, in this regime the cosmological constant will discharge to the smallest values achievable.Due to the increase of the bounce action with the decrease of the cosmological constant, the resulting distribution of cosmological constant values is skewed toward the smallest possible values, that can be reached given a set of membrane charges.This stage is important for populating the de Sitter landscape in ways which favor the smallest values of the terminal cosmological constant.Conversely, in the approaches which resort to anthropic arguments this stage should be altered, or even excised out of the effective theory, for example by pushing it above the cutoff. 
Once Λ decreases to below the critical scale, discharge rate dramatically slows down.There are very important differences between the dynamics depending on whether discharge processes are controlled by linear 4-form flux terms, or by quadratic and higher power ones.If linear flux terms dominate, as in [1][2][3][4][5], certain processes which dominate the discharge channels in previous approaches, and lead to membrane nucleation rates which are asymptotically independent of the initial and final values of the cosmological constant, can be kinematically completely forbidden, and replaced by processes whose rates depend on the initial cosmological constant.For these processes the decay rate develops an essential singularity at Λ → 0 + , where M 2 Pl = 1/8πG N .This "braking" stage protects the tiniest values of cosmological constant by making those geometries most stable.Together with the faster discharge, "boiling" regime preceding it, the overall dynamics favors de Sitter spaces with the smallest attainable cosmological constant [1][2][3][4][5].This occurs for all generic, natural values of the initial cosmological constant near and below the cutoff of the theory, at a scale M UV ≤ M Pl .This is in sharp contrast to previous works [6,7,[9][10][11][12] where the flux sector does not include linear 4-form terms but starts with F 2 .Then, whenever the initial cosmological constant is not fine tuned, but starts near the cutoff value Λ ∼ M 4 UV < ∼ M 4 Pl , the terms which control the transitions select decay channels with rates without accumulation points, which asymptote to a constant value that depends on the difference of the cosmological constants in the parent and descendant bubbles, and not the individual values, so Here Λ QFT is the cosmological constant from the field theory sector which is being neutralized, and T and Q membrane tension and charge, respectively.In such cases, one finds that the terminal distribution of cosmological constant values can be uniform if the phase space of possible values were equiprobable, and if the preceding "boiling" stage was not long (or completely excluded).In this case one can select the final value by resorting to the anthropic principle.The 'boundary' between the "boiling" and "braking" stages is controlled by the ratio of the cosmological constant before a transition and, in general, the tension and charge of the membrane, in the units of Planck scale.The precise value of the critical value Λ * where the transition happens is detail-dependent (and actually somewhat broad) and we will consider the possibilities below.We stress this presupposes that both the "boiling" and "braking" stages are below the cutoff, within the realm of effective theory.This is not automatic for the "boiling" stage. 
This argument indicates that together, naturalness in the QFT sense and any constraints on membrane charges and tensions relative to the cutoff impose restrictions on the cosmological constant cancellation via screening and membrane discharge.Staying below the cutoff is not only an issue of reliability but of caution born of necessity: above the cutoff lurks the wormhole regime, which is notoriously unreliable [19][20][21][22].So the "brute force" tuning and retuning of parameters in the equations which arise in the semiclassical limit cannot be arbitrarily done to evade those restrictions.In this paper, we explore these restrictions and the model-building requirements they impose.Specifically, we deploy the bounds on the membrane dynamics which arise from the Weak Gravity Conjecture (WGC) [23] and apply them to the generic natural boroughs of the landscape, where the field theory contribution to the vacuum energy is technically natural, of order Λ QFT < ∼ M 4 UV .It should be immediately clear that the WGC bounds affect the nature of de Sitter space fundamentally once we introduce the flux screening and discharge mechanisms.Once the cosmological constant receives the contributions from fluxes, and charged tensional membranes are included, so that fluxes can change by membrane nucleation, the WGC bounds immediately imply that there is no absolutely stable de Sitter space!The reason is simple: to stop the quantum mechanically induced discharges, membranes must be decoupled, which means, the tension of all the membranes that can change the cosmological constant must go to, formally, infinity -or in practice, above the cutoff.This is precisely the limit prohibited by WGC [23].Even if we do not violate WGC, this limit indicates that the WGC bounds, which constrain the ratios of charges, tensions and the cutoff, will affect the specific details of discharge dynamics. We find that when the 4-form sector is dominated by quadratic and higher powers of fluxes, the natural discharges by membranes which obey WGC are mediated by the same types of instantons as in [6,7], which yield the asymptotic decay rate given by (3).The "boiling" regime is pushed out of the range of the effective theory and the initial vacuum energy to be cancelled, near the QFT cutoff, is already in the "braking" regime.Such setups can therefore be naturally used to provide a framework for anthropic selection of the terminal cosmological constant, as in [11]. If we chose to violate WGC for all charged membranes in the theory, the low scale attractor (2) will appear, and the "boiling" stage may reappear below the cutoff.However the WGC violations required to make the setup natural are quite severe, because the charges of affected membranes become very small, and restoring WGC by adding membranes which satisfy the bounds can also restore channels for dominantly uniform discharge, that bring back anthropics, or force more fine tuning. 
In contrast, when the linear flux terms are present, and when they dominate over higher powers in the effective action, the dynamics changes significantly [1][2][3][4][5].When the fluxes are natural, of the order of cutoff-scale QFT vacuum energy Λ QFT ≃ M 4 UV , and they satisfy WGC, both the "boiling" and "braking" stages are below the cutoff, and they, together, favor the nucleation of a sequence of bubbles ending with the smallest possible terminal cosmological constant.Hence in this case the universes with tiny cosmological constant arise naturally, and are favored by evolution without invoking anthropic reasoning.We also note, that even if some of the membranes involved are discharged by processes dominated by quadratic flux terms, the outcome remains unaffected as long as there are channels dominated by linear flux terms. The paper is organized as follows: in the next section we review WGC, starting with point particles and then provide a general set of inequalities for charged membranes in four dimensions.Next, in Section 3. we discuss the effective action for fluxes and charged tensional membranes using magnetic duals of 4-forms, and revisit the mechanics of flux discharges of [1][2][3][4][5].In Section 4. we turn to the implications of WGC and naturalness for flux discharge processes and explain the limitations which arise for effective theory description of de Sitter decay.We give a summary of the results and discuss open questions in the last section. WGC in a Nutshell: a Lightning Review If objects supporting event horizons were really forever, the retrieval of information about the material that went into them may be impossible.In quantum physics, this suggests that event horizons may catalyze unitarity loss, and hence endanger and obstruct quantum mechanics itself [24].Preempting this implies subtle consistency conditions on models of matter coupled to quantum gravity.A specific application concerns charged black holes.As is well known, generic black holes are actually not black since they radiate like black bodies at Hawking temperature.To ensure that they radiate out the charge that went in with the material which formed a black hole, it is necessary that there are sufficiently light charged particles that can stream outside.This imposes a condition on charge per unit mass [23], which is now called the electric WGC: for each conserved gauge charge there must be a sufficiently light charge carrier such that where e and m are the carrier charge and mass.This can be deduced very simply from conservation laws [25,26]: consider a black hole of mass M with charge Q, where by conservation of mass and charge M ≥ i m i (as we allow for energy contribution from neutral sources) and Q = i e i .Thus Applying this to extremal black holes M = Q/ √ G N , which have the largest ratio Q/M due to the horizon regularity constraints yields the strongest bound: Eq. (4), precisely. However many charged black holes can become ultra-cold in the extremal limit, and cease to emit Hawking radiation.If they were to last forever, they would cause problems behaving as troublesome remnants [27].Yet even if Hawking radiation ceases there are nonperturbative, non-thermal processes which lend to black hole discharge [28].Essentially, these are variants of Schwinger charged particle production in background electric fields [29].This decay channel arises thanks to Heisenberg uncertainty principle, whereby a particle-antiparticle pair emerges in an external electric field E. 
The field accelerates virtual particles in the pair away from each other and transfers enough energy to them that they can get on shell instead of annihilating away. A very nice intuitive argument is given in [30], building on the work of [31], which we revisit here.We will model the pair creation and their separation due to the work of the background field as the process of initially accelerating a negative energy "virtual" particle, which gains enough energy due to the acceleration to become a positive energy particle that propagates away, leaving behind a "hole" -a positive energy antiparticle after charge conjugation -that propagates away in the opposite direction.Working in the rest frame of one of the pair, which is also initially the rest frame of the pair, the dispersion relation after a small displacement δz is (ε + eEδz) 2 − ⃗ p 2 = m 2 .Solving for p z , with the initial condition ε = −m at δz = 0 (recall that c = ℏ = k B = 1), Clearly, the square root vanishes at δz = 0 and δz = 2m/eE.In between these two locations, the square root is imaginary, and so the Euclidean momentum π z = −ip z is real, π z = eEδz(2m − eEδz).In this regime the particle is "virtual", with imaginary momentum, being accelerated by the electric field E toward positive energies.Since the mass shells are tilted by the electric field potential energy, the particle can tunnel from the negative mass shell to the positive one, and subsequently propagate out to infinity [30].This occurs when the Euclidean momentum vanishes at δz = 2m/eE, which can be understood as the instant where the particle gains enough energy through the work of the electric field.Indeed, since δW ≃ eEδz, δW ≃ 2m implies that enough energy is transferred at δz ≃ 2m/eE.We can estimate the particle production rate to the leading order by employing WKB approximation, and computing the Euclidean action by integrating over the region where p z is imaginary (the "barrier"): to get the tunneling wavefunction Ψ = e −SE .The rate is given by Γ ∼ |Ψ| 2 , hence In weak electric fields E → 0, the rate goes to zero, while for strong fields E > πm 2 /e the exponential suppression disappears, and the rate is polynomially fast.Applying this formula to a charged black hole, and taking the strongest electric field available just outside of the outer event horizon, yields E = Q/r 2 + where Q is the black hole charge and r where M is the black hole mass, Q its charge, and It turns out that, despite technical subtleties, this equation gives the correct leading order decay rate describing black hole discharge due to nonperturbative quantum effects [28,30].Note, that these processes do not cease in the extremal limit, and that discharge continues even when M = Q/ √ G N .Further note, that while these processes are slow for large black holes, they speed up as the mass decreases.They can also be augmented by spurts of Hawking radiation that can restart the charge loss by light particle emission, and go faster when charge carriers are light, m ≪ M Pl .But at least in principle, as long as the Euclidean action S E can continuously decrease to ≤ 1, the discharge can proceed -and speed up near the end -with the black hole disappearing.As shown in [25], reaching m 2 r 2 + /eQ < 1 is inevitable as long as Eq. 
( 4) holds.To see this, substitute r The resulting inequality after squaring it up and manipulating terms as in [25] becomes Next, demand M ≥ Qm/e to ensure that kinematical constraints can be met, for simplicity define Q = ζ 2 and divide everything by ζ 2 [25]; this maps (10) to The function f (ζ) has two zeros, one at ζ = 0 and the other away, at a location approximately determined by ζ 0 ≃ e 3/2 /2G N m 2 .In between these two values f is negative, and so the inequality cannot be satisfied there.To satisfy the inequality, ζ must exceed ζ 0 .However, as the black hole charge ζ = Q 2 decreases from some large initial value, the root ζ 0 must approach the origin, which means that at fixed e, the mass m must be dialed up to satisfy f ≥ 0 -eventually running afoul of Eq. ( 4).The fastest way for this to occur is along the parameter space curve extremizing f in the ζ direction, which implies Hence the strongest bound comes from imposing (10) at this value of ζ.Substituting in (10), we find that f ≥ 0 implies which is precisely the same as the bound of Eqs. ( 4), ( 5).This implies that as long as there are light particles carrying charge e which obey (4), charged black holes cannot linger forever.Both perturbative and nonperturbative particle production processes can discharge them.Conversely, if (4) does not hold for any charged species, neither discharge channel will be generally accessible, and remnants, and perhaps other problems, would seem to be difficult to avoid [23,25].Eq. ( 4) provides protection from such problems.There is also a magnetic variant of WGC, which deals with the interplay of magnetic solitons with gravity [23] (see also [32]).An issue here is that in the weak coupling regime of gauge theory magnetic monopoles are very heavy, with the mass m monopole ∼ M UV /e 2 , where M UV is the UV cutoff of the theory.Combining this with the WGC bound which means that the cutoff of a weakly-coupled gauge theory must be below the Planck scale.This ensures that the monopole is not a black hole: combining m monopole ∼ M UV /e 2 with the size of the monopole core R monopole ∼ 1/M UV yields immediately m monopole ≤ G N R monopole [23], violating the hoop conjecture which black holes satisfy. The WGC bounds discussed here for point particles can be generalized for extended objects.Specifically, we will be interested in the implications of charged tensional membranes in four dimensions.For them, the electric weak gravity bound generalizes to The statement of WGC then is that in the spectrum of the theory which includes charged membranes, for each type of charge there must be at least one membrane which satisfies the inequality (14).The magnetic form of the bound is a bit more subtle, having been deduced [32] to be the bound on the cutoff of the theory found by estimating the membrane tension in more than 4D by the energy stored in the field sourced by Q, using the expected scaling of the gravitational radius with the higherdimensional gravitational constant, requiring that there exists a magnetic membrane without the horizon, and then dimensionally reducing the result to 4D.The inequalities ( 14), (15) will play important role in our arguments in what follows. 
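The two conditions reviewed in this section can be checked numerically. The sketch below evaluates the WKB exponent S_E ≃ πm²/(eE) for Schwinger-type discharge at the outer horizon of a charged black hole, the point-particle electric bound, and the charge-to-tension ratio for a membrane. It works in units with ħ = c = M_Pl = 1, uses the standard Reissner–Nordström outer horizon radius, drops O(1) factors, and all numerical inputs are illustrative assumptions.

```python
import math

M_PL = 1.0
G_N = 1.0 / M_PL**2   # working in reduced units; O(1) factors are not tracked

def rn_outer_horizon(M, Q):
    """Outer horizon radius of a Reissner-Nordstrom black hole:
    r+ = G (M + sqrt(M^2 - Q^2 / G))."""
    disc = M**2 - Q**2 / G_N
    if disc < 0:
        raise ValueError("super-extremal: no horizon")
    return G_N * (M + math.sqrt(disc))

def schwinger_exponent(m, e, M, Q):
    """WKB exponent S_E ~ pi m^2 / (e E), with E = Q / r+^2 just outside the horizon."""
    r_plus = rn_outer_horizon(M, Q)
    E = Q / r_plus**2
    return math.pi * m**2 / (e * E)

def satisfies_electric_wgc(e, m):
    """Point-particle electric WGC (up to O(1) factors): e >= sqrt(G_N) * m."""
    return e >= math.sqrt(G_N) * m

def membrane_gamma_wgc(Q_mem, T_mem):
    """gamma_WGC = Q M_Pl / T, the charge-to-tension ratio in Planck units."""
    return Q_mem * M_PL / T_mem

# Illustrative numbers only: a near-extremal black hole and a light charged particle.
M, Q = 100.0, 95.0
m, e = 1e-3, 0.1
print("S_E =", schwinger_exponent(m, e, M, Q), "| particle WGC satisfied:", satisfies_electric_wgc(e, m))
print("membrane gamma_WGC =", membrane_gamma_wgc(Q_mem=0.5, T_mem=0.2))
```

An exponent S_E ≲ 1 signals essentially unsuppressed nonperturbative discharge, which is the regime that removes would-be extremal remnants.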
3 Discharges with Linear and Quadratic Flux Terms General theories of 4-forms coupled to gravity and sourced by charged tensional membranes were examined in [4,5].They split into two qualitatively different classes, depending on whether the linear 4-form flux terms are present and dominant in the action, or not.For this reason we will focus here on the more special limiting forms, comprised of only linear and quadratic terms, which simplifies the discussion without any loss of generality.Further, the technical analysis simplifies by replacing the 4-form with its magnetic dual, F ↔ * λ, and replacing the 4-form Lagrangian L(F) with its Legendre transform L(λ) [33,34].As explained in [4], this amounts to the Routhian transformation of the theory.Concretely, we start with motivated by, e.g.[35], where α is a fixed 4-form theory coupling parameter induced by nontrivial CP-breaking effects [35].We then add the boundary term d 4 x 1 3 ϵ µνλσ ∂ µ (λ A νλσ ) to (16), define the new variable Fµνλσ = F µνλσ − (2λ − α) √ gϵ µνλσ and integrate F out.The resulting "bulk" action is [4] We next expand 2 λ − α 2 ) 2 = 2λ 2 − 2αλ + α 2 /2, and absorb the flux-independent term α 2 /2 into the QFT vacuum energy, L QFT + α 2 /2 → L QFT .Further, since 4-form should be viewed as a higher rank gauge theory, we add to (17) the gauge field charges -the charged tensional membranes -and boundary terms required to properly provide junction conditions across the membrane walls.This is motivated by the general lore that quantum gravity does not coexist with global symmetries [36,37], and without charges the 4-form theory would in fact admit generalized higher-form symmetries.When charges are present, those are broken by gauge currents [38,39]. Finally we parameterize 2α = c 1 M 2 UV , and note that the value of c 1 controls how close this term is to the cutoff.Such terms arise naturally in axion physics, when the CP-violating effects in some non-trivial gauge theory max out [35].We could even imagine that such a theory has an axion with a very large decay constant f > ∼ M Pl and where quantum gravity effects break shift symmetry very strongly.In any case, the final effective action for a single gauge sector coupled to gravity, which we will use in what follows, is where T A and Q A are the membrane tension and charge, respectively, the term ∝ K is the Israel-Gibbons-Hawking term for gravity which encodes boundary conditions across membrane walls, and [...] is the jump across a membrane.The coordinates ξ are coordinates along a membrane worldvolume, embedding it in spacetime.The charge terms are We take T A > 0 to avoid problems with ghosts and negative energies.This is a special case of actions discussed in [4], which suffices for our purposes here. 
To study the discharge processes, we Wick-rotate the action (18) to Euclidean time.This Euclidean action controls the nucleation rates Γ ∼ e −S E [14].The analysis is given in detail in [1][2][3][4], and we just summarize it here.To transition to Euclidean picture, we replace The Euclidean action by iS = −S E is (below we drop the subscript E): In the action we set ⟨L E QFT ⟩ = Λ QFT , with Λ QFT the regulated matter sector vacuum energy to an arbitrary loop order, since we consider transitions on backgrounds with local O(4) symmetry that have minimal Euclidean action and dominate the evolution [14,15].When the QFT vacuum energy is natural, QFT/gravity couplings imply Pl λ QFT , where M 4 UV is the QFT UV cutoff and ellipsis denote sub-leading terms [40,41].Hence the total cosmological constant in any bulk patch is where λ can vary from patch to patch across membrane walls. A nucleation of a membrane changes the flux of λ inside it, and hence the total cosmological constant in the interior.The resulting geometry comprises of two de Sitter patches glued along the membrane, with tension and charge controlling the membrane-sourced discontinuity.Away from the membrane, de Sitter patches are described with the metrics where dΩ 3 is the line element on a unit S 3 .The warp factor a is the solution of the Euclidean "Friedmann equation", The prime designates an r-derivative.From here on, we will drop the subscript "total".The boundary conditions induced on a membrane for gauge fields and gravity are [1-3] in the coordinate system where the outward membrane normal vector is oriented in the direction of the radial coordinate; r measures the distance in this direction.Subscripts "out" and "in" refer to the membrane's exterior ("parent de Sitter") and interior ("descendant de Sitter"), respectively.The discontinuities in λ and a ′ follow since a membrane is a Dirac δ-function source of charge and tension.We proceed by solving (23) for Pl , where ζ j = ±1 designate the two possible branches of the square root.Using this and the junction conditions for the magnetic fluxes (24), the value of Λ QFT cancels out and we obtain [1][2][3][4][5] where we take the flux to be made up of a large number of charge units, λ ≫ Q A .If this were not so, we would replace λ → λ out − Q A /4 in (25) (in the large flux case, the distinction between the "in" and "out" fluxes in these equations is irrelevant).The equations ( 25) play a crucial role, since they select the membrane discharge channel which relaxes the vacuum energy, and control relaxation dynamics.The point is, that the right hand side (RHS) of ( 25) can be written as Thus when |q| < 1, the terms in the parenthesis on the RHS of ( 25 with (where k ∈ {out, in}) Its value depends on the membrane radius at nucleation a, which in turn depends on the microscopic parameters and Λ according to From this formula we deduce there are two regimes of bubble nucleation for a fixed set of parameters, depending on which term on the RHS of ( 29) the dominant contribution to the membrane radius comes from.The boundary between the two regimes is controlled by the critical value of the cosmological constant, roughly set by Λ * ≃ 3( T A 4M Pl ) 2 (1 + q) 2 .To infer a more precise description, we can rewrite the bounce action (27) in terms of the out cosmological constant, membrane charge and tension, and the cutoff scale M UV .First, we can evaluate (28) by eliminating the square root terms on the RHS using the junction conditions (25).Next we express Λ in in terms of Λ out and 
membrane charge Q using (21) and the second of (24).Finally we eliminate powers of the membrane radius at nucleation a using Eq. ( 29).Then we can consider specific limits of this action, e.g.fixing the tension T and varying Q and Λ out relative to it to explore the possible tunneling regimes mediated by the instantons of Fig. (1). It is tempting to take a shortcut and merely focus on the leading order terms in this panoply of pieces in the limits Λ → ∞ and Λ → 0 to get the essential behavior of the bounce action (27) while skipping the algebraic tedium.This actually works well in the limit Λ → 0. However the limit Λ → ∞ is more delicate.The reason is that the a(Λ) dependence in (29) and the terms ∝ T a, which the bounce action (27) depends on after the square roots in (28) are evaluated using (25) show that Λ → ∞ and Λ = 0 are branch points of the bounce action viewed as a function of Λ (this can also be seen in scalar field tunneling in, e.g.[15]).In particular, although a(Λ) vanishes as Λ → ∞ in (29), the terms ∝ T a in the expression for the bounce action get multiplied by positive powers of Λ, and hence may not be negligible.Thus it is prudent to determine the exact form of S bounce before taking the limits. The calculation is straightforward albeit tedious; a simplifying step is to write the terms Pl , where the upper sign on the RHS corresponds to k = out and the lower sign to k = in, respectively, given our conventions and definitions here.This gives, after straightforward steps, Here, of course, q < 0 since we are focusing on transitions which reduce Λ out .It is now clear that Λ out → ∞ and Λ out = 0 are branch points.To take the limits it is further convenient to factorize this equation as a product of poles and functions which only include the branch points.A shortcut is to bring (30) under a common denominator, substitute Λ out = Λ out (a), which turns the numerator into a polynomial in a, and factorize the polynomial.We finally obtain It is now clear that for the dS → dS transitions, the bounce action (31) remains nonnegative.Given the positivity of the cosmological constants and the tension, and q < 0, the only way it could ever be negative is if the last factor is negative, or alternatively, if Pl in the second line of (31), we see that this can't happen: Using these equations we also see that if the cosmological constant dependent term dominates the RHS of (29) -i.e. in the limit Λ out → ∞ -the membrane's radius at nucleation is a ≃ Pl < 1.This is the initial regime which we model-build to be in 2 , since in this regime an initial de Sitter space with a large cosmological constant "boils" the bubbles of the smaller cosmological constant that can start populating the landscape.As we noted in the introduction, this is the regime of "barrier-less" tunneling, where dS → dS decays are unsuppressed.Also, (33) grows bigger as Λ out decreases, which means that de Sitter spaces with the largest Λ decay faster than those with a smaller Λ.This shows the trend of evolution: fast decay of the large Λ's and increased stability of subsequent lower Λ spaces.Clearly, if for any reason the "boiling" stage is pushed above the cutoff, the theory would not be under control in that regime, and this regime could not be invoked to set an initial population of Λ's.If this were realized, the landscape can turn into a desert. 
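The competing regimes discussed here can be made concrete with a few lines of Python. The sketch below evaluates the "boiling"/"braking" boundary Λ* ≃ 3(T/4M_Pl)²(1+q)², the pole-type bounce action ≃ 24π²M_Pl⁴/Λ_out of the |q| < 1 channel, and the thin-wall form ≃ (27π²/2) T⁴/(ΔΛ)³ quoted later for the |q| > 1 channel. Parameter values are illustrative assumptions in Planck units, and O(1) prefactors follow the expressions quoted in the text.

```python
import math

M_PL = 1.0

def lambda_star(T, q):
    """Critical cosmological constant separating the 'boiling' and 'braking' stages."""
    return 3.0 * (T / (4.0 * M_PL))**2 * (1.0 + q)**2

def bounce_braking_small_q(lam_out):
    """Asymptotic bounce action for the |q| < 1 channel: S ~ 24 pi^2 M_Pl^4 / Lambda_out."""
    return 24.0 * math.pi**2 * M_PL**4 / lam_out

def bounce_braking_large_q(T, delta_lambda):
    """Asymptotic thin-wall bounce action for the |q| > 1 channel:
    S ~ (27 pi^2 / 2) T^4 / (dLambda)^3."""
    return 27.0 * math.pi**2 / 2.0 * T**4 / delta_lambda**3

# Illustrative membrane data (Planck units): tension, |q|, parent Lambda values, step size.
T, q = 1e-3, 0.5
print("Lambda_* =", lambda_star(T, q))
for lam in (1e-2, 1e-6, 1e-10):
    print(f"Lambda_out = {lam:.1e}  ->  S_bounce ~ {bounce_braking_small_q(lam):.3e}")
print("large-q channel, dLambda = 1e-4  ->  S_bounce ~",
      f"{bounce_braking_large_q(T=1e-3, delta_lambda=1e-4):.3e}")
```

The first channel's action diverges as Λ_out → 0, which is the "braking" behavior that protects small values, while the second saturates at a Λ-independent constant, which is the regime in which anthropic selection would be invoked.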
This regime ends when the total cosmological constant discharges enough so that the second term in (29) dominates.For |q| < 1, that occurs when Λ < Λ * = 3( T A 4M Pl ) 2 .The discharge proceeds by the |q| < 1 instanton in Fig ( 1), for which (ζ out , ζ in ) = (−, +).The action (27) reduces to in this limit, with . This action has a pole at Λ out = 0, and leads to the decay rate of Eq. ( 2), Γ ≃ exp −24π 2 M 4 Pl /Λ out .Just like the discharges are rapid when Λ > 3( T A 4M Pl ) 2 , they are very slow when Λ < 3( T A 4M Pl ) 2 .As a result, total evolution which results from combining the stages controlled by (33) and (34) , if both can be fit below the cutoff in the same effective theory, would exponentially favor smallest values of Λ terminal , "boiling" stage setting up the distribution and "braking" stage preserving it, and therefore provide a framework for naturally solving the cosmological constant problem.On the other hand, if the "boiling" regime were completely excised, being pushed above the cutoff, the resulting landscape could be very desolate with naturally large initial Λ.In this case, the decay of the cosmological constant would be very slow, and if the membrane charges are small, it would need to go through many steps until reaching the terminal Λ values near zero.At practical times, the distribution of Λ would be biased toward larger values, and the prospect of ultimately empty universe [42] would loom large. In the other case, when |q| > 1, the "braking" regime starts when Λ < A , depending on whether linear or quadratic terms dominate q.The 1 Note that the factor of 18π 2 can be easily compensated initially by picking the scale µ = T 1/3 A < M UV /5. 2 Or to never be in this regime, if we prefer to eventually rely on anthropics, see the discussion later on. discharges are mediated by the |q| > 1 instanton in Fig ( 1), for which (ζ out , ζ in ) = (+, +).For this instanton, in the bounce action ( 27), (28), the leading terms in (28) cancel out completely for both "in" and "out" contributions, and the sub-leading terms converge to (see, e.g.[1,2,6,7,14]) The decay rate saturates as the cosmological constant decreases.For example, when the quadratic flux terms dominate in q, the relative stability of de Sitter spaces with small cosmological constant is set by the ratio with the decay rate approaching (3).For a natural value of the screened initial vacuum energy Λ QFT ≃ M 4UV , this immediately shows that we need, somewhat loosely, to have a chance for sufficient longevity of de Sitter regions with small cosmological constant, necessary to fit a realistic late universe cosmology.If the tension were too low, the small curvature de Sitter spaces could decay too fast.However, since the rate is approximately constant, when (36) holds, and if the "boiling" regime ( 33) is not too long, the discharges can produce a multiverse with all values of Λ terminal approximately equally likely, and long lived.If this happens, then invoking anthropics can be used to address the observed smallness of the cosmological constant.As we will see below, this naturally occurs when all flux discharge processes are dominated by quadratic or higher order flux terms. WGC versus Discharges We now impose the WGC bounds of Sec. 
2 on the discharge dynamics of the previous section.We will normalize the inequalities ( 14), (15) using the Planck scale instead of Newton's constant, ignoring the numerical factor of √ 8π ≃ 5, thus working with the original normalizations introduced in [23].The O(1) numerical factors will be of no serious consequence in this work, although in general one should be careful with their accounting since they can affect normalization of some physical parameters, as for example the duration of slow roll inflation and so on [43,44].In any case, the electric and magnetic weak gravity bounds which we will use are and In addition, following the approach of [5], we will impose a bound on the flux variation for each type of flux involved in screening and discharge.The reason for this is that in hindsight, when the 4-forms are generalized by adding a dynamical longitudinal mode and a mass term, which arises naturally whenever the 4-forms realize monodromy field theories in 4D, as in [45][46][47][48][49][50][51][52][53][54][55][56], in the axial gauge the longitudinal modes are monodromy-spanning "axions", whose total range must be limited by at least the requirement that their energy density does not exceed the Planckian energy, so the effective theory with gravity remains under control.Depending on the specifics of the theory, the bounds could be even tighter.Here, imagining that the effective theory enjoys protection from the gauge symmetries of the 4-form sectors, both continuous and discrete, we will require that it remains below the cutoff scale where by ||T µ ν (4 − form)|| we mean the operatorial norm of the stress energy tensor of the 4-form sector, i.e. its largest eigenvalue.With this in place, we are ready to find the implications of these bounds on the dynamics of screening and discharge.Conveniently, the technical aspects of this analysis are simplified by separately considering the purely quadratic flux case, as an avatar of frameworks where the linear flux term is sub-leading, and purely linear term, without any loss of generality. Quadratic Flux Dominance We will explicitly work with a single species of membranes for the most part, since the nucleations proceed one bubble at a time.However we bear in mind that, to be able to approach the observably allowed values of the terminal cosmological constant, we need a multiplicity of different membranes once we impose the field theory cutoff on the flux range [5,11].This means that in the formulas below we should really replace expressions like This means, that in our comparison of the cosmological constant to be cancelled and the cutoff, there is an in-principle multiplicative species factor, counting each flux that contributes to Λ total .Since it is < ∼ O(100) we will ignore it in what follows.When the screening terms in Λ total are dominated by the quadratic flux contributions, such that we must take Λ QFT < 0 to have a chance to cancel it [6,7,11].Then, for a natural value of Λ QFT ∼ M 4 UV ∼ M 4 Pl , the dynamics of discharge produces a nested system of bubbles bounded by membranes.Nucleation processes are controlled by Eqs. ( 25) -( 29), and, crucially, by the value of q.In the limit when quadratic terms dominate, q is given by for each individual flux λ i .The parameter q i is proportional to the slope of the tangent to the "spectral parabola" as depicted in Fig. 
Since fluxes are quantized, λ_i = (1/2) N_i Q_i (the 1/2 comes from our normalization of λ). Then, plugging this into the formula for q_i expresses q_i in terms of N_i and γ_WGC, where γ_WGC is precisely the ratio of charge to tension in Planck units, which is subject to the electric weak gravity bound (37).

Figure 2: Λ-parabola, depicting the spectrum of Λ as a function of the screening flux. In the full multidimensional flux space, Λ is a paraboloid, and here we depict its projection to a single coordinate plane. The gold lines are tangents to the parabola whose slope is q, which controls the discharge process. The discrete points are the actual values of the quantized fluxes and the corresponding cosmological constant of the Λ-discretuum.

Thus, if WGC is obeyed by a membrane "i", γ_WGC > 1, and since we must screen a natural vacuum energy Λ_QFT with multiple units of flux, N_i > 1. Therefore q > 1 for any type of membrane obeying WGC, for all transitions which occur until Λ reaches its smallest positive value. As a result, the discharge processes of the natural vacuum energy by emission of these membranes can only proceed via the instanton with |q| > 1 of Fig. 1. This means the bounce action for these processes asymptotically approaches Eq. (35) as the cosmological constant diminishes. The relevant inequalities are (36) and (38). In fact, we can rewrite all three of these inequalities (the stability bound and the electric and magnetic WGC bounds) in terms of dimensionless ratios, for Λ_QFT ∼ M_UV^4; we refer to the resulting set as (43). Furthermore, since the membrane charge and tension are distributed quantities, we require that they are below the cutoff scale, Q_i < M_UV^2 and T_i < M_UV^3, so that they can be reliably included in the sub-cutoff effective description based on the low energy actions which we deploy here. In fact, these bounds are redundant: the electric WGC bound in (43) follows from the magnetic WGC bound and T_i < M_UV^3. However, we will retain the electric WGC bound for convenience in the calculations below. We note that all of these inequalities can be satisfied simultaneously for some M_UV <∼ M_Pl. On the other hand, model building 'economics' suggests that M_UV is to be looked for not too far below M_Pl in order to be able to use as few fluxes as possible [11]. Add to this the argument of the previous section about the existence of the "boiling" stage, which further reaffirms this expectation.

Finally, we note that, for as long as |q| > 1, the scale that separates the "boiling" from the "braking" stage for quadratic flux is controlled by γ_WGC. When γ_WGC > 1, the "boiling" stage is effectively completely excised out of the quadratic flux effective theory, and so essentially for all values of Λ <∼ M_UV^4 ∼ M_Pl^4 the discharges occur during the "braking" stage, with the bounce action asymptotically approaching the formula given in Eq. (35), S_bounce ≃ 27π^2 T_i^4 / (2 (ΔΛ)^3). Since ΔΛ = 2λΔλ, and initially λ ≃ √Λ with Λ of order |Λ_QFT|, the initial bounce action will start smaller than the asymptotic value, which may permit discharges at an approximately uniform rate, independent of initial and final cosmological constant values. The discharges will cease once the flux becomes small enough to obey the stability bound (36). Thus in this regime the theory where flux contributions to the cosmological constant are dominated by quadratic terms indeed provides an arena where invoking the anthropic principle can be used to address the observed smallness of the cosmological constant, as in [11].
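To illustrate why many units of flux are required here, one can make a rough estimate (a back-of-the-envelope sketch using the normalizations quoted above, λ_i = N_i Q_i/2 and a quadratic flux term 2λ^2; the O(1) factors are not meant to be precise):

|\Lambda_{\rm QFT}| \simeq 2\lambda^{2} \;\Rightarrow\; \lambda \simeq \frac{M_{\rm UV}^{2}}{\sqrt{2}}, \qquad N_i = \frac{2\lambda}{Q_i} \simeq \frac{\sqrt{2}\,M_{\rm UV}^{2}}{Q_i} \gg 1 \quad \text{for } Q_i \ll M_{\rm UV}^{2},

so for any sub-cutoff charge, screening a natural Λ_QFT ∼ M_UV^4 indeed takes many flux quanta, as stated above.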
Higher powers of flux do not affect this conclusion much. If, e.g., 2λ^2 is replaced by L(λ) = 2λ^2 (1 + c_3 λ/M_UV^2 + ...), as in the examples of [4], and the higher order terms are suppressed by the cutoff, or by M_Pl, these terms will be sub-leading in the effective theory. If, on the other hand, the suppression is weaker for any single one of them, that term might compete with the quadratic flux term at large flux, and perhaps even produce novel regimes with tiny total cosmological constant, rearranging the effective theory near them but behaving similarly to the case when the quadratic term dominates [4].

Thus the bottom line is that these processes can discharge the cosmological constant at a constant rate from one value to another, setting up an essentially uniform distribution of values at late times, without automatically favoring any particular value of Λ, including the smallest ones. This sets the stage for invoking the anthropic principle.

In contrast, in [1][2][3][4][5] we have been pursuing a framework where the dominant flux terms are linear, and the instantons which discharge the cosmological constant have |q| < 1, so that their bounce action has a pole at a tiny value of Λ, which, as we argued, naturally favors the smallest Λ without needing anthropics. As we noted in the introduction, one might try to adapt similar processes to the cases when higher powers of the flux dominate and the linear term is absent. One might think that by arbitrarily reducing the membrane charge, q could be made smaller than unity, so that the |q| < 1 instantons of Fig. 1 take over the discharges. If this had been possible, the spectrum of Λ as a function of the fluxes would have been altered, looking like Fig. 3, where for small parent Λ the slope of the tangent to the parabola would have been below unity. However, there are problems with this approach.

Seeing the problem is straightforward. To get |q| < 1, so that the corresponding instantons are activated, formula (42) shows that we must violate the electric weak gravity bound considerably: solving (42) for N_i shows that if |q_i| < 1 and γ_WGC > 1, then N_i < 0.75, which completely excludes the possibility of screening any value of field theory vacuum energy by quantized fluxes. Indeed, to have a chance to screen a natural field theory vacuum energy Λ_QFT ≃ M_UV^4, and then relax the total by subsequent membrane nucleations, we need multiple units of flux: N_i ≫ 1. Hence we need a serious violation of the electric weak gravity bound: if N_i ≫ 1, we find γ_WGC ≪ 1. Next, we can ignore the stability bound of Eqs. (43), since the bounce action in the "braking" stage for this case isn't (35) but (34), and so the stability is enforced by the Λ → 0 pole. However, if we rewrite Λ_* in terms of γ_WGC, we see that γ_WGC ≪ 1 implies that the charges are very small. Combining this with (21) and the second of (24), it then follows that, since M_UV ∼ M_Pl and T_i < M_UV^3, in this regime the individual discharges change the cosmological constant by a tiny amount, ΔΛ/Λ ≪ T_i/M_UV^3. Thus to relax it to nearly zero, we need many subsequent transitions during the "braking" stage, which will be ever slower due to the attractor behavior of (34). Such a slow discharge sequence practically stabilizes the de Sitter background, and it could bring back the specter of the empty universe problem [42], since many small slow steps could result in difficult reheating.
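The smallness of the individual steps quoted above can be seen with a short estimate (a sketch only, assuming one unit of flux is shed per nucleation, Δλ = Q_i/2, using λ ≃ √Λ as before, and taking γ_WGC to be the charge-to-tension ratio in Planck units, Q_i M_Pl/T_i, as defined in the text):

\Delta\Lambda = 2\lambda\,\Delta\lambda \simeq \lambda\,Q_i \;\Rightarrow\; \frac{\Delta\Lambda}{\Lambda} \simeq \frac{Q_i}{\lambda} \simeq \frac{Q_i}{\sqrt{\Lambda}},

and since γ_WGC ≪ 1 means Q_i ≪ T_i/M_Pl, while λ ≃ √Λ ∼ M_UV^2, this gives ΔΛ/Λ ≪ T_i/(M_Pl M_UV^2) ∼ T_i/M_UV^3 for M_UV ∼ M_Pl, which is the estimate quoted in the text.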
The empty universe problem could be averted by adding a single membrane with tension and charge which satisfy the WGC bounds. This would maintain the option of UV-completing the theory, and it would mediate faster nucleations in the "braking" stage, by using the |q| > 1 nucleation processes with the bounce action (35). Even if those satisfy the stability bound (36), the transitions would be generically faster than the ones mediating (48). Discharges mediated by such a membrane (or membranes) can overtake the processes which have the attractor behavior and avert the empty universe. However, since the decay rate in this case would be uniform, this channel would usher the anthropics back. This is the obstacle to some of the proposals in [58] which use |q| < 1 instantons when quadratic (and higher power) fluxes dominate. The point is not that those small values of Λ aren't populated, but how they are populated. Indeed, on general grounds the whole landscape will be populated [59], but the details of the landscape 'demographics' must be looked at on a case by case basis.

Linear Flux Dominance

We have already extensively discussed aspects of Λ discharge when linear flux terms dominate in [1][2][3][4][5]. Here we will revisit some of those results with particular attention given to the role of the WGC bounds. First off, the screened total cosmological constant, as examined e.g. in [5], now contains terms linear in the fluxes; we implicitly take the linear term to dominate over the quadratic one, and complete the squares for the sake of convenience. In the case of multiple fluxes this can be rewritten as a sum over the individual flux contributions. The spectrum of values of Λ is depicted in Fig. 4. Note that in this case it does not matter if Λ_QFT is positive or negative (it had to be negative when quadratic fluxes dominate for screening to work). Because the linear fluxes dominate, they can screen Λ_QFT of either sign. In this limit, the slope parameter q_i no longer involves the flux quantum number, and the equation for Λ_* is still given by the expression (47). The important point is that now the parameter q_i is completely independent of the units of flux N_i, which can be as large as one wishes while q_i stays fixed. We can keep the charge near the cutoff, Q_i ∼ M_UV^2, and γ_WGC <∼ 1, but close to the bound, and ensure that |q_i| < 1 by choosing c^1_i < 1. When γ_WGC <∼ 1, Λ_* might thus be a mere few units of charge above zero, but faster discharges during the "boiling" regime could discharge it to near zero.

Then, to satisfy WGC, we in principle need to have only one charge per gauge group which satisfies the electric bound. Saturating the bound is acceptable when the charge is light enough. So for each gauge group we need one of the membranes to obey γ_WGC ≃ 1. From Eq. (51), for membranes with γ_WGC ≃ 1 the resulting condition on the linear coefficient can be met by c^1_i ≤ 1. Meeting these bounds may be easiest near the Planckian cutoff, M_UV ∼ M_Pl, which also retains an epoch of "boiling" in the theory below the cutoff. So a membrane with a charge of the order of the cutoff and a tension somewhat smaller will also yield |q_i| < 1 while marginally satisfying the WGC bounds. For other membranes which carry the gauge charge of the same group, we can then violate the electric weak gravity bound, which will make achieving |q_i| < 1 much easier. This can happen if, for example, those membranes are domain walls separating multiple vacua after a symmetry breaking at low energies, which could be viewed analogously to particles with fractional charges in QFT.
Alternatively, we may even allow a membrane whose charge to tension ratio obeys the electric weak gravity bound to participate in the discharge of Λ even if it has |q_i| > 1, as long as there are other species of membranes with |q_i| < 1 (which may violate the WGC bounds). The reason is that although the processes for this one specific discharge channel are mediated by the |q| > 1 instanton, which leads to a uniform distribution of the Λ values linked by these transitions, there are many more channels which proceed via the |q| < 1 instantons. Those produce overall distributions of Λ which are biased towards the smallest possible values, and have larger charges that require fewer steps to get the cosmological constant close to zero. Once near zero, those small values remain extremely stable. If a |q| > 1 channel is present, those values would not be absolutely stable: they could decay, for example to regions with Λ < 0, eventually. But as we have seen, once a region of the universe ends up in the "braking" regime of either |q| < 1 or |q| > 1 instanton discharge, it is very stable and very long lived. Yet, because the terminal distribution of Λ is biased towards the smallest possible values, we can still avoid invoking the anthropic arguments to explain why the cosmological constant is not huge. Exactly how close to zero it can be is controlled by the model building aspects of the theory, which we described in some detail in [1][2][3][4][5]. We direct the interested reader to those references for additional information.

Summary

In this paper we have examined in detail the implications of the weak gravity conjecture for the mechanisms for discharging the cosmological constant via membrane nucleations. This is a natural and interesting question, given the role which the WGC bounds play in blocking the existence of eternal event horizons in gravity theories in order to protect unitarity. In the frameworks where the cosmological constant is screened by 4-form fluxes, and then the total background value of Λ is discharged away by the nucleation of membranes, stable eternal de Sitter spaces do not even exist. In fact, starting with a theory which has fluxes and membranes, the only way to recover an eternal de Sitter is to decouple all of the membranes in the theory by, e.g., sending their tensions to infinity. But this would violate the WGC bounds completely; compliance with the conjecture rules out eternal de Sitter just like the WGC bounds applied to charged particles rule out eternal extremal black holes [23]. Conversely, in the example where quadratic flux terms control the discharge processes we saw that if the WGC bounds are violated, de Sitter space will not reach the near-Minkowski limit unless the theory is fine tuned. From this point of view, an eternal de Sitter geometry is really analogous to a remnant, with regions forever removed from a dweller in the space. The details of the WGC bounds, when combined with naturalness of the initial, maximal cosmological constant, place limits on the decay processes of de Sitter space and the cosmological constant which sources it. The possible outcomes fall in two different classes. When the flux terms which control the screening and discharge of the cosmological constant are dominated by quadratic and higher order terms, the bounds from the weak gravity conjecture and naturalness invariably lead toward anthropic outcomes. Interestingly, even if the WGC bounds are deliberately violated, the discharge rates still do not pick a preferential small value. On the other hand, if the theory
involves linear flux terms, which dominate below the cutoff, anthropics can be avoided, because the cosmological constant naturally decays toward the smallest possible values. This is because the evolution is composed of two discharge regimes: the "boiling" stage, followed by a "braking" stage. For the cases when the linear fluxes are present and dominant, with WGC-compliant branes, the "boiling" stage processes have fast discharge rates, which generate descendant regions with curvatures biased towards the smallest possible values of Λ > 0. The subsequent "braking" stage in turn slows down most strongly the decay rates of the regions with the smallest positive Λ after "boiling" has ended. Together, these stages produce a distribution of Λ which is extremely biased towards the smallest values.

Conversely, if the controlling fluxes are dominated by quadratic or higher order terms, purposefully violating the WGC bounds may reproduce the enhanced "braking" at small values of Λ, making de Sitter patches with small Λ more stable than the strongly curved ones. However, the price to pay is that the charges must be very small if the screening and discharge adjustment are to be natural. This changes the "boiling" stage, which could resurrect the empty universe problem. Further, by current lore, UV-completing the theory needs the means to maintain WGC. This can be done by adding a membrane which satisfies the electric WGC bound, charged under the same gauge group as the membranes which are adjusting Λ, one per gauge group. If gravity is universal, these membranes should also partake in the cosmological constant adjustment. If they obey the WGC bounds, their vacuum energy would be dominated by quadratic flux terms and so they would yield discharge processes which are uniform, since these would be mediated by the (+, +) instanton. These processes can be faster than the discharges using WGC-violating membranes, and flatten the distribution of terminal Λ at the small Λ end. This would usher back the anthropics.

Note that our investigation of the discharges was consistently carried out below the cutoff of the effective theory, which avoids direct confrontation with the quandary that is the wormhole regime [19][20][21][22]. In this sense, the WGC limits are useful, since they "regulate" the boundary conditions which quantum gravity imposes on the low energy effective theory, with some confidence that the phenomena retained because they obey the WGC bounds are meaningfully accounted for.

Regarding the final numerical values of Λ, precisely how small those can be depends on the model building details, as noted in previous work [1][2][3][4][5]. More precise model building is required to give a more specific answer. Finally, note that even if the various regimes occur concurrently, i.e. if the discharge processes are more diversified, involving both processes mediated by |q| > 1 and |q| < 1 instantons, as long as at least some channels are dominated by linear fluxes the spectrum of Λ will be skewed toward the smallest values.

Figure 3: Λ-parabola, depicting the spectrum of Λ as a function of the screening flux, but for smaller values of Q_i than in Fig. 2. As a result, the slope of the tangent as measured by q becomes smaller, and if |q| < 1 the discharges would be mediated by the |q| < 1 instanton of Fig. 1. We show in the text that this is unfounded when quadratic flux terms dominate.
Figure 4: Almost-linear spectrum of Λ as a function of the screening flux, projected onto a single coordinate plane. The slope of the tangent is practically a constant, and when |q| < 1 the membrane discharges are mediated by the |q| < 1 instantons of Fig. 1.
2023-09-15T06:42:15.210Z
2023-09-13T00:00:00.000
{ "year": 2023, "sha1": "c10bd48be4cc1081779299eadc288dfd22c1b356", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "51eafdef64914ee300f43db0a4c15dd0083860c0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
18813064
pes2o/s2orc
v3-fos-license
Investigating Warp Size Impact in GPUs

There are a number of design decisions that impact a GPU's performance. Among such decisions, deciding the right warp size can deeply influence the rest of the design. Small warps reduce the performance penalty associated with branch divergence at the expense of a reduction in memory coalescing. Large warps enhance memory coalescing significantly but also increase branch divergence. This leaves designers with two choices: use small warps and invest in finding new solutions to enhance coalescing, or use large warps and address branch divergence by employing effective control-flow solutions. In this work our goal is to investigate the answer to this question. We analyze warp size impact on memory coalescing and branch divergence. We use our findings to study two machines: a GPU using small warps but equipped with excellent memory coalescing (SW+) and a GPU using large warps but employing an MIMD engine immune from control-flow costs (LW+). Our evaluations show that building coalescing-enhanced small warp GPUs is a better approach compared to pursuing a control-flow enhanced large warp GPU.

INTRODUCTION

Conventional SIMT accelerators achieve high performance by executing thousands of threads concurrently. In order to simplify GPU design, neighbor threads are bundled into groups referred to as warps. Employing warp-level granularity simplifies the thread scheduler significantly as it facilitates using coarse-grained schedulable elements. In addition, this approach keeps many threads at the same pace, providing an opportunity to exploit common control-flow and memory access patterns. Underlying SIMD units are utilized more efficiently as a result of executing warps built from threads with the same program counter and behavior. In addition, memory accesses of neighbor threads within a warp can be coalesced to reduce the number of off-core requests. Parallel threads overlap the communication overhead associated with some threads using computations required by other threads to maintain high resource utilization. Previous studies have shown that GPUs are still far behind their potential peak performance as they face two important challenges: branch and memory divergence [6,11]. Upon branch divergence, threads at one side of a branch stay active while the other side has to become idle. Upon memory divergence, threads hitting in the cache have to wait for those that miss. At both divergences, threads suffer from unnecessary waiting periods. This waiting can result in performance loss as it leaves the core idle if there are not enough ready threads. As we show in this work, warp size can impact performance significantly. Small warps, i.e., warps as wide as the SIMD width, reduce the likelihood of branch divergence occurrence. Reducing branch divergence improves SIMD efficiency by increasing the number of active lanes. At the same time, a small warp size reduces memory coalescing, effectively increasing memory stalls. This can lead to redundant memory accesses and increase pressure on the memory subsystem. Large warps, on the other hand, exploit potentially existing memory access localities among neighbor threads and coalesce them into a few off-core requests. On the negative side, a bigger warp size can increase serialization and the branch divergence impact. Figure 1 reports average performance for the benchmarks used in this study (see methodology for details) for GPUs using different warp sizes and SIMD widths. For any specific SIMD width, configuring the warp size to be 1-2X larger than the SIMD width provides the best average performance. Widening the warp size beyond 2X degrades performance. In the remainder of this paper, we use an 8-wide SIMD configuration.
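To make the divergence/coalescing trade-off concrete, the toy CUDA kernel below (our own illustrative example, not taken from the paper; the kernel name and the specific arithmetic are made up) lets neighboring threads take opposite sides of a thread-index-dependent branch while still issuing unit-stride loads and stores. Any warp wider than one thread pays a serialization penalty on the branch, whereas wider warps merge the contiguous accesses of more threads into fewer 64-byte transactions.

// Toy kernel: odd and even lanes diverge, but their memory accesses are
// contiguous. With small warps, fewer lanes sit idle while each side of the
// branch executes; with large warps, more of the unit-stride loads and stores
// are coalesced into a single 64-byte memory transaction.
__global__ void divergeButCoalesce(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float v = in[i];              // neighboring threads read neighboring words
    if (threadIdx.x & 1)          // odd lanes take one side of the branch
        v = v * 2.0f + 1.0f;
    else                          // even lanes take the other side
        v = v * 0.5f - 1.0f;
    out[i] = v;                   // unit-stride, coalescable store
}

Launched, for example, as divergeButCoalesce<<<(n + 255)/256, 256>>>(d_in, d_out, n), every warp diverges on the branch regardless of its size, while the number of memory transactions per warp shrinks as the warp grows, which is precisely the tension studied in this paper.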
In this paper we analyze how warp size impacts performance in GPUs. We start by studying GPUs using different warp sizes ranging from 8 to 64. We use our analysis to investigate the effectiveness of two possible approaches to enhance GPUs. The first approach relies on enhancing memory coalescing in GPUs using large warps. Once memory coalescing is enhanced, this approach uses effective control-flow solutions to address the resulting increase in branch divergence. The second approach aims at minimizing serialization in GPUs using small warps. Since small warps affect coalescing negatively, this approach requires taking extra steps to address memory stalls. We may expect the two approaches to be equally effective as they address the GPU's performance degrading issues, memory and branch divergence, simultaneously. However, our experimental results show that often one outperforms the other. In this work we evaluate both approaches and estimate the performance return of both solutions. We show that starting with a small warp size, and then using dynamic memory divergence solutions, is a better choice. In summary we make the following contributions: We study the impact of warp size on different GPU aspects including memory stalls, idle cycles and performance. We use our findings to identify an effective approach to enhancing GPU performance. We show that the combination of a static and simple approach to deal with branch divergence (using small warps) and dynamic memory stall reduction solutions is an effective approach. We also investigate the alternative and show that using a static solution to enhance coalescing (i.e., using large warps) combined with an ideal dynamic control-flow solution falls behind the first approach due to frequent synchronization of a large number of threads. The rest of the paper is organized as follows. In Section 2 we present background. In Section 3 we review warp size impact. In Section 4 we present our machine models. In Section 5 we discuss methodology. Section 6 reports results. In Section 7 we discuss our findings in more detail. In Section 8 we review related work. Finally, Section 9 offers concluding remarks.

BACKGROUND

In this study we focus on SIMT accelerators similar to NVIDIA Tesla [12]. Stream Multiprocessors (SMs) are processing cores sending memory requests to memory controllers through an on-chip crossbar network. We augment Tesla with private L1 caches for each SM. Each SM keeps context for 1024 threads. Each SM has one thread scheduler, which groups threads into warps and issues them to one SIMD group. Threads within a warp share one common program counter. Control-flow divergence among threads is managed using a re-convergence stack [6]. Diverged threads are executed serially until re-converging at the immediate post-dominator of the branch. Instructions from different warps are issued back-to-back into the deep 24-stage, 8-wide SIMD pipeline. If the warp pool has no ready warp, the pipeline front-end stays idle, leading to underutilization. Under such circumstances, all the warps have already been issued into the pipeline; however, there may still be ready threads that are inactive or waiting due to branch/memory divergence [13]. Current GPUs coalesce the global memory accesses of neighbor threads. We model a coalescing behavior similar to compute capability 2.0 devices [15]. Requests from neighbor threads accessing the same stride are coalesced into one request. Neighbor threads are aggregated over the entire warp. Consequently, the memory accesses of a warp are coalesced into one or more stride accesses. Each stride is 64 bytes. Memory transaction granularity is the same as the cache block size, which is one stride.
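The coalescing model just described (all accesses of one warp that fall in the same 64-byte stride are merged into a single transaction) can be sketched as a small host-side helper. This is our own illustration of the bookkeeping, not code from GPGPU-sim, and it also computes the coalescing-rate metric used in Section 3, which we read as the number of memory accesses divided by the number of memory transactions they generate.

#include <cstdint>
#include <cstdio>
#include <set>
#include <vector>

// Count the 64-byte memory transactions generated by one warp's accesses under
// the modeled rule: all accesses of the warp falling in the same 64-byte stride
// are merged into a single transaction.
static int transactionsPerWarp(const std::vector<uint64_t>& byteAddrs)
{
    std::set<uint64_t> strides;                 // distinct 64-byte lines touched
    for (uint64_t a : byteAddrs)
        strides.insert(a / 64);
    return static_cast<int>(strides.size());
}

int main()
{
    // Example: a 32-thread warp reading consecutive 4-byte words.
    std::vector<uint64_t> warp;
    for (int t = 0; t < 32; ++t)
        warp.push_back(0x1000 + 4ull * t);      // 32 accesses covering 128 bytes

    int tx = transactionsPerWarp(warp);         // -> 2 transactions of 64 bytes
    std::printf("accesses=%zu transactions=%d coalescing_rate=%.1f\n",
                warp.size(), tx, double(warp.size()) / tx);
    return 0;
}

For this unit-stride pattern, doubling the warp to 64 such threads doubles both accesses and transactions, leaving the rate unchanged, whereas for scattered addresses a larger warp raises the chance that two accesses share a stride, which is the coalescing benefit of large warps discussed in the next section.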
WARP SIZE IMPACT

In this section we report how warp size impacts memory access coalescing, the number of idle cycles, and performance. We do not report SIMD efficiency since our observations show that warp size has an insignificant (less than 1%) impact on the activity factor [10]. See Section 5 for methodology.

Memory access coalescing. Memory accesses made by threads within a warp are coalesced into fewer memory transactions to reduce bandwidth demand. We measure memory access coalescing using the following equation:

coalescing rate = (number of memory accesses) / (number of memory transactions)   (1)

We use this definition (equation 1) to estimate coalescing in the different machine models studied in this paper. Figure 2 compares coalescing rates for different warp sizes. As shown in the figure, increasing the warp size improves the coalescing rate in all the benchmarks. An increase in warp size increases the likelihood that memory accesses made to the same cache block reside in the same warp. This increase starts to diminish for warp sizes beyond 32 threads for most benchmarks as the coalescing width (16 32-bit words) becomes saturated. Accordingly, enlarging the warp beyond a specific size returns little coalescing gain.

Idle cycles. Figure 3 reports idle cycle frequency for GPUs using different warp sizes. Idle cycles are cycles when the scheduler finds no ready warps in the pool. Core idle cycles are partially the result of branch/memory divergences which inactivate otherwise ready threads [13]. Small warps compensate for branch/memory divergence by hiding idle cycles (e.g., BFS). On the other hand, for some benchmarks (e.g., BKP), small warps lose many coalescable memory accesses, increasing memory pressure. This pressure increases average core idle durations compared to larger warps (e.g., BKP).

Performance. An increase in warp size can have opposite impacts on performance. Performance can improve if an increase in memory access coalescing compensates for the synchronization overhead imposed by large warps. Performance can suffer if the synchronization overhead associated with large warps outweighs the memory access coalescing gains. Figure 4 reports performance for GPUs using different warp sizes. As reported, in most workloads warp size has a significant impact on performance. Performance improves in BKP, GAS, SR1 and SR2 with warp size. Performance is lost in BFS, MP, MU, NQU and SCN as warp size increases. Other workloads perform best under intermediate warp sizes (16 or 32 threads). We conclude from this section that warp size can impact performance in different ways. We use our findings and explore two approaches to enhance performance further in GPUs. We describe our approaches in the next section.

MACHINE MODELS

In this section we introduce two machine models. Our first model is a coalescing-enhanced small warp machine, referred to as SW+. SW+ uses small warps but comes with ideal coalescing. Intuitively, we study SW+ to measure the performance potential in building small warp machines. Our second model represents a control-flow-enhanced large warp machine, referred to as LW+.
We use LW+ to estimate the performance improvement possible for a processor using a large warp size but equipped with an ideal control-flow solution.

SW+. This machine exploits small warps (as wide as the SIMD width). As described before, small warps lose some coalescing opportunities, leading to redundant memory accesses. SW+ is enhanced to address the performance penalty associated with uncoalesced accesses. SW+ is equipped with ideal coalescing hardware, which coalesces the memory accesses of all threads. The ideal coalescing hardware keeps track of outstanding memory requests (of all threads) and merges read accesses with outstanding accesses whenever possible. This merging captures most of the coalescing opportunities occurring for large warps, effectively compensating the penalty paid by small warps. Our baseline architecture coalesces accesses within one warp; SW+ extends coalescing beyond one warp. The motivation behind investigating SW+ is to study whether investing in a small warp size machine to enhance memory coalescing can lead to high performance returns.

LW+. We investigate LW+ to evaluate whether investing in a large warp size machine to address branch divergence is the right approach. LW+ groups threads into large warps (8x larger than the SIMD width). Exploiting large warps facilitates efficient usage of memory bandwidth by coalescing memory accesses. Large warps exacerbate the idle periods imposed by branch divergence. LW+ addresses this issue as both sides of a divergence are split and remain active in the warp pool in this machine. This splitting alone does not return considerable performance gain since threads may never re-converge again, leading to SIMD underutilization [7]. Therefore, we further enhance this machine by replacing the SIMD lanes with MIMD cores. Splitting upon divergence and using MIMD cores solves both problems. Previous studies have suggested solutions to reduce the impact of branch divergence [6,7,14,2]. DWS adaptively splits the warp upon branch/memory divergence [13]. DWF, TBC, LWM, SBI and SWI propose solutions to capture a considerable amount of MIMD performance on SIMD hardware. Exploiting DWS on top of TBC or LWM can be viewed as a practical approach to building LW+-like processors.

METHODOLOGY

We modified GPGPU-sim [1] (version 2.1.1b) to model large warps and memory coalescing similar to compute capability 2.0 devices [16]. We used the configurations shown in Table 1 to model the baseline microarchitecture described in Section 2. 16 SMs provide a peak throughput of 332.8 Gflops. Six 64-bit wide memory partitions provide a memory bandwidth of 76.8 Gbytes/s at dual data rate. We used a cache block size of 64 bytes, which is equal to the memory transaction chunk. Our evaluations show that increasing the cache block size (and accordingly the transaction chunk) to 128 bytes degrades the overall performance. We used benchmarks from GPGPU-sim [1], Rodinia [3] and CUDA SDK 2.3 [15]. We also included the MUMmerGPU++ [8] third-party sequence alignment program. We use benchmarks exhibiting different behaviors: memory-intensiveness, compute-intensiveness, high and low branch divergence occurrence, and both large and small numbers of concurrent thread-blocks. Table 2 shows our benchmarks and their characteristics.

RESULTS

In this section, we evaluate SW+, LW+ and processors using different warp sizes. In Section 6.1 we present memory access coalescing. Idle cycles are discussed in Section 6.2. Finally, in Section 6.3 we report performance.

Memory access coalescing. Figure 5 reports the coalescing rate. As reported, SW+ provides the best coalescing rate.
SW+ coalesces memory accesses among all threads of an SM to achieve this. Widening the coalescing to merge accesses from all threads can improve the coalescing rate by 21% and 30% compared to coalescing widths of 32 threads and 64 threads, respectively. LW+ is outperformed by a machine using 64 threads per warp. This is due to the fact that LW+'s MIMD execution does not keep threads at the same pace, so their accesses cannot be coalesced. In some cases (e.g., MP and MU) splitting the warp upon divergence prevents merging memory requests. Under such circumstances, redundant memory accesses lead to a poor coalescing rate. As we show later, this does not translate to performance loss since the memory subsystem is not under pressure in these workloads (MU and MP).

Idle cycles. As discussed in Section 2, small warps reduce idle cycles by reducing unnecessary waiting due to branch/memory divergence. This idle cycle saving is partially lost since small warps lose memory access coalescing, pressuring the memory subsystem. SW+ addresses this drawback by exploiting ideal coalescing. As shown in Figure 6, SW+ shows the lowest idle cycle share in most workloads. On average, using short warps combined with ideal coalescing (SW+) reduces idle cycles by 36%, 21% and 26% compared to processors using 8, 16 and 32 threads per warp, respectively. Our analysis shows that synchronizing a large number of threads at every instruction increases the number of idle cycles in LW+ significantly.

Performance. Figure 7 reports performance for SW+, LW+ and processors using different warp sizes. SW+ outperforms all alternatives in most benchmarks. On average, SW+ outperforms LW+ and machines using 8, 16 and 32 threads per warp by 11%, 16%, 12% and 19%, respectively. LW+ synchronizes all threads of the warp at every instruction. Even MIMD cores cannot compensate for this synchronization overhead. Therefore a big part of the MIMD gain is lost due to unnecessary waiting. On average, LW+ outperforms processors using 8, 16, 32 and 64 threads per warp by 5%, 1%, 7% and 15%, respectively.

DISCUSSION

In this section we comment on some practical implications and analyze our results further.

Insensitive workloads. Warp size affects performance in SIMT cores only for workloads suffering from branch/memory divergence or showing potential benefits from memory access coalescing. Therefore, benchmarks lacking both of these characteristics (e.g., FWAL and DYN) are insensitive to warp size.

Enhancing short warps. Among all configurations, a GPU using 8 threads per warp performs worst for many benchmarks (e.g., BKP) as it suffers from very low memory coalescing. SW+'s investment in addressing this issue comes with considerable (up to 95%) returns. However, this small-warp machine performs well for computation-bounded benchmarks (e.g., BFS, MP, MU and NQU), which suffer significantly from branch divergence.

Enhancing large warps. A closer look at the processor using 64 threads per warp shows that it performs well for a few benchmarks (e.g., BKP, GAS, SR1 and SR2), but falls behind for BFS, MU, MP, NNC, NQU and SC, which are prone to branch divergence. Enhancing this processor with an effective control-flow solution, however, shows very high (up to a maximum of 73% in NQU) performance returns.

Ideal coalescing and write accesses. SW+'s coalescing rate is far higher than that of the other machines due to its ideal coalescing hardware. However, ideal coalescing can only capture read accesses and does not compensate for uncoalesced write accesses.
Therefore, SW+ may suffer from uncoalesced write accesses. We found this to be rare, as it can be seen only in the MTM benchmark. The coalescing rate of SW+ in MTM is higher than that of the other machines since it merges many read accesses among warps. However, uncoalesced write accesses downgrade the overall performance of SW+ there.

Practical issues with small warps. The pipeline front-end includes the warp scheduler, fetch engine, instruction decode and register read stages. Using fewer threads per warp affects the pipeline front-end as it requires a faster clock rate to deliver the needed workload during the same time period. An increase in the clock rate can increase power dissipation in the front-end and impose bandwidth limitation issues on the fetch stage. Moreover, using short warps can impose extra area overhead as the warp scheduler has to select from a larger number of warps (with 1024 resident threads per SM, an 8-thread warp implies 128 schedulable warps, versus 16 warps for a 64-thread warp). In this study we focus on how warp size impacts performance, leaving the area and power evaluations to future work.

Register file. Warp size affects register file design and allocation. GPUs allocate all warp registers in a single row [5]. Such an allocation allows the read stage to read one operand for all threads of a warp by accessing a single register file row. For different warp sizes, the number of registers in a row (the row size) varies according to the warp size to preserve this accessibility. The row size should be wider for large warps, to read the operands of all threads in a single row access, and narrower for small warps, to prevent unnecessary reading.

RELATED WORKS

Kerr et al. [10] introduced several metrics for characterizing GPGPU workloads. Bakhoda et al. [1] evaluated the performance of SIMT accelerators for various configurations including interconnection networks, cache size and DRAM memory controller scheduling. Lashgar and Baniasadi [11] evaluated the performance gap between realistic SIMT cores and semi-ideal GPUs to identify appropriate investment points. Dasika et al. [4] studied SIMD efficiency as a function of the SIMD width. Their study shows that the frequent occurrence of divergence in scientific workloads makes wide SIMD organizations inefficient in terms of performance/watt; a 32-wide SIMD is found to be the most efficient design for the studied scientific computing workloads. Jia et al. [9] introduced a regression model relating GPU performance to microarchitecture parameters such as SIMD width, thread blocks per core and shared memory size. Their study did not cover warp size but concluded that SIMD width is the most influential parameter among the studied parameters.

CONCLUSION

Filling the performance gap between current GPUs and their potential requires addressing both memory and branch divergence. Finding the right configuration of a GPU is perhaps the most important decision in achieving high performance. Such static decisions, however, influence the dynamic solutions a system requires to deal with runtime challenges. Choosing the right warp size is one example. Approaching memory coalescing with a static solution (using a large warp size) leaves us with the challenge of finding effective dynamic control-flow solutions. An alternative approach is to deal with control-flow first by using small warps and then investigating dynamic solutions to address memory coalescing. We study the performance potential of both approaches and conclude that the latter approach comes with better performance returns for the benchmarks and configurations used in this work.
2012-05-22T09:26:23.000Z
2012-05-22T00:00:00.000
{ "year": 2012, "sha1": "72f3fe27b93f6d72bb0f7e7ef7e3909ac96add14", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "72f3fe27b93f6d72bb0f7e7ef7e3909ac96add14", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
122080193
pes2o/s2orc
v3-fos-license
Inelastic neutron scattering study on crystal field excitations in PrMg3. We have studied the crystalline-electric-field excitation spectrum in PrMg3, which has a non-Kramers doublet Γ3 ground state, by inelastic neutron scattering experiments on powder and single crystal samples. The experimental results have revealed the development of a dispersive structure in the Γ3-Γ4 and the Γ3-Γ5 excitations below 50 K. Moreover, the excitation spectra of the Γ3-Γ4 transition were found to consist of two peaks, resulting in two branches in the dispersion relation. While one branch shows a relatively strong q-dependence, the other branch is almost dispersionless. The dispersion of the former branch is considered to result from the magnetic dipole exchange interaction, but the origin of the excitation of the latter branch is not clear at present.

Introduction

Cubic compounds containing non-Kramers rare-earth ions are of special interest since a doublet Γ3 ground state with only multipole degeneracies, i.e. with no magnetic dipole but electric quadrupole and magnetic octupole degeneracies, can be realized due to the crystalline electric field (CEF). PrMg3, with the cubic Fe3Al-type structure, has been identified by neutron studies as a Γ3 ground state system [1], [2]. Moreover, its relatively large CEF splitting, ∼50 K between the ground and first excited states, is a favorable condition for the investigation of a pure multipolar system. A recent study of the low temperature properties has revealed no cooperative phase transitions but a broad anomaly in the specific heat measurements [3]. It is suggested that the observed anomaly arises from a quenching of the multipole degrees of freedom of the Γ3 ground state by forming a strongly correlated state through hybridization with the conduction electrons. In this paper, we report the results of inelastic neutron scattering experiments on the powder and the single crystal of PrMg3. Galera et al. [2] reported a noticeable dispersion in the excitation spectra of the Γ3-Γ4 transition at 8 K for a polycrystalline sample. Our experimental results have revealed the development of a dispersive structure in the Γ3-Γ4 and the Γ3-Γ5 excitations below 50 K, and two branches in the dispersion relation. We discuss the origin of the dispersion in terms of the exchange interaction.

Experimental detail

A single crystal of PrMg3 was prepared by the Bridgman method in a Mo crucible sealed under high vacuum. The powder sample was obtained by grinding the single crystals. Inelastic neutron scattering experiments on the powder and the single crystal were performed using the inverted-geometry time-of-flight spectrometer LAM-D at KENS, KEK, Japan, and the IN20 triple-axis spectrometer at the Institut Laue-Langevin, ILL, France, respectively.

Experimental results and discussion

Shown in figure 1 are the powder inelastic scattering spectra at several temperatures. The observed excitations and the resultant crystal field level scheme are consistent with the previous results [1], [2]. The lower-energy excitation around E = 4.8 meV corresponds to the transition from the ground state Γ3 to the 1st excited state Γ4, and the higher one around E = 15.2 meV to the transition from Γ3 to the 3rd excited state Γ5. The 2nd excited state is the singlet Γ1, which has no matrix element with Γ3. At higher temperatures, a peak appears around E = 7.8 meV, which corresponds to the transition from the thermally populated Γ4 to Γ1.
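As a consistency check implied by the quoted energies (our own arithmetic, not stated explicitly in the text), the 7.8 meV peak places the Γ1 singlet at

E(\Gamma_1) \approx E(\Gamma_3\!\to\!\Gamma_4) + E(\Gamma_4\!\to\!\Gamma_1) \approx 4.8\ \mathrm{meV} + 7.8\ \mathrm{meV} = 12.6\ \mathrm{meV} \approx 146\ \mathrm{K},

between Γ4 (4.8 meV) and Γ5 (15.2 meV), while the Γ3-Γ4 splitting of 4.8 meV corresponds to about 56 K, of the order of the ∼50 K ground-to-first-excited splitting quoted in the introduction.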
As shown in the inset of figure 1, the Γ3-Γ4 line-width is found to increase with decreasing temperature below 50 K. On the other hand, the Γ3-Γ5 line-width decreases when the temperature decreases and becomes constant from 10 K down to the lowest temperature. Note that the half-width at half-maximum (HWHM) of the Γ3-Γ5 transition at the lowest temperature reaches ≈0.8 meV, a value much larger than the experimental resolution of ≈0.4 meV.

To obtain detailed information about the origin of the broadening of the excitation peaks at low temperatures, we performed inelastic scattering experiments on the single crystal along the three representative directions Γ-X (figure 2), Γ-K-X and Γ-L of the cubic Brillouin zone. A strong q-dependence was observed, as shown in figure 2. Moreover, the excitation spectra are found to consist of two peaks, as shown in figure 3(a), which presents the constant-Q scan spectra at T = 70 mK along the Γ-X direction with the solid lines representing the fitting results. The q-dependence of the two peaks, obtained by least-squares fitting with double Gaussian functions, is plotted in figure 3(b). Collecting the peak positions of these two peaks at each q, we obtain the dispersion relation reported in figure 3(b). The peak with the stronger intensity and the sharper width, corresponding to the closed circles in figure 3(b), shows a relatively strong q-dependence. The inelastic spectra do not show any significant differences between T = 70 mK and 1.5 K, although a broad peak was observed in the specific heat measurement at T ≈ 0.8 K [3]. The total dispersion reaches about 2 meV, which corresponds to the full width of the lower-energy excitation observed in the powder neutron inelastic scattering. Thus, the increase of the line-width with decreasing temperature shown in the inset of figure 1 arises from the development of the dispersive structure in the excitation curve. We have also performed inelastic scattering experiments on the Γ3-Γ5 transition, only along the Γ-L direction. We observed a relatively smaller dispersion of ∼1 meV with the same q-dependence (not shown) compared with that of the Γ3-Γ4 transition. The double peak structure could not be confirmed within the experimental resolution. The broad line-width of the Γ3-Γ5 transition observed in the powder experiment at low temperatures is also attributed to the development of dispersion. The dispersion curve reported in figure 3(b) shows that the Γ3-Γ4 transition reaches its energy minimum at the L point, for Q = (1/2 1/2 1/2), which is the propagation wave vector of the magnetic structure in the isomorphous compound NdMg3 [6]. Moreover, the energy scale of the total dispersion is comparable with the Curie-Weiss temperature, θp = −36 K, obtained from the temperature dependence of the magnetic susceptibility [2]. These results suggest that the magnetic dipole exchange interaction plays an important role in the formation of the dispersion. The dispersion relation of the excitations, with the coupling treated in the random phase approximation, is expressed in terms of ω, Δ, α and J(q), which are the excitation energy, the transition energy between the CEF states, the matrix element and the exchange interaction, respectively [4]. The strong dispersion observed experimentally for the Γ3-Γ4 excitation can be reproduced by taking into account interactions up to the 4th nearest neighbor (n.n.).
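The RPA expression itself did not survive in this copy of the text. For orientation only, a commonly used RPA form for such crystal-field excitons is sketched below; the authors' exact convention (including factors of 2 and the sign of J) may differ:

\hbar\omega(\mathbf q) \;=\; \sqrt{\Delta\,\bigl[\Delta - 2\alpha^{2} J(\mathbf q)\bigr]} \;\approx\; \Delta - \alpha^{2} J(\mathbf q) \quad (\alpha^{2}|J| \ll \Delta),

which in particular makes the dispersion width proportional to α², the property used in the comparison of the Γ3-Γ4 and Γ3-Γ5 branches below.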
In figure 3(b), the solid line represents the dispersion calculated with α²J_n = 1.5 meV (1st n.n.), 0.35 meV (2nd n.n.) and −0.4 meV (4th n.n.). The negative sign of the 4th n.n. interaction, corresponding to a ferromagnetic-type interaction, is reasonable since the 4th n.n. lies twice as far away as the 1st n.n., in the 〈100〉 direction [5]. The roughly factor-of-two difference in the dispersion width between the Γ3-Γ4 and Γ3-Γ5 transitions can be explained by the above dispersion relation, where the dispersion widths are proportional to α², obtained as |〈Γ3 · Γ4〉|² = 9.3 and |〈Γ3 · Γ5〉|² = 4 for the Γ3-Γ4 and Γ3-Γ5 transitions, respectively. While the dispersion with a strong q-dependence is considered to result from the magnetic dipole exchange interaction, the origin of the broad excitation almost without q-dependence is not clear. Recent NMR and X-ray diffraction measurements at low temperatures have revealed no evidence of a distortion [7]. The single Pr crystallographic site in the paramagnetic phase results in only one degenerate pure magnetic mode, i.e. the longitudinal and transverse modes are degenerate. A recent theoretical study has shown the appearance of a double peak structure in the inelastic spectrum in terms of the doublet degeneracy of the ground state and multipole exchange interactions under cubic point symmetry [8]. In order to clarify the origin of the two components of the excitation, more detailed experiments, such as studies of the magnetic field dependence and Q-dependence of the inelastic spectrum, are necessary and in preparation.
2019-04-19T13:03:35.446Z
2009-03-01T00:00:00.000
{ "year": 2009, "sha1": "64d9dd639b0b82181d4a364628b4238a5bfc1b38", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/150/4/042196", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7e14143ba7c10c58ceedaaba8718b02fc96585f1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }