| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
81348768 | pes2o/s2orc | v3-fos-license | ROLE DEVELOPMENT OF NURSE MANAGERS IN THE CHANGING HEALTH CARE PRACTICE
Rapid changes in today's healthcare industry are reshaping nurses' roles. The emergence of new healthcare systems, the shift from a service to a business orientation, and the extensive redesign of the workplace affect where and how nursing care is delivered, as well as who delivers that care. In the Philippines, the healthcare system is in the midst of a dramatic evolution: the devolution of hospitals to local government units (LGUs), free healthcare for senior citizens, and the no-balance-billing policy, all yielding an increased client-to-nurse ratio. These changes reshape the roles of nurse managers and their practice. The study aimed to understand the experiences of nurse managers and their roles in the dynamics of healthcare practice, and to seek ways to enhance the development of these roles. The study used a descriptive phenomenological qualitative design and utilized the Colaizzi method for data analysis. Using researcher-made guide questions, the study purposively selected nurse managers as key informants from tertiary hospitals that had experienced devolution, applying inclusion and exclusion criteria. The researchers conducted audiotaped interviews that were later transcribed. The study revealed that nurse managers encountered challenges in the workplace, such as deprivation of responsibility and less administrative support, which challenged their responsibilities. The nurse managers identified new roles that they had developed over time, such as the carative-managerial role, the responsibility to educate, and the responsibility to the nursing profession. They believe that the enhancement of such roles is realized through rest and recreation among staff to avoid burnout and exhaustion, continuing professional education, and clinical teaching and mentoring strategies to better understand human behavior. The researchers recommend using the study findings as a basis for improving hospital facilities for the provision of patient safety, revisiting and strengthening the hiring and screening policies for new nurses, providing administrative support for staff development, and conducting further studies on the emerging roles in other settings.
INTRODUCTION
The changing health care system challenges the nurse managers of today. Whether they work on a medical floor in an acute care hospital or in a critical care unit, nurse managers must deal with the people who work with them and for them, and they must use resources wisely. A nurse manager must also recognize the need for growth from within, which then translates into improvement of one's practice. Practicing nurse managers illustrate these role perceptions, citing decision-making and problem solving as major roles in which maintaining objectivity is a special challenge.
Rapid changes in today's health care industry are reshaping the nurse's role. The emergence of new health care systems, the shift from service orientation to business orientation, and an extensive redesign of the workplace directly affect where and how nursing care is delivered as well as those who deliver the care.
To manage well, nurses must understand the health care system, the organizations in which they work, and the resources available to them. In the Philippines, the health care system is in the midst of significant and dramatic development as it continues to evolve rapidly: the devolution of hospitals to local government units, free health care for senior citizens, and the no-balance-billing policy for indigents. These changes have increased the number of patients in hospitals, which in turn has increased the workload of staff nurses and the nurse-to-patient ratio, adding further burdens on nurse managers.
The impact of these changes greatly affects the role of nurse managers in their practice. They are tasked with a wider range of responsibilities, serving as the key both to ensuring quality patient care and to providing an excellent workplace for staff nurses.
During the literature review, no study was found documenting how nurse managers respond with dynamism to the changing health care practice. This study seeks a deep understanding of the phenomenon of role development of nurse managers in the changing health care practice, which must be examined for possible policy development to improve nursing practice.
METHOD
The study utilized qualitative phenomenology to help shape the nurses' perception of a problem or situation and their conceptualization of potential solutions (Streuber, 2003). Purposive sampling was used to select the key informants following the inclusion criteria. The researchers chose as participants nurses, male or female, with an appointment as nurse manager in any area of the hospital, aged 35 years and above, and with at least five years of continuous experience. The study was limited to nurses working in tertiary government hospitals in the provinces of Negros Oriental and Leyte, Philippines.
The researchers engaged in activities with the participants, such as morning rounds, endorsements, and consultative meetings, to enrich information on the actual daily transactions involving nurse managers' decision-making. Each participant signed an informed consent form prior to the conduct of the study, and data were collected through semi-structured interviews. To establish the trustworthiness and credibility of the data, the researchers observed prolonged involvement, persistent observation, and triangulation. Data were transcribed verbatim from the recorded audiotapes.
RESULT
Themes on the challenges of nurse managers in modern-day practice: (1) deprivation of responsibility; (2) less administrative support; (3) challenges to responsibility. Themes on the new roles experienced by nurse managers: (1) collegial responsibility (carative-managerial role); (2) responsibility to educate (intermediary role); (3) responsibility to the nursing profession (instructor's role).
DISCUSSION
On the challenges of nurse managers in modern-day practice. The nurse managers experienced challenges in their day-to-day practice. One nurse manager revealed that she was deprived of responsibility, especially in enforcing hospital policies and protocols. Staff nurses were sometimes uncooperative, not following duty schedules, which later led to disruption of the duty program and the workloads of other staff. As nurse managers, they made sure that, prior to the commencement of the next shift, the receiving staff were complete, at least to ensure continuity of care. In management, nurse managers do not attend only to human resources related to patient care (Towse, 2004); their scope extends to the facilities and resources utilized during the entire process of patient care. When the hospital lacks precautionary measures to ensure patient safety, the nurse managers perceive this as a problem, since it affects the totality of human caring (ANSAP, 2001). On the other hand, there are staff nurses who add burdens or problems that in turn challenge the professional decision-making skills of the nurse managers. The majority of the informants agreed that instances such as not following simple protocols in ward work, absenteeism without prior notification of healthcare team members, and even concerns about a safe workplace and environmental affairs contribute to the disappointments of nurse managers.
On the new roles experienced and developed by nurse managers in the workplace. Collegial responsibility refers to the actions of nurse managers in uplifting the professional maturity of staff nurses. This was evidenced when nurse managers stated that millennial nurses, in their experience, were quite passive in patient interaction. As immediate superiors, they took certain measures to maintain quality care amidst scarcity of resources through preceptorship (Masters, 2005), constant mentoring, and shadowing as part of their carative role toward novice nurses. The nurse managers have to remind new nurses consistently and frequently about the standards of care practice because compliance is low. The head nurses expressed feeling more like instructors than supervisors. One nurse manager said that, based on experience, she double-checked new nurses especially in terms of work performance and the execution of tasks and deliverables to their patients, thus embodying the instructor's function. Head nurses should assume specific roles to have a more focused task for their human resource development. This includes giving orientation to newly hired nurses on policies affecting their practice, as well as mentoring skills.
Conclusion
The lived experiences of the participants in this study confirmed that nurse managers encountered various challenges in the workplace, including deprivation of responsibility and less administrative support, which confront their responsibilities.
The nurse managers identified new roles that they have developed over time, which include collegial responsibility, the responsibility to educate the young breed of healthcare workers, and the responsibility to the profession.
The newly identified roles can be further enhanced through a rest and recreation program among staff to avoid burnout and exhaustion, and through the assumption of specific roles for a more focused human resource development task, such as giving orientation to newly hired nurses on policies affecting their practice, as well as mentoring skills. Nurse managers should also embody knowledge and attitudes responsive to the changing work environment and people. This can be addressed through continuing professional education and clinical teaching strategies to better appreciate human behaviour.
Suggestion
From the findings of this study, the following recommendations are offered in relation to research, education, and practice to contribute to the enhancement of the phenomenon at hand.
The findings will be disseminated to the following: Nurse managers so that findings may be translated into practice; and hospital administrators and chief nurses of the participating hospitals so their concerns will be given consideration.
The findings of this study will be the basis for: Improving facilities in the hospital for the provision of patient safety. This can be made possible through proper requisition to the hospital administrators.
Revisiting and strengthening the hiring and screening policies for new nurses.
Supporting activities for staff development, such as rest and recreation for employees and Continuing Professional Education (CPE) for nurse managers and staff nurses. | 2019-03-18T13:59:07.471Z | 2018-04-01T00:00:00.000 | {
"year": 2018,
"sha1": "0116acee416e8688cd5a7e1e097af26f593d94b1",
"oa_license": "CCBYSA",
"oa_url": "http://jnk.phb.ac.id/index.php/jnk/article/download/272/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7c95ffbd4497d935f3f89a955e9e6ec5031a3120",
"s2fieldsofstudy": [
"Medicine",
"Business"
],
"extfieldsofstudy": [
"Psychology"
]
} |
252262118 | pes2o/s2orc | v3-fos-license | The Antiviral Effects of 2-Deoxy-D-glucose (2-DG), a Dual D-Glucose and D-Mannose Mimetic, against SARS-CoV-2 and Other Highly Pathogenic Viruses
Viral infection almost invariably causes metabolic changes in the infected cell and several types of host cells that respond to the infection. Among metabolic changes, the most prominent is the upregulated glycolysis process as the main pathway of glucose utilization. Glycolysis activation is a common mechanism of cell adaptation to several viral infections, including noroviruses, rhinoviruses, influenza virus, Zika virus, cytomegalovirus, coronaviruses and others. Such metabolic changes provide potential targets for therapeutic approaches that could reduce the impact of infection. Glycolysis inhibitors, especially 2-deoxy-D-glucose (2-DG), have been intensively studied as antiviral agents. However, 2-DG’s poor pharmacokinetic properties limit its wide clinical application. Herein, we discuss the potential of 2-DG and its novel analogs as potent promising antiviral drugs with special emphasis on targeted intracellular processes.
Metabolic Shift in Host Cells during Viral Infection
Viral infections induce virus-specific metabolic reprogramming in host cells [10]. Viral replication entirely relies on the host cell machinery to synthesize viral components, such as nucleic acids, proteins, glycans and lipid membranes [11]. As virus formation depends on the metabolic capacity of the host cell to provide required components and energy in the form of ATP, the majority of viruses modulate the host cell metabolism to optimize the biosynthetic needs for virus growth. Both DNA and RNA viruses have been shown to affect various aspects of host metabolism, including increased glycolysis, elevated pentose phosphate activity, and enhanced amino acid and lipid synthesis. Generally, viruses mainly increase the consumption of key nutrients like glucose and glutamine. However, the precise metabolic changes are often virus-dependent and can vary even within the same family of viruses, as well as with the host cell type [10].
There are multiple ways in which viruses can alter host cell metabolic processes. For example, it was shown that human cytomegalovirus (HCMV), herpesvirus-1 (HSV-1), and adenovirus (ADWT) increase glycolysis and the tricarboxylic acid (TCA) cycle, as well as enhance nucleotide and lipid synthesis [12][13][14].
According to Mayer et al. [15], most virus-infected host cells upregulate glycolysis. Under homeostatic and aerobic conditions, cells maintain ATP production mainly by aerobic glycolysis, followed by feeding pyruvate into the TCA cycle and subsequent utilization of the reduced molecules in the oxidative phosphorylation pathway. By contrast, under anaerobic conditions pyruvate is converted to lactate, which is then exported to the extracellular space. Aside from these anaerobic conditions, Otto Warburg observed that cancer cells utilize glucose mainly via glycolysis even under normal oxygen conditions (normoxia), the so-called Warburg effect [16]. Cells infected by certain viruses appear to adopt similar metabolic alterations to cope with the high anabolic demands of virion production. Whereas the overall virus-induced metabolic abnormalities are unique and virus-specific, the upregulation of the glycolysis pathway is a common phenomenon and, as such, has been considered a target for antiviral therapy. Significantly, increased glycolytic activity has also been described in coronavirus infections, including porcine epidemic diarrhea virus (PEDV), MERS-CoV, and SARS-CoV-2 [17][18][19][20]. One of the common mechanisms induced by a viral infection that leads to glycolysis is upregulation of the phosphatidylinositol-3-kinase (PI3K)/protein kinase B (PKB/Akt) signaling pathway, which regulates the expression of glucose transporters 1 and 4 (GLUT1, GLUT4) [18,[20][21][22]. Elevated expression of GLUTs facilitates an increase in glucose uptake by infected host cells. In response, the levels of glycolytic enzymes, such as hexokinase (HK), lactate dehydrogenase (LDHA), or phosphofructokinase 1 (PFK-1), are also higher [21,23,24]. Viruses that have been confirmed to induce the Warburg effect in host cells are summarized in Table 1. Additionally, viruses such as human T cell leukemia virus type 1 (HTLV-1) utilize GLUT1 as a receptor for entry [25]. While there is no literature investigating the effect of 2-DG on HTLV-1 replication or infection, it is possible that 2-DG could impact HTLV-1 infection through competitive binding with the GLUT1 receptor.
Table 1. Viruses upregulating host cell glycolysis as a mechanism required for their optimal replication.
The Importance of Host Glycosylation Process for Viral Replication
Carbohydrates are one of the most critical components necessary for the synthesis of N-glycans in the endoplasmic reticulum (ER). Numerous viruses rely on the expression of specific viral oligosaccharides crucial for viral entry into the host cells, proteolytic processing, protein trafficking, and evading detection by the host immune system [43,44]. In the N-glycosylation process, a high-mannose core is attached to the amide nitrogen of asparagine in the context of the conserved motif Asn-X-Ser/Thr. It occurs early in protein synthesis, followed by a complex process of trimming and remodeling of the oligosaccharide during transit through the ER and Golgi [43]. It has been shown that several viruses hijack the cellular glycosylation pathway to modify viral proteins. Adding N-linked oligosaccharides to the envelope or surface proteins promotes proper folding and subsequent trafficking using host cell chaperones and folding factors. Often, viruses use calnexin and/or calreticulin to facilitate the proper folding of overexpressed viral proteins [45]. Although the cell cannot distinguish between host and viral proteins, one noted difference is an increase in the level of glycosylation in many viral glycoproteins. During viral evolution, glycosylation sites are easily added and deleted, increasing the possibility of viral modifications. Glycosylation sites have a significant impact on the survival and transmissibility of the virus, and small changes can alter protein folding and conformation, affecting portions of the entire molecule [46]. Further, changes in glycosylation can affect interactions with receptors, influence virus entry, and protect the virus from neutralizing antibodies [44,47]. It has been shown that many viruses even use glycosylation for important functions in their pathogenesis and immune evasion, including influenza A and B, HIV, and hepatitis C [23,28,48,49].
The glycosylation process requires mannose, an essential constituent of N-glycans. Mannose enters the cells via hexose transporters present in the plasma membrane. It is immediately phosphorylated by HK and then either catabolized via mannose phosphate isomerase (MPI) or diverted toward glycosylation through phosphomannomutase-2 (PMM2) [50]. On the other hand, mannose-6-phosphate (M6P) can also be obtained via the MPI-catalyzed isomerization of fructose-6-phosphate, synthesized from glucose-6-phosphate in the glycolysis pathway [51]. Moreover, the virus-induced metabolic shift in infected host cells directly results in higher HK activity, which upregulates the glycosylation process required for the rapid and massive production of infectious progeny to disseminate the infection.
Based on the mechanisms mentioned above, inhibition of glycolysis could be a potent antiviral approach. One of the most widely used glycolysis inhibitors is the D-glucose analog, 2-deoxy-D-glucose (2-DG).
2-DG Molecule and Its Intracellular Effects
The D-glucose analog 2-DG has long been the primary compound for glycolysis inhibition. 2-DG is a synthetic analog of glucose in which hydrogen replaces the hydroxyl group at the second carbon position (Figure 1A). Similar to D-glucose, 2-DG is taken up by the cells mainly via glucose transporters (facilitated diffusion), in particular GLUT1 and GLUT4, although active transport via SGLT transporters also occurs [52,53]. Once intracellular, 2-DG is phosphorylated to 2-deoxy-D-glucose-6-phosphate (2-DG-6-P), producing a charged compound that is trapped inside the cell. However, because it lacks the 2-OH group, it cannot undergo isomerization to fructose-6-P; therefore, it accumulates in the cell and causes inhibition of glycolysis and glucose metabolism [54] (summarized in Figure 1B). 2-DG inhibits hexokinase and phospho-hexose isomerase, which is responsible for the conversion of phosphoglucose to phosphofructose, and thereby blocks glycolysis at the initiation stage. Glycolysis inhibition results in depletion of the ATP required for maintaining intracellular processes and thus facilitates the initiation of autophagy and apoptosis. Insufficient ATP levels inside the host cells also limit the possibility of fast viral replication and new virus production, thus limiting viral infection.
Historically, 2-DG was synthesized from D-glucose by the elimination of the hydroxyl group at C-2. However, eliminating the hydroxyl group at C-2 in the D-mannose molecule leads to the same 2-DG compound ( Figure 1A). Thus, 2-DG can interfere with the metabolism of both D-glucose and D-mannose. Disrupting mannose-related metabolic pathways leads to dysregulation of the N-glycosylation process, a crucial cellular process required for production of viral glycoproteins and virions [55]. As 2-DG is substituted into the growing N-glycan chain for mannose, the oligosaccharide chain is truncated, due to the lack of a hydroxyl group in the C-2 position incorporated in the place of mannose ( Figure 1C). The 2-DG-dependent inhibition activity of HK also affects mannose processing to further increase the negative effect on production of viral glycoproteins.
As an analog of D-mannose, 2-DG diminishes the cellular pool of mannose for protein glycosylation. Insufficient protein maturation results in lower quality of viral glycoproteins and induces ER stress called "unfolded protein response (UPR)". The UPR completely shuts down further protein synthesis to alleviate this stress. Consequently, protein synthesis becomes limited and viral envelope formation is disrupted, resulting in inhibition of viral infection.
Inhibition of glycolysis results in limited ATP generation, which is essential for maintaining cellular function and molecular synthesis. In response to the resulting elevated AMP/ATP ratio, the autophagy process is induced [56]. Prolonged autophagy and/or UPR stress are known to be apoptosis inducers leading to cell death [57,58]. Moreover, 2-DG action has been reported to generate ROS, which is harmful for proteins, nucleic acids, and other intracellular molecules, facilitating programmed cell death [59]. Our team has previously published a detailed description of the intracellular effects of 2-DG action [54].
Antiviral Action of 2-DG
Due to the above-mentioned biological properties of 2-DG and its ability to interfere with various cellular processes, 2-DG is an efficient cytotoxic agent that was tested in different models of viral infections. 2-DG has been explored as a single antiviral agent or as an adjuvant agent for various groups of clinically used drugs.
SARS-CoV-2 and Other Coronaviruses
As mentioned before, 2-DG is a drug candidate for SARS-CoV-2, and its emergency approval in India allowed its clinical use in COVID-19 patients. Numerous in vitro studies have shown that 2-DG efficiently limits SARS-CoV-2 replication [60][61][62]. According to Bhatt et al. [61], SARS-CoV-2 infection of Vero E6 cells induced upregulation of GLUT1, GLUT3, and GLUT4 proteins. GLUT3 is the main transporter determining the high influx of glucose into the cell, as confirmed by using a fluorescent 2-DG analog (2-NBDG). In addition, increased levels of key glycolytic enzymes, such as HKII, PFK-1, and pyruvate kinase 2 (PKM-2), were also observed [61]. Importantly, 2-DG did not exert a cytotoxic effect on non-infected cells up to a 5 mM concentration. In SARS-CoV-2-infected cells, 2-DG [5 mM] reduced cytopathic effects and cell death. Moreover, 2-DG was shown to disrupt the glycosylation of viral proteins, leading to the reduced infectivity of newly formed virions collected from media [62]. The results demonstrated by Bhatt et al. [61] agree with data published by Bojkova et al. [62], showing the inhibitory effect of 2-DG against SARS-CoV-2 replication in the Caco-2 cell line, with an IC50 value estimated at 9.09 mM [62]. Further, Codo et al. showed that a metabolic shift also occurred in inflammatory cells, such as monocytes, in response to SARS-CoV-2 infection [63]. Due to impaired oxidative metabolism, HIF-1α protein becomes upregulated in infected monocytes, resulting in a prolonged pro-inflammatory state. This, in turn, leads to pro-inflammatory cytokine production, which further deteriorates neighboring cells in a paracrine way, including T-cells [63].
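For readers less familiar with how values such as the 9.09 mM IC50 cited above are obtained, the following minimal sketch fits a four-parameter logistic (Hill) curve to dose-response data; the concentration and replication values below are purely hypothetical illustrations, not data from the cited studies.

```python
# Minimal sketch: estimating an IC50 by fitting a four-parameter logistic
# (Hill) curve to hypothetical dose-response data. Not data from the cited work.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic: response vs. inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc_mM = np.array([0.5, 1, 2, 5, 10, 20, 50])                     # 2-DG (mM), illustrative
replication = np.array([0.97, 0.92, 0.80, 0.62, 0.45, 0.22, 0.08])  # relative viral replication

params, _ = curve_fit(hill, conc_mM, replication,
                      p0=[0.0, 1.0, 8.0, 1.0],
                      bounds=([0, 0, 0.01, 0.1], [1, 1.5, 100, 5]))
bottom, top, ic50, slope = params
print(f"Estimated IC50: {ic50:.2f} mM (Hill slope {slope:.2f})")
```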
Before the onset of the SARS-CoV-2 pandemic, 2-DG was already recognized as an efficient antiviral compound against other coronaviruses. In 2014, Wang et al. [17] examined the influence of 2-DG on the porcine epidemic diarrhea virus (PEDV). The authors found that 2-DG [10 mM] inhibits PEDV replication in Vero cells, mainly affecting the glycosylation process and UPR stress induction [17].
Altogether, the above data demonstrate that inhibition of glycolysis/glycosylation is an effective strategy to limit coronavirus infection. Thus, the application of 2-DG as a clinical antiviral therapy is promising and well justified.
Papillomaviruses
2-DG [7.5 mg/mL] was shown to be capable of suppressing the transcription of the human pathogenic papillomavirus type 18 (HPV18) in HeLa cells [64]. Interestingly, the authors found that the 2-DG effect can be abolished using the intracellular Ca2+ antagonist TMB-8. However, the mechanism connecting 2-DG and calcium signaling has not been explained. Antiviral 2-DG action against HPV18 was also demonstrated by Kang et al. [65], who showed that 2-DG downregulates Sp1 transcription factor activity, leading to restricted HPV18 early gene expression. The molecular mechanism of 2-DG action preferentially affected the glycolysis process, whereas reduced ATP generation was involved only to a limited extent [65].
The importance of HPV-16-mediated metabolic shift was also demonstrated in studies by Ma et al. [66]. It has been reported that E6 and E7 HPV oncoproteins, which contribute to viral-induced cervical carcinogenesis, also determine cervical cancer resistance to 5-fluorouracil (5-FU) treatment, due to the upregulated glycolysis and Akt-dependent signaling pathway [66]. In the presence of 2-DG [1 mM], virus-induced glycolysis was inhibited, and cervical cancer was sensitized to 5-FU cytotoxic action.
It should be emphasized that with around 604,000 cases and 341,000 deaths in 2020 [67], cervical cancer is the fourth most common cancer worldwide and the most common malignant transformation caused by HPV infection. It is estimated that about 90% of all women contract an HPV infection in their lives. In about 10% of cases, the virus persists, and cervical intraepithelial neoplasia (CIN) develops. Approximately 1% of women with high-risk HPV infection will develop a cervical carcinoma within 1 to 20 years [68]. The efficient inhibition of HPV replication using 2-DG is an exciting approach that should be verified in the clinics.
Rhinoviruses
Rhinoviruses (RV) are the causative agents of the common cold and other respiratory tract infections. Despite their vast prevalence, effective treatment or prevention strategies are still lacking [69]. It was previously shown that RV infection also induces a metabolic shift in infected host cells, making it potentially susceptible to the antiviral effects of 2-DG. According to Gualdoni et al. [39], 2-DG administration [5 mM] to RV-infected HeLa cells reversed many RV-induced modifications in cellular metabolism. For instance, 2-DG abolished RV-induced glycogenolysis and led to a significant increase in the levels of several fatty acylcarnitines that were decreased during infection. These changes were accompanied by reduced levels of various phospholipids, sphingolipids, and ceramides, which, taken together, suggest a shift away from anabolic and lipogenic processes during 2-DG treatment [39]. In vivo analysis of RV infection in a murine model showed reduced lung inflammation in 2-DG-treated animals [5 mM] with no visible side effects upon treatment. Therefore, 2-DG might be considered as a strategy to combat this widespread pathogen.
Noroviruses
Noroviruses (NoV) are nonenveloped, positive-sense, single-stranded RNA viruses of the Caliciviridae family that cause acute, nonbacterial gastroenteritis globally. Murine norovirus (MNV) infection of macrophages causes changes in the host cell metabolic profile characterized by an increase in central carbon metabolism. Energetic profiling, combined with experiments inhibiting the pentose phosphate pathway (PPP) and OXPHOS with 6-aminonicotinamide and oligomycin A, respectively, revealed that these pathways have a minor role in MNV pathogenesis compared to glycolysis. Investigations of the Akt and AMPK pathways showed that MNV infection caused an increase in Akt activation, while inhibition of Akt signaling reduced both cellular glycolysis and MNV infection [20]. Downregulation of glycolysis with 2-DG [10 mM] treatment significantly reduced MNV infection in the RAW 264.7 cell line. In contrast, 2-DG was ineffective against human astrovirus in vitro, suggesting that metabolic changes and viral dependence upon selected intracellular processes might be virus-specific [20].
Hepatitis B Virus
Hepatitis B virus (HBV) is a partially double-stranded circular DNA virus of the Hepadnaviridae family whose genome comprises approximately 3200 bases with four overlapping open reading frames (ORFs). HBV prevalence varies worldwide, with high rates reported in low-income countries. Approximately 90% of HBV infections in adult patients are acute, while 10% progress to chronic infection [70]. As an intracellular pathogen, HBV depends on co-opting host metabolism for its reproduction. Wu et al. [29] showed that large viral surface antigens (LHBS) interact directly with cellular PKM-2, a key regulator of glucose metabolism in hepatocytes, thereby increasing glucose utilization and lactate production. The authors then showed that 2-DG treatment [0.5-10 mM] caused dose-dependent suppression of HBV protein synthesis, leading to inhibition of viral replication [29]. Supporting data showing the potency of 2-DG action in HBV treatment have been published by Wang et al. [30]. 2-DG treatment [1, 5, 10 mM] significantly decreased glycolysis in HepG2.2.15 cells with a concomitant reduction of intra- and extracellular HBV DNA and RNAs, and the addition of pyruvate did not affect 2-DG action. Moreover, 2-DG modulated the cellular AMP/ATP ratio, thereby activating AMPK kinase and autophagy. These data confirmed that 2-DG could inhibit glycolysis, HBV gene expression, and replication in HepG2.2.15 cells.
Strikingly, chronic hepatitis B infection affects more than 300 million people worldwide and is a leading cause of liver failure and cancer [71]. Although current treatments for chronic HBV suppress viral replication and reduce the risk of liver cancer and end-stage liver disease, they do not achieve complete virus elimination. Thus, treatment interruption may result in a resurgence of viral replication and hepatic disease progression.
Targeting infected host cell metabolism via 2-DG or other glycolysis inhibitors may represent a viable approach that needs to be clinically tested.
Zika Virus
Zika virus (ZIKV), a mosquito-transmitted flavivirus, has spread in recent years from Africa and Asia to Latin America and parts of the United States. It has rapidly emerged as an important pathogen that can cause significant morbidity [42]. Singh et al. [42] showed that ZIKV requires inhibition of AMPK signaling and concomitant upregulation of glycolysis to promote viral replication. This corresponds to increased glucose uptake and mRNA expression of GLUT1, HK2, triosephosphate isomerase (TPI), and monocarboxylate transporter 4 (MCT4) in infected HRvEC endothelial cells. Glycolysis induction is a crucial mechanism for successful ZIKV infection. The use of 2-DG [1 mM] markedly reduced the number of ZIKV Ag-positive HRvEC cells and the viral titer relative to untreated cells. Further, 2-DG treatment increased the phosphorylation of AMPK and restored its activation upon ZIKV challenge. Moreover, ZIKV NS3 protein expression was undetectable during 2-DG treatment [42]. In studies performed by Lin et al. [72], 2-DG [10 mM] was also confirmed to inhibit ZIKV replication in the Vero cell culture model. ZIKV infection during pregnancy can cause microcephaly in newborns, yet the underlying mechanisms remain largely unexplored. Very recently, Pang et al. [73] showed that ZIKV infection caused aberrant metabolism in infected brains. Using LC-MS global proteomic data, the authors were able to identify enriched pathways in ZIKV-infected brains related to amino acid, purine and pyrimidine metabolism. Downregulated pathways included the TCA cycle, OXPHOS, and pyruvate metabolism [73]. The observed inhibition of OXPHOS and the TCA cycle occurred in neurons and neuroblast cells, suggesting a correlation between mitochondrial dysfunction and ZIKV-induced neural cell death. Additionally, downregulated purine and pyrimidine metabolism toward RNA and DNA synthesis at the protein level implied a low proliferation state for cells of ZIKV-infected mouse brains. Limited glucose utilization via the TCA cycle and OXPHOS promotes a glycolytic shift in infected neurons that could be targeted with 2-DG treatment. Taken together, these results confirmed the importance of glycolysis for ZIKV replication and showed the potential of 2-DG treatment in ZIKV infection.
Herpes Simplex Virus 1
The herpes simplex virus (HSV) is the causative agent of herpes infection. Herpes can appear on various parts of the body, most commonly on the genitals and mouth. There are two types of HSV: HSV-1, which primarily causes oral herpes and is generally responsible for cold sores and fever blisters around the mouth and on the face, and HSV-2, which primarily causes genital herpes and is generally responsible for genital herpes outbreaks [74]. According to the WHO, about 3.7 billion people under age 50 (67%) have HSV-1 infection, whereas 492 million people aged 15-49 (13%) worldwide have HSV-2 infection [75].
Abrantes et al. [24] showed that HSV-1 induces glycolysis in infected cells via upregulation of PFK-1 activity, leading to increased ATP content inside the host cells. Interestingly, no data confirm a similar metabolic shift in HSV-2-infected cells. According to Varanasi et al. [76], 2-DG action against HSV-1 changes during different stages of HSV pathogenesis and can have either detrimental or beneficial effects. In the case of HSV-1 infection in mice, upregulated metabolism and glucose uptake were observed in CD4 T cells compared with T cells from naive animals. Treatment with 2-DG reduced glucose uptake and limited the differentiation of effector T cells in the in vitro model. On the other hand, in vivo results demonstrated that 2-DG treatment diminished SK lesions due to reduced effector T cell responses. In this context, 2-DG appeared to inhibit HSV-1 infection by modulating the inflammatory CD4 effector T cell response, which would otherwise have damaging consequences in the unique environment of the eye [76]. On the contrary, 2-DG administration in the acute phase of ocular infection resulted in death from herpes encephalitis in many animals. Taken together, Varanasi et al. [76] concluded that metabolism-modifying drugs should be used with caution, especially during HSV-1 infections. When 2-DG therapy was applied while the virus was still replicating, viral replication was enhanced, which could have lethal consequences due to the virus spreading to the brain.
Distinct observations concerning 2-DG effects against HSV-1 infection have also been reported by Knowles and Person [77]. 2-DG [10 mM] and glucosamine were found to inhibit cell fusion caused by a syncytial mutant of HSV and to also inhibit glycosylation of viral glycoproteins in infected HEL cells. These effects were substantially reduced when mannose was also present during infection. The correlation between fusion and glycosylation in the presence of 2-DG and mannose suggests that the cells cannot fuse if their glycoproteins have a considerably reduced carbohydrate content [77]. According to the presented data, 2-DG appeared to affect the glycosylation process that was crucial for cell fusion. However, the authors did not evaluate the glycolysis process in HEL cells, and the 2-DG effect on glycolysis was not discussed.
On the other hand, studies published by Kern et al. [78] and Shannon et al. [79] demonstrated a lack of antiviral action of 2-DG in the treatment of cutaneous HSV-1 infections in mice and genital HSV-2 infections in mice [78] and guinea pigs [78,79]. In all experimental models, 2-DG treatment (topical for HSV-1 or intravaginal for HSV-2, three times a day with a 0.2% or 0.5% 2-DG solution beginning 3 h after inoculation) did not significantly affect viral replication, lesion development, severity, mortality, or latency.
In summary, studies testing 2-DG efficacy in HSV infections are limited, and there is no significant progress in this area. It seems that positive cell culture data does not translate into positive outcomes in animal studies. We hypothesize that a lack of satisfactory in vivo 2-DG effects could be correlated to the poor pharmacokinetic properties of 2-DG [54]. That issue has been discussed previously and is supported by clinical data from 2-DG studies in the past.
2-DG in Clinical Trials
Due to 2-DG's ability to inhibit glycolysis, ATP synthesis and protein glycosylation, 2-DG appears to be very efficient in killing highly glycolytic cells. As mentioned previously, metabolic shift is characteristic of viral infection and cancer cells. Importantly, all described 2-DG effects are mostly observed in glycolytic cells, without significant influence on the viability of normal cells [80]. Thus, 2-DG has been explored as a cytotoxic compound or an adjuvant agent for various clinically used chemotherapeutic drugs in breast, prostate, ovarian, lung, glioma, and other cancer types. 2-DG was also tested as a radio-sensitizing agent in cancer radiotherapy. The efficacy of 2-DG as an anticancer agent was reviewed in detail in our paper [54]. Due to the importance of cancer treatment for global population healthcare, 2-DG has been tested in oncological clinical trials. Clinical trials registered in India using 2-DG in COVID-19 patients are the first documented cases for clinical use of 2-DG in viral infections.
Despite the numerous preclinical and clinical studies, the use of 2-DG in cancer and viral treatment has been limited. Its rapid metabolism and short half-life (according to Hansen et al., after treatment with an infusion of 50 mg/kg 2-DG, its plasma half-life was only 48 min [81]) make 2-DG a relatively poor drug candidate. Moreover, 2-DG must be given at relatively high concentrations (≥5 mmol/L) to compete with blood glucose [82]. According to Stein et al. [83], a dose of 45 mg/kg received orally on days 1-14 was defined as safe because patients did not experience any dose-limiting toxicities. Notably, at a dose of 60 mg/kg, two patients experienced a dose-limiting toxicity of grade 3 asymptomatic QTc prolongation. According to earlier studies published by Burckhardt et al. [84] and Stalder et al. [85], patients exposed to 2-DG developed non-specific T-wave flattening and QT prolongation, without any event of severe arrhythmia.
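To illustrate why such a short half-life is limiting, the sketch below applies simple one-compartment, first-order elimination kinetics, C(t) = C0·e^(−kt) with k = ln 2 / t½, to the 48 min half-life reported by Hansen et al.; the 5 mM peak concentration is an assumed illustrative value, chosen only because it matches the level reported as necessary to compete with blood glucose.

```python
# Minimal sketch, assuming one-compartment first-order elimination:
# how quickly 2-DG plasma levels fall given the reported 48 min half-life.
import math

half_life_min = 48.0                      # plasma half-life reported by Hansen et al. [81]
k_elim = math.log(2) / half_life_min      # first-order elimination rate constant (1/min)
c0_mM = 5.0                               # assumed peak concentration (mM), not a measured value

for t_min in (0, 48, 96, 144, 240):
    c = c0_mM * math.exp(-k_elim * t_min)
    print(f"t = {t_min:3d} min: {c:4.2f} mM")
# Within about 4 h the concentration falls to ~0.16 mM, far below the >=5 mM
# level needed to compete with blood glucose.
```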
A study of 2-DG in humans published in 2013 reported the results of a combination regimen of 2-DG and docetaxel in patients with advanced solid tumors [86]. In this study, based on the overall tolerability of the 2-DG treatment, the authors used a starting dosage of 63 mg/kg, which was considered safe. At the higher dose of 88 mg/kg, patients presented plasma glucose levels above 300 mg/dL and glucopenia symptoms, including sweating, dizziness, and nausea, mimicking the symptoms of hypoglycemia [86]. Other significant adverse effects recorded during the trial at 63-88 mg/kg doses were gastrointestinal bleeding (6%) and reversible grade 3 QTc prolongation (22%). After the end of the study, one patient died from a serious adverse event of cardiac arrest 17 days after the last dose of 2-DG. An ECG done ten days before death showed persistent T-wave inversion and no QT prolongation [86]. However, it should be noted that the eligibility criteria of patients in this study, who had advanced or metastatic solid tumors, could have played a confounding role in relation to survival and overall patient condition. Clinical testing of 2-DG as a chemotherapy has been performed in humans and demonstrated good tolerability. The antiviral efficacy of 2-DG has been demonstrated in various models with a good tolerability profile. Currently, there are no available reports presenting data about the safety and efficacy of 2-DG in the COVID-19 clinical trials. It is also possible, if not likely, that some hospitalized patients receiving i.v. fluids may also receive glucose at 5%, 10% or even higher concentrations (not limited to COVID-19 patients). This can especially be the case for patients who are intubated and unable to drink and eat. However, our primary goal is to reduce hospitalization rates and improve recovery through early treatment of patients who are not yet receiving i.v. fluid therapy, especially in outpatient groups of the COVID-19-infected population.
Based on the available data, our group is not aware of any specific negative impact of this type of therapy against other viral infections. The only exception could be related to Herpes infection and that is being addressed in the other section of the paper. In general, it is certainly possible that patients receiving glucose i.v. therapies may not benefit from 2-DG, but we can only speculate at this stage.
Nevertheless, the above-described poor pharmacokinetic properties and possible side effects encourage identification of other molecules that affect the same metabolic pathways but could overcome these problems. One possible solution is the use of novel 2-DG analogs, which maintain 2-DG-mediated biological efficacy, but have better drug-like properties, which is essential for successful clinical introduction.
One such analog is WP1122, an acetylated prodrug of 2-DG. Compound WP1122 was prepared from commercially available 3,4,6-tri-O-acetyl-D-glucal in a two-step synthesis: 3,4,6-tri-O-acetyl-D-glucal was first selectively deacetylated to 3,6-di-O-acetyl-D-glucal, which was then treated with an aqueous solution of hydrobromic acid to give WP1122 as the final product.
WP1122 enters the cells via passive diffusion rather than relying upon specific glucose transporters. Inside the cells, WP1122 undergoes deacetylation by esterases, releasing active 2-DG molecules (Figure 2). Further, 2-DG undergoes phosphorylation at the C-6-hydroxyl group and is trapped inside the cells. 2-DG-6-phosphate competitively inhibits HK, blocking phosphorylation of glucose and thereby inhibiting the glycolytic pathway [86]. Furthermore, it has been shown that WP1122 crosses the blood-brain barrier (BBB), making it a promising drug candidate for glioma therapy [87] and possibly viral encephalitis. 2-DG is rapidly metabolized, whereas the prodrug WP1122 releases 2-DG slowly, increasing its half-life. WP1122 demonstrated good oral bioavailability, resulting in a two-fold higher plasma concentration of 2-DG than that achieved via administration of 2-DG alone [87].
In vitro studies showed that WP1122 effectively inhibits glycolysis with 2-10 times more potent action when compared to 2-DG. Moreover, WP1122 was well tolerated by mice in an orthotopic glioma model, even with prolonged exposure [88]. WP1122 is currently licensed to Moleculin Inc. and is in phase 1 clinical trials in COVID-19 patients. According to a statement by Moleculin Inc. [89], WP1122 antiviral action has been tested in cooperation with Goethe University in Frankfurt in Germany and showed complete inhibition of SARS-CoV-2 replication in cell culture. The data indicated that WP1122 could be more beneficial clinically than 2-DG alone.
The other group of 2-DG analogs comprises halogenated D-glucose derivatives, described previously by Lampidis et al. [90]: 2-fluoro-2-deoxy-D-glucose (2-FG), 2-chloro-2-deoxy-D-glucose (2-CG), and 2-bromo-2-deoxy-D-glucose (2-BG) (Figure 3). The authors also evaluated the ability of the halo-derivatives to interact with the HKI enzyme and their cytotoxic potential against glycolytic cancer cells [90]. There appeared to be a negative correlation between the size of the halogen substituent at the C-2 position and drug activity. As halogen size increased (2-FG > 2-CG > 2-BG), the ability to bind the HKI active site decreased, leading to diminished production of 6-O-phosphorylated intermediates, which is crucial for glycolysis inhibition. Interestingly, the authors did not analyze the iodo-analogs, which, according to a recent analysis published by Ziemniak et al. [91], could also have inhibitory potential against HK activity. Further studies are needed to verify whether the halo-analogs could also exert antiviral effects in infected cells.
Perspectives
The SARS-CoV-2 pandemic reminded societies globally of the importance of viral diseases in human health. The rapid spread of SARS-CoV-2 and its millions of infected patients have demonstrated the lack of effective broad-spectrum antiviral treatments. Moreover, as described above, other viral infections such as HBV, HPV, HSV and ZIKV also carry significant health and economic burdens worldwide. All of them generate a high demand for an effective therapy that reduces infections and protects patients, including cancer patients, from the long-term harmful consequences of viral diseases. Activation of glycolysis in infected cells is the common link between various viral infections, making inhibition of glycolysis a promising therapeutic approach for broad-spectrum drugs. As can be seen from the numerous studies described above, 2-DG exhibits effective antiviral activity against many types of viruses, including SARS-CoV-2. Recent clinical trials of 2-DG in SARS-CoV-2-infected patients support the strategy of targeting the metabolism of infected host cells as a way to limit virus growth and dissemination in the infected host. However, based on the cited data from 2-DG clinical trials for oncological indications, including the reported side effects and poor pharmacokinetic properties, there is an unmet need to search for new molecules with an analogous mechanism of action but with significantly better drug-like properties.
In this light, molecules like WP1122 appear to have great potential for development as a drug candidate in antiviral indications. We look forward to the final reports of clinical trials with WP1122 in patients with COVID-19 or other viral infections of public health importance. | 2022-09-15T15:27:18.422Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "5a8eb8320abe9a317571519a24fd3ae918506088",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/18/5928/pdf?version=1663060380",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "28ec7ce414bbc93064189d2fdab8c2be2db530c3",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13010368 | pes2o/s2orc | v3-fos-license | Polymorphisms of genes coding for ghrelin and its receptor in relation to colorectal cancer risk: a two-step gene-wide case-control study
Background Ghrelin, an endogenous ligand for the growth hormone secretagogue receptor (GHSR), has two major functions: the stimulation of growth hormone production and the stimulation of food intake. Accumulating evidence also indicates a role of ghrelin in cancer development. Methods We conducted a case-control study to examine the association of common genetic variants in the genes coding for ghrelin (GHRL) and its receptor (GHSR) with colorectal cancer risk. Pairwise tagging was used to select the 11 polymorphisms included in the study. The selected polymorphisms were genotyped in 680 cases and 593 controls from the Czech Republic. Results We found two SNPs associated with lower risk of colorectal cancer, namely SNPs rs27647 and rs35683. We replicated the two hits in an additional 569 cases and 726 controls from Germany. Conclusion A joint analysis of the two populations indicated that the T allele of the rs27647 SNP exerted a borderline protective effect (Ptrend = 0.004).
Background
Ghrelin, an endogenous ligand for the growth hormone secretagogue receptor (GHSR), is a 28-amino residue peptide predominantly produced by the stomach [1]. In addition to the mature form of ghrelin, several posttranscriptional and post-translational variants have been reported [2]. Two molecular forms (ghrelin and des-acyl ghrelin) resulting from a different post-transcriptional modification of the protein, are observed in human plasma. The ghrelin receptor has two known isoforms: one which is functional (GHSR-1a) and one spliced variant (GHSR-1b) with no known function [3]. Only the acylated form of ghrelin can bind the GHSR-1a receptor [1]. Two main functions of ghrelin are documented: first to stimulate growth hormone (GH) production through the activation of the GHSR-1a in the hypothalamus [4] and second to increase appetite and food intake [5,6] by mechanisms that could be independent of GHSR [7].
Circulating ghrelin levels are correlated with obesity, and insulin may be an important regulator of plasma ghrelin levels in different states of nutrition [8][9][10][11]. Several studies of different populations have shown that levels of ghrelin are related to body size [12], although its mode of action as a regulator of body fat stores remains unclear [12]. Obesity induces a number of metabolic disturbances known as the metabolic syndrome, and is associated with an excess risk of insulin resistance, diabetes, and cardiovascular disease [13][14][15].
Obesity and related metabolic abnormalities are consistent risk factors for colorectal cancer (CRC) [16]. In most studies, obesity (measured as BMI, waist circumference or waist-to-hip ratio) is associated with a relative risk of 1.5 to 2.0 compared with a low or normal BMI [17][18][19][20]. Similar associations for circumference measures have been noted for large or advanced adenomas, the proximate precursor of most colon cancers [19,21,22]. Overall, the data strongly support that metabolic characteristics associated with central or abdominal adiposity increase the risk of CRC.
Polymorphisms in the coding region of the ghrelin gene have been suggested to be involved in the aetiology of obesity and to modulate glucose-induced insulin secretion in different ethnic study groups [27]. Hence, variations in the ghrelin gene influencing the expression and/or function of the ghrelin protein might alter energy balance, contribute to obesity and, indirectly, to CRC risk. We postulate that single nucleotide polymorphisms (SNPs) in the genes coding for ghrelin and its receptor may be associated with an altered risk of CRC.
In this report we investigated the genetic variability of the GHRL and GHSR genes. Using a tagging approach and selecting 7 SNPs in the GHRL gene and 4 SNPs in the GHSR gene, we covered all the known common genetic variation of the two genes. We tested the impact of GHRL and GHSR SNPs on CRC risk in a case-control study based on subjects from the Czech Republic. In a second step, we replicated the best associations in an unrelated German population. To our knowledge this is the first report on polymorphisms of GHRL and GHSR and CRC risk.
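As a side note on the statistics behind a Ptrend value such as the one reported above, the sketch below implements a Cochran-Armitage trend test over genotype counts coded 0/1/2 (here with the exact permutation variance using N−1); the counts are hypothetical, chosen only to illustrate the computation, and are not the study's actual data.

```python
# Minimal sketch: Cochran-Armitage trend test for a case-control SNP association.
# Genotype counts below are hypothetical, not the genotype data from this study.
import numpy as np
from scipy.stats import norm

cases    = np.array([300, 290, 90])   # genotype counts (e.g., AA, AT, TT) in cases
controls = np.array([240, 280, 120])  # genotype counts in controls
scores   = np.array([0, 1, 2])        # additive coding of the minor-allele count

n_cases, n_controls = cases.sum(), controls.sum()
n = n_cases + n_controls
totals = cases + controls

# statistic: observed minus expected score sum in cases, under no association
t_obs = np.sum(scores * cases)
t_exp = n_cases * np.sum(scores * totals) / n
var_t = (n_cases * n_controls / (n * (n - 1))) * (
    np.sum(scores**2 * totals) - np.sum(scores * totals) ** 2 / n
)
z = (t_obs - t_exp) / np.sqrt(var_t)
p_trend = 2 * norm.sf(abs(z))
print(f"z = {z:.3f}, two-sided P_trend = {p_trend:.4f}")
```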
Study populations
In this study we have used two distinct populations: one from Czech Republic and the other from Germany. All SNPs were typed in the Czech population while only two SNPs showing an altered risk of CRC in the first population were typed in the German cases and controls.
Czech population
The population has been extensively described elsewhere [28,29]. Briefly: cases were CRC patients visiting nine oncological departments (two in Prague, one each in Benesov, Brno, Liberec, Ples, Pribram, Usti nad Labem, and Zlin) distributed across all geographic regions of the Czech Republic and representative of the population of the entire country. This study includes 680 patients who could be interviewed and provided biological samples of sufficient quality for genetic analysis. All cases had histological confirmation of their tumor diagnosis.
Controls were selected among patients admitted to five large gastroenterological departments (Prague, Brno, Jihlava, Liberec, and Pribram) all over the Czech Republic, during the same period of the recruitment of cases.
Selected controls were all of Czech Caucasian origin. Only subjects whose colonoscopic results were negative for malignancy, colorectal adenomas or inflammatory bowel disease (IBD) were chosen as controls. Among 739 invited controls, a total of 593 (80.2%) were analyzed in this study (lost controls were similar to those included with respect to sex distribution).
Cases included in this study had a mean age of 61 years (range 27-90), while controls had a mean age of 56 years (range 28-91). Study subjects provided information on their lifestyle habits (smoking, drinking, diet etc.), and family/personal history of cancer, with the use of structured questionnaires.
The genetic analyses did not interfere with diagnostic or therapeutic procedures for the subjects. All participants signed an informed written consent and the design of the study was approved by the Ethical Committee of the Institute of Experimental Medicine, Prague, Czech Republic.
German population
CRC cases comprised 569 German Caucasian index patients (age range 9-88 years, mean 43.6 years) recruited by the six German university hospitals of Bochum (BO), Bonn (BN), Dresden (DD), Düsseldorf (DÜ), Heidelberg (HD) and Munich/Regensburg (MR). Cases were collected as part of a large study on susceptibility to hereditary nonpolyposis CRC (HNPCC). Inclusion criteria for the cases were (i) a family history of CRC or (ii) CRC diagnosed under the age of 50. Analysis for microsatellite instability was applied as a pre-screening test prior to mutation analysis of the MSH2 and MLH1 genes. All cases were found to be microsatellite stable.
The control series consisted of 726 healthy, unrelated, sex- and age-matched blood donors (age range 26-68 years, mean 45.9 years) who were recruited between 2004 and 2006 by the Institute of Transfusion Medicine and Immunology, Faculty of Mannheim, Germany. The matching intervals for age were 'younger than 30 years', five-year groups (30-34, 35-39, ..., 60-64) and 'older than 65 years'. Blood sampling was performed during regular blood donation according to German guidelines. Selected controls were all of German Caucasian origin. The study was approved by the competent local Ethics Committees, and written informed consent was obtained from all individuals.
Selection of tagging SNPs
We aimed at surveying the entire set of common genetic variants in the GHRL and GHSR genes. For this purpose, we used the Tagger algorithm [30], which was developed to select maximally informative sets of tagSNPs in candidate-gene association studies. All polymorphisms in the region of the two genes of interest with a minor allele frequency (MAF) ≥5% in Caucasians from the International HapMap Project (version 22; http://www.hapmap.org) were included. Tagging SNPs were selected with the use of the Tagger program within Haploview (http://www.broad.mit.edu/mpg/haploview/; http://www.broad.mit.edu/mpg/tagger/) [31,32], using pairwise tagging with a minimum r2 of 0.8.
This resulted in a selection of 11 tagging SNPs, 7 for the GHRL gene (with a mean r2 of 0.967 between the selected SNPs and the SNPs they tag) and 4 for the GHSR gene (with an r2 of 0.989). Our selection thus captures to a very high degree the known common variability in these genes.
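For readers unfamiliar with pairwise tagging, the greedy selection logic can be sketched as follows. This is a minimal illustration in the spirit of the Tagger approach, assuming a precomputed pairwise r2 table; the SNP names and r2 values are hypothetical, and this is not the actual implementation used in the study.

```python
# Minimal sketch of greedy pairwise tagSNP selection (Tagger-style).
# The r2 table and SNP names below are illustrative placeholders.

def select_tag_snps(snps, r2, threshold=0.8):
    """Greedily pick tags so every SNP has r2 >= threshold with some tag.

    snps      -- list of SNP identifiers
    r2        -- dict mapping frozenset({a, b}) -> pairwise r2
    threshold -- minimum r2 for a SNP to count as tagged (0.8 in the study)
    """
    def pair_r2(a, b):
        return 1.0 if a == b else r2.get(frozenset((a, b)), 0.0)

    untagged, tags = set(snps), []
    while untagged:
        # Pick the SNP covering the largest number of still-untagged SNPs.
        best = max(snps, key=lambda s: sum(pair_r2(s, u) >= threshold
                                           for u in untagged))
        tags.append(best)
        untagged -= {u for u in untagged if pair_r2(best, u) >= threshold}
    return tags

# Toy example: hypothetical rsA tags rsB (r2 = 0.95); rsC stands alone.
r2_table = {frozenset(("rsA", "rsB")): 0.95, frozenset(("rsA", "rsC")): 0.10}
print(select_tag_snps(["rsA", "rsB", "rsC"], r2_table))  # ['rsA', 'rsC']
```

Raising the r2 threshold above 0.8 would yield more tags but tighter coverage of the untyped variants.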
DNA extraction and genotyping
DNA was extracted from blood samples with standard proteinase K digestion followed by phenol/chloroform extraction and ethanol precipitation. The order of DNAs from cases and controls was randomized on PCR plates in order to ensure that an equal number of cases and controls could be analyzed simultaneously. All the genotyping was carried out using the Taqman assay, according to manufacturer's protocol. The pre-designed Taqman assays were purchased from Applied Biosystems (Foster City, CA).
All samples that did not give a reliable result in the first round of genotyping were resubmitted to up to two additional rounds of genotyping. Data points that were still not filled after this procedure were left blank. Repeated quality control genotypes (8% of the total) showed an average concordance of 99.5%.
Statistical Analysis
The frequency distribution of genotypes was examined for the cases and the controls. Hardy-Weinberg equilibrium was tested in the cases and in the controls separately by a chi-square test. We used logistic regression for multivariate analyses to assess the main effects of the genetic polymorphisms on CRC risk using a codominant inheritance model. The most common allele in the controls was assigned as the reference category. All analyses were adjusted for age and sex.
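A minimal sketch of these two steps, assuming the scipy and statsmodels libraries, is shown below; the genotype counts and the data frame are synthetic placeholders, not the study data.

```python
# Minimal sketch: HWE chi-square test and a codominant logistic model.
# All numbers and the data frame are synthetic, not the study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chisquare

def hwe_chi_square(n_aa, n_ab, n_bb):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)                  # frequency of allele A
    expected = np.array([p**2, 2 * p * (1 - p), (1 - p)**2]) * n
    # ddof=1 because one parameter (the allele frequency) was estimated.
    return chisquare([n_aa, n_ab, n_bb], f_exp=expected, ddof=1)

print(hwe_chi_square(300, 250, 43))                  # hypothetical controls

# Codominant model: genotype as a categorical factor, with the common
# homozygote as reference, adjusted for age and sex.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "case": rng.integers(0, 2, 500),
    "genotype": rng.choice(["CC", "CT", "TT"], 500, p=[0.5, 0.4, 0.1]),
    "age": rng.normal(60, 10, 500),
    "sex": rng.integers(0, 2, 500),
})
fit = smf.logit("case ~ C(genotype, Treatment('CC')) + age + sex", df).fit()
print(np.exp(fit.params))      # exponentiated coefficients = odds ratios
```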
Additionally, we performed logistic regressions stratifying for cancer site (colon versus rectum) and smoking (smokers versus non-smokers and heavy smokers versus light smokers) or alcohol drinking (drinkers versus non-drinkers) habits.
For SNPs rs27647 and rs35683, the analyses were performed in both populations. Odds ratios were calculated for the two populations separately and jointly.
All the analyses were performed with STATA software (StataCorp, College Station, TX).
Results
We performed a case-control study using two different sets of SNPs in two distinct populations of German and Czech origin. The first SNP set consisted of 11 tagging SNPs, which we tested in 680 cases and 593 controls from the Czech Republic. The second SNP set consisted of the two best hits, namely SNPs rs27647 and rs35683, which we replicated in an additional 569 cases and 726 controls from Germany. The genotype frequencies among the controls were in Hardy-Weinberg equilibrium for all the SNPs in both populations.
Results for the Czech population
The distribution of the genotypes and their odds ratios (ORs) for association with CRC risk are shown in Table 1.
We found that, in this sample set, carriers of the T allele of SNP rs27647 had a decreased risk of CRC, with an OR of 0.70 (95% confidence interval (95% CI) 0.55-0.91; P value = 0.013) for C/T heterozygous individuals and an OR of 0.57 (95% CI 0.40-0.80; P value = 0.002) for T/T homozygous individuals (P trend = 0.001).
Moreover, we found that carriers of the C allele of SNP rs35683 had a decreased risk of CRC, with an OR of 0.71 (95% CI 0.51-0.98; P value = 0.04) for C/C homozygous individuals. The OR of 0.80 for heterozygous individuals was not statistically significant (95% CI 0.60-1.05; P value = 0.42) (P trend = 0.02).
We did not find any statistically significant association between the other SNPs and CRC risk.
Results for the German population
The distribution of the genotypes and their odds ratios (ORs) for association with CRC risk are shown in Table 2.
The associations found in the Czechs were not confirmed in the German population. However, the effect of SNP rs27647 was statistically significant in a joint analysis of data from the two populations. In the joint group, the T allele exerted a protective effect, with an OR of 0.82 (95% CI 0.69-0.98; P value = 0.02) for C/T heterozygous individuals and an OR of 0.73 (95% CI 0.58-0.93; P value = 0.01) for T/T homozygous individuals (P trend = 0.0043).
Applying the Bonferroni correction for multiple testing, the P trend of rs27647 in the joint population remained borderline significant (P trend = 0.0043 × 11 = 0.047), although neither the OR for heterozygotes nor that for homozygotes remained statistically significant after correction. A test for heterogeneity indicated that the results for the two populations were statistically different (P heterogeneity = 0.014).
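The paper does not state how the heterogeneity test was computed; one standard option is Cochran's Q on the log odds ratios, sketched below. The Czech OR and upper CI limit for T/T homozygotes are taken from the text above, while the German values here are hypothetical placeholders, since Table 2 is not reproduced in the text.

```python
# Minimal sketch of a Cochran's Q heterogeneity test between two ORs.
# SEs are recovered from the 95% CI upper bounds; the German OR and CI
# are hypothetical placeholders.
import numpy as np
from scipy.stats import chi2

def cochran_q(ors, ci_upper):
    log_or = np.log(ors)
    se = (np.log(ci_upper) - log_or) / 1.96       # SE from the CI half-width
    w = 1.0 / se**2                               # inverse-variance weights
    pooled = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - pooled) ** 2)
    return q, chi2.sf(q, df=len(ors) - 1)         # Q ~ chi2 with k-1 df

# T/T homozygotes: Czech OR 0.57 (upper CI 0.80); German values assumed.
q, p = cochran_q(np.array([0.57, 0.95]), np.array([0.80, 1.40]))
print(f"Q = {q:.2f}, P_heterogeneity = {p:.3f}")
```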
Stratified analysis and interactions
Analyses stratified by cancer site (colon vs. rectum), alcohol and smoking habits did not show any significant interaction with polymorphisms (data not shown).
We performed analyses stratified by age using the median age as a cutoff. In the German population we found no difference in the genotype-associated cancer risk between the two age strata. For the Czech population we found that only in the older group (age >66) did the homozygous carriers of the variant alleles of the two SNPs show a statistically significant association with cancer risk: OR of 0.39 (95% CI 0.21-0.73; P value = 0.003) for SNP rs27647 and OR of 0.46 (95% CI 0.28-0.78; P value = 0.004) for SNP rs35683.
Finally, we performed an analysis combining the two polymorphisms in order to assess the impact of a multi-locus risk score on cancer risk, but the results did not explain more of the genetic susceptibility to the disease than the two SNPs alone (data not shown).
Discussion
Recent evidence indicates that obesity and related metabolic abnormalities are associated with increased CRC incidence and mortality. Since it has been shown that circulating levels of ghrelin are related to body size, we postulated that polymorphisms that could alter protein expression and/or function may also alter CRC risk.
In this study we investigated the genetic variability of the GHRL and GHSR genes in relation to CRC risk using a tagging approach and selecting 11 SNPs. Using this method we covered all the known genetic variation of the genes, an effort lacking from all previous studies.
In the first phase of the project we typed all 11 tagging SNPs in CRC cases and controls from the Czech Republic, and we found that two SNPs (rs27647, rs35683) were associated with a decreased risk of CRC.
SNP rs27647 is situated in the promoter region of the gene and has been found to be associated with insulin levels and obesity [33], while SNP rs35683 is situated in the first intron of the gene and has been found to be associated with BMI in a Caucasian population [34]. Since BMI, obesity and insulin levels are well-known risk factors for CRC, the two SNPs may indirectly affect cancer risk. We sought to replicate these findings in an independent group and used a German population with a similar sample size. In this second population the findings were not replicated. There are two possible explanations: either the associations observed in the Czech population are false positives, or the differences in the association results are due to differences between the two selected populations. We can confidently exclude a major role of ethnic differences. According to Globocan [35][36][37], the Czech and the German populations have a comparable CRC incidence. Moreover, in a recent study Nelis and colleagues investigated the underlying population stratification in Europe, showing that there were very few, if any, differences in the genetic make-up of Germans and Czechs [38]. Another explanation for the inconsistent findings may be different environmental factors in the two countries. However, dietary habits and food intake are not dramatically different in the two countries (http://faostat.fao.org/site/609/DesktopDefault.aspx?PageID=609). It has to be noted that the German subjects were on average young and had a family history of CRC, whereas the Czech cases were unselected. It may be speculated that genetic predisposition to familial and sporadic cases is due to groups of genetic variants that do not overlap entirely. According to this hypothesis, rs27647 could be more relevant for sporadic cases than for familial ones. In addition, if ghrelin acts indirectly through the effects of increased BMI, German subjects may not have been old enough to have had sufficient exposure to increased BMI to show an increased incidence of cancer. In fact, when we stratified the analysis by age group, the observed association remained significant only in the older patients. Finally, it should not be overlooked that the ORs for rs27647 were similar in the two groups, but the effect did not reach statistical significance in the Germans. This may indicate that the association could be true, and a larger, independent study is needed to confirm or disprove this finding.
Conclusion
In conclusion, we are not able to completely exclude a possible effect of the rs27647 SNP on CRC risk, while we can confidently exclude a major role for the other common SNPs in GHRL and GHSR as CRC risk factors.
"year": 2010,
"sha1": "4c3d5180ea42d65c2d5deb53061fe88ff2e059bf",
"oa_license": "CCBY",
"oa_url": "https://bmcgastroenterol.biomedcentral.com/track/pdf/10.1186/1471-230X-10-112",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "47f6fd0fd47f9bd3c45ad6b3293092bfeaaea814",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Impact of LKB1 status on radiation outcome in patients with stage III non-small-cell lung cancer
Preclinical studies suggest that loss of LKB1 expression renders cancer cells less responsive to radiation, partly through NRF2-mediated upregulation of antioxidant enzymes protecting against radiation-induced DNA damage. Here we investigated the association of an alteration in this pathway with radio-resistance in lung cancer patients. Patients with locally advanced non-small-cell lung cancer (LA-NSCLC) were treated with chemoradiotherapy (CRT) and analyzed for LKB1 expression using semiquantitative immunohistochemistry. Clinical characteristics and expression of LKB1 were analyzed for association with radiotherapy outcomes. We analyzed 74 available tumor specimens from 178 patients. After a median follow-up of 40.7 months, the 2-year cumulative incidence of locoregional recurrence (LRR) in patients with LKB1-low expression was significantly higher than in those with LKB1-high expression (68.8% vs. 31.3%, P = 0.0001). LKB1-low expression was significantly associated with a higher incidence of distant metastases (DM) (P = 0.0008), shorter disease-free survival (DFS) (P = 0.006), and worse overall survival (OS) (P = 0.02) compared to LKB1-high expression. Moreover, patients with LKB1-low expression showed a significantly higher 2-year cumulative incidence of LRR (77.6% vs. 21%; P = 0.02), higher DM recurrence (P = 0.002), and shorter OS (P < 0.0001) compared with the EGFR-mutant group. For all patients with LKB1-low expression who had LRR, these recurrences occurred within the field of radiation, in contrast to those with LKB1-high expression, who had in-field, marginal, and out-of-field failures. LKB1 expression may serve as a potential biomarker for poor outcomes after radiation in LA-NSCLC patients. Further studies to confirm the association and its application are warranted.
Abbreviations: RT, radiotherapy; STK11, serine/threonine protein kinase 11.

Outcomes of treatment for patients with locally advanced non-small-cell lung cancer (LA-NSCLC) remain poor [1]. Radiotherapy with or without chemotherapy is the mainstay treatment for unresectable LA-NSCLC [2]. However, nearly a third of patients receiving treatment have a locoregional recurrence (LRR) within one year, which is associated with reduced long-term survival [3]. There are limited data on biomarkers that can predict treatment response to radiotherapy. An increased understanding of the mechanism of resistance to radiation can help lead to improved patient selection for this treatment as well as energize novel strategies to improve treatment efficacy.
Reactive oxygen species (ROS) have been shown to play a critical role in cell death caused by ionizing radiation (IR) [4]. Therefore, inadequate removal of ROS when exposed to IR can lead to an increase in cellular oxidative stress, DNA damage, and ultimately cell death. In contrast, increased expression of antioxidant enzymes or the presence of free radical scavengers in tumor cells can lower intracellular ROS levels and confer a radio-resistant state [5]. Alteration of the Kelch-Like ECH-Associated Protein 1 (KEAP1)/Nuclear Factor Erythroid 2-Like 2 (NRF2) pathway is a mechanism of radio-resistance utilized in many cancers, including lung cancer, by virtue of enhancing the expression of ROS scavengers and enzymes in detoxification pathways [6-11]. Altered KEAP1 function interferes with the activation of the NRF2 pathway, leading to decreased expression of cytoprotective enzymes, such as NADPH quinone oxidoreductase 1 (NQO1) [6]. KEAP1/NRF2 mutations have also been shown to be associated with LRR after radiation in patients with NSCLC [7].
Liver kinase B1 (LKB1), also known as serine/threonine protein kinase 11 (STK11), is a tumor-suppressor gene in NSCLC [8,9]. Recent research demonstrated that KRAS/LKB1-mutant NSCLC is highly enriched with either KEAP1 mutations or bi-allelic loss, and expresses higher levels of NRF2-regulated genes [10,11]. Loss of LKB1, in part through NRF2-mediated upregulation of antioxidant enzymes, can protect against ROS-mediated damage and may lead to radio-resistance [17]. Preclinical studies have suggested that loss of LKB1 renders tumor cells less responsive to radiotherapy, but the interplay of these mutations with radio-resistance in NSCLC patients has not been well characterized [18-20].
To fill this research gap, our study investigated the association of LKB1 expression with outcomes of radiotherapy in patients with LA-NSCLC treated with definitive radiotherapy. The correlation between LKB1 and NRF2/NQO1 expression was also examined.
Patient characteristics of the study population
A total of 238 patients with LA-NSCLC were treated between January 1, 2013, and December 31, 2017, at our institution. Of these, 178 patients who underwent definitive chemoradiotherapy (CRT) with curative intent were enrolled (Fig. S1). Baseline characteristics are summarized in Table 1. The median age of the study population at diagnosis was 64 years. The majority of the patients were male (73%) and had a good Eastern Cooperative Oncology Group (ECOG) performance status (PS) of 0 to 1 (89%). Nearly one-third (32%) were current smokers, with 34% reporting as former smokers. Patients were diagnosed with non-squamous cell carcinoma (76%), T3 or T4 stage (79%), and regional lymph node involvement (95%). Eighty-nine percent of patients received concurrent chemotherapy with or without neoadjuvant or consolidation chemotherapy. Most of the patients received radiation doses of 60 Gray (Gy) or more (82%). Of the 138 patients who received concurrent chemoradiation, 103 (74.6%) were able to complete the preplanned chemotherapy schedule. However, among the 18 patients who received sequential chemoradiation, only 2 (11.1%) were able to complete the preplanned chemotherapy schedule. All patients were treated prior to the availability of the anti-programmed cell death-ligand 1 (PD-L1) antibody durvalumab in Thailand, thus no patient received durvalumab. EGFR mutation and ALK status were evaluated in 69 (39%) patients. Of these, EGFR mutations and ALK positivity were observed in 22 (32%) and 3 (4%), respectively.
LKB1 expression and radiotherapy outcome
To investigate the association between LKB1 expression in tumor tissue and radiotherapy outcomes, we next analyzed the clinical parameters in correlation with tumor-tissue expression of LKB1 by IHC on 74 available tumor specimens. The level of LKB1 expression was determined by calculating H-scores, with a median of 50 (range 0-200), as described in the methods section. Baseline characteristics of these 74 patients are summarized in Table 2.
Using ROC analysis, we chose an H-score cutoff value of 17.5 to distinguish high (≥ 17.5) versus low (< 17.5) LKB1 expression, which yielded a sensitivity of 50%, a specificity of 73%, and an AUC of 0.68 (95% CI 0.54-0.81) based on the occurrence of locoregional recurrence (shown in Fig. S2). There were no significant differences in clinicopathologic parameters identified between the LKB1-high and LKB1-low groups (Table 2).
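As an illustration of the cutoff-selection step, the sketch below derives a threshold from the ROC curve using Youden's J; the selection criterion is an assumption (the paper does not state one), and the H-scores and recurrence labels are random placeholders, not study data.

```python
# Minimal sketch: derive an H-score cutoff from a ROC curve (Youden's J).
# H-scores and LRR labels are random placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
h_score = rng.integers(0, 201, 74)        # H-scores on the 0-200 scale
lrr = rng.integers(0, 2, 74)              # 1 = locoregional recurrence

# Low LKB1 is the risk factor, so score the negated H-score against LRR.
fpr, tpr, thr = roc_curve(lrr, -h_score)
youden_j = tpr - fpr                      # sensitivity + specificity - 1
best = thr[np.argmax(youden_j)]
print(f"cutoff H-score = {-best}, AUC = {roc_auc_score(lrr, -h_score):.2f}")
```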
Pattern of recurrence after CRT according to EGFR mutation and LKB1 status
A total of 60 patients developed relapses, including 21 patients with EGFR mutations and 39 patients in the LKB1 group. There were no significant differences in the number of locoregional failures and distant metastasis failures between the EGFR-mutation and LKB1 groups (Table 4). However, patients with LKB1-low expression showed a significantly higher cumulative incidence of LRR (1-year: 32.7% vs. 5.9%; 2-year: 77.6% vs. 21%; HR 3.42, 95% CI 0.77-15.14, P = 0.02). Low LKB1 expression was also significantly associated with higher DM recurrence (HR 2.58, 95% CI 1.22-5.44, P = 0.002) and shorter OS (HR 3.69, 95% CI 1.83-7.41, P < 0.0001) compared with the EGFR-mutant group. Moreover, the LKB1-low expression group showed more bone and adrenal gland metastases, but fewer lung metastases, compared with the EGFR-mutant group. No significant differences were observed in LRR and DM failures between the LKB1-high expression and EGFR-mutant groups.
LKB1 expression and pattern of locoregional recurrence
To further investigate the association between LKB1 expression and radio-resistance, we performed additional analyses of the pattern of radiation failure. Among the 25 patients with LRR, 19 patients had in-field failures, 4 patients had marginal failures and 2 patients had out-of-field failures. We further classified the in-field recurrences into two groups: central high-dose (type A) and peripheral high-dose (type B). These subgroups incorporated both the location of the centroids and dosimetric criteria for the rGTV. There were 16 patients with type A failures, 2 patients with type B failures, and 1 patient with two separate type A and type B lesions. All patients with LKB1-low expression had in-field failures, with 10 type A and 2 type B failures. Among patients with LKB1-high expression, the distribution of failure types included 7 in-field (six type A and one with both type A and B), 4 marginal and 2 out-of-field (P = 0.03) (shown in Fig. 3). These findings support the influence of LKB1 expression on radiation therapy outcomes, suggesting that biological rather than technical issues underlay the majority of LRR in the LKB1-low group.
Associations between LKB1 and NRF2 expressions and its downstream target gene, NQO1
Additional IHC analyses were completed to assess whether altered expression of LKB1 was associated with the expression of the downstream targets NRF2 and NQO1. NRF2 and NQO1 expression levels were calculated with median H-scores of 50 (range 0-300) and 45 (range 0-300), respectively. We detected an inverse correlation between LKB1 expression and both NRF2 expression (r = −0.445, P < 0.001) and NQO1 expression (r = −0.302, P = 0.03) in the non-squamous NSCLC group. This association was not detected in patients with the squamous cell carcinoma subtype (shown in Fig. S3, S4).
Discussion
In this study, we analyzed the outcomes of 178 LA-NSCLC patients treated with chemoradiation to determine the association with radiotherapy outcomes. A total of 67.4% of patients had disease recurrence, with LRR found in 41% of those patients (49/120) and a median DFS of 11 months. These outcomes are similar to previous reports of LA-NSCLC patients receiving definitive chemoradiation [1,3]. We found a significant association between low LKB1 expression and worse outcomes: higher LRR, higher DM, and shorter DFS and OS. Low LKB1 expression was associated with a five times higher cumulative incidence of LRR. These findings support the role of LKB1 as a biomarker for LA-NSCLC treated with chemoradiation.
One of the most important factors influencing outcomes of LA-NSCLC patients treated with chemoradiation is the treatment intensity. Our study showed comparable radiation treatment delivery, chemotherapy regimens used, and follow-up time in both LKB1 expression groups, which supports that the differential outcomes are likely associated with the differential LKB1 expression rather than treatment discrepancies. Our study also showed that all patients with LKB1-low expression who developed LRR had only in-field failures, in contrast to the LKB1-high group, which had both in-field and out-of-field failures. In addition, more type A in-field failures were observed in the LKB1-low expression group. These findings support the idea that radiation resistance, rather than technical issues such as the planning and delivery of the radiation, was the cause of radiation ineffectiveness in the LKB1-low expression group. We also demonstrated that the association between decreased LKB1 expression and increased expression of NRF2, as well as its downstream target gene NQO1, might be the mechanism underlying the radio-resistance induced by LKB1 alteration. Finally, we found differential expression of LKB1 depending on histological subtype. The integrity of the LKB1/NRF2/NQO1 pathway was intact in non-squamous NSCLC, whereas this association was not observed in squamous NSCLC. We recommend further study to determine whether this pathway may be dependent on histological subtype. Previous studies have shown that LKB1-deficient tumors are highly enriched with either KEAP1 mutations or bi-allelic loss, and express higher levels of NRF2-regulated genes as a compensatory mechanism to maintain redox homeostasis during oxidative stress [10,12]. More broadly, the KEAP1/NRF2 pathway has been found to be one of the mechanisms of radio-resistance in NSCLC [7,13-17] and serves as a predictive biomarker for LRR after RT in patients with localized NSCLC [7,18]. Recent research demonstrated that LKB1-deficient tumors displayed a radio-resistant phenotype that is particularly dependent on the KEAP1/NRF2 pathway, and that modulation of LKB1 expression modified the sensitivity to radiation [19]. This supported our clinical observations and provided evidence for a causative role of LKB1 in modulating the response to radiation.
It is noteworthy that our study analyzed results prior to the routine use of the PD-L1 inhibitor durvalumab for maintenance therapy after chemoradiotherapy, based on the phase III PACIFIC study [20,21]. Our radiation outcomes are thus not confounded by the use of durvalumab, given that LKB1/STK11 alterations may mediate resistance to PD-1/PD-L1 blockade [22]. Recent studies have suggested that both LKB1 mutations and the loss of LKB1 reveal an immunologically inert phenotype characterized by a markedly suppressed immune microenvironment within the tumor [10,23]. The tumor microenvironment plays a pivotal role in determining the response to radiation [24]. Therefore, an inert or "cold" tumor immune microenvironment might be contributing to radio-resistance. Further studies are needed to elucidate the role of tumor microenvironment-mediated radio-resistance mechanisms in LKB1-deficient tumors.
The primary limitation of our study is the use of retrospective data, which can cause selection bias and often contains data-inconsistency problems. In addition, we note that all patients were treated at a single institution, which can increase data and clinical consistency but may be specific to treatment implementation at that institution. The clinical data and pathologic samples from patients treated with radiation according to clinical practice might affect the outcome of treatment, at least partially due to clinical selection. Furthermore, our cohort did not undergo tumor genotyping; hence, we do not know whether other genetic alterations are associated with radio-resistance. Moreover, the variety of genomic alterations in LKB1/STK11 and the complexities of intratumoral heterogeneity make it difficult to interpret results. Validation in other cohorts is needed to delineate the threshold of LKB1 expression that best identifies patients at increased risk of poor radiation outcomes. The complexity of LKB1 loss, which can occur through genomic and non-genomic mechanisms, can be captured by quantitative IHC for LKB1 expression and has been validated in previous studies [22,25]. Thus, the evaluation of LKB1 expression by IHC may further enhance the predictive utility of this alteration. Assessment of LKB1 expression is a simple and cost-effective method that can be applied to clinical NSCLC specimens. Additional study is required to assess whether LKB1 expression can be translated into clinical practice for patients with LA-NSCLC who might be less likely to respond to radiation and more likely to suffer poor outcomes. Finally, differentiating between recurrence and post-treatment changes can be challenging, particularly after radiation therapy. The use of PET-CT can facilitate this process. However, it is crucial to note that during the acute/subacute post-treatment period, FDG avidity may also arise from inflammation. While PET/CT was not routinely utilized in most patients for radiotherapy planning and subsequent follow-up after the completion of radiation in our study, we took measures to further mitigate potential bias. The radiation oncologist and diagnostic radiologist responsible for determining local recurrence and identifying the type of local recurrence in our study were blinded to the results of the LKB1 status.
In summary, our study suggested that LKB1 expression may be a potential predictive marker for identifying patients with LA-NSCLC who are at risk of developing recurrence and have a poor prognosis. Further validation of these findings is warranted.
Study population
We retrospectively selected LA-NSCLC patients who received treatment at the King Chulalongkorn Memorial Hospital (KCMH) from January 2013 to December 2017. The main inclusion criteria were a histologically confirmed diagnosis of NSCLC, stage III according to the 7th edition of the AJCC TNM staging system, in patients who underwent definitive chemoradiotherapy (CRT) with curative intent. Archival tumor tissues were retrieved and the levels of LKB1 and NRF2/NQO1 expression were determined using immunohistochemical staining (IHC). The study was carried out in accordance with the Declaration of Helsinki. The study was approved by the Institutional Review Board (IRB) of the Faculty of Medicine, Chulalongkorn University (No. 268/61). Written informed consent from individual study participants was waived by the ethics committee/IRB of the Faculty of Medicine, Chulalongkorn University, in line with its policy for retrospective studies. Permission to conduct the study was approved by the director of the hospital.
Analysis of recurrences
Recurrence images were registered with the CT simulation images that were used for radiation treatment planning. We contoured the recurrent gross tumor volume (rGTV) and the centroid (center of the rGTV) on the recurrence images and used the planning target volume (PTV) contours from the original treatment plan for further geographic analysis. Eclipse software (version 11.0.31, Varian Medical Systems, Palo Alto, USA) was used for both registration and contouring. LRR was defined as CT evidence of progressive soft-tissue abnormalities or new lesions in the same lobe and/or any intrathoracic lymph node recurrence. Based on geometric data, LRR was classified as an in-field failure (centroid originating inside the PTV), a marginal failure (centroid originating outside the PTV and recurrent lesion within 1 cm in any direction around the PTV), or an out-of-field failure (centroid originating outside the PTV and recurrent lesion located beyond 1 cm around the PTV). In addition, we further classified in-field recurrences into two groups using both geometric and dosimetric data [25]. Type A (central high-dose) recurrences were defined as those in which the dose to 95% of the rGTV (rGTVD95%) was ≥ 95% of the dose prescribed to the PTV. Type B (peripheral high-dose) recurrences were defined as those in which rGTVD95% was < 95% of the dose prescribed to the PTV. Distant metastasis (DM) recurrence was defined as any disease recurrence in any other location. Typical representative disease failure patterns are shown in Figure S5.
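These rules amount to a small decision procedure; a minimal sketch follows, with placeholder inputs standing in for the geometric and dosimetric quantities that, in the actual analysis, came from the planning system (Eclipse).

```python
# Minimal sketch of the failure-classification rules described above.
# Inputs are placeholders for quantities exported from the planning system.

def classify_lrr(centroid_in_ptv, distance_to_ptv_cm,
                 rgtv_d95_gy, prescribed_dose_gy):
    """Classify one locoregional recurrence.

    centroid_in_ptv    -- True if the rGTV centroid lies inside the PTV
    distance_to_ptv_cm -- shortest distance from recurrent lesion to the PTV
    rgtv_d95_gy        -- dose covering 95% of the recurrent GTV (Gy)
    prescribed_dose_gy -- dose prescribed to the PTV (Gy)
    """
    if centroid_in_ptv:
        if rgtv_d95_gy >= 0.95 * prescribed_dose_gy:
            return "in-field, type A (central high dose)"
        return "in-field, type B (peripheral high dose)"
    if distance_to_ptv_cm <= 1.0:
        return "marginal failure"
    return "out-of-field failure"

# Example: centroid inside the PTV, rGTV D95% = 59 Gy of a 60 Gy prescription.
print(classify_lrr(True, 0.0, 59.0, 60.0))   # -> in-field, type A
```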
Statistical analysis
Receiver operating characteristic (ROC) curve analysis was employed to determine the optimal cutoff value of LKB1 expression. The relationship between LKB1 expression and clinicopathological characteristics was assessed by Pearson's chi-square test or Fisher's exact test, as appropriate. Correlations between LKB1 and NRF2/NQO1 expression were analyzed using Spearman's correlations. Outcomes were analyzed in terms of LRR, DM, disease-free survival (DFS), and overall survival (OS). Events (recurrence or death) were calculated from the date of diagnosis. Patients who did not develop the event by the end of the study were censored at the date of the last observation, which was defined as September 22, 2019. Univariate and multivariate analyses were performed using the Cox model, and hazard ratios (HRs) and 95% confidence intervals (95% CIs) were calculated. P-values < 0.05 were considered statistically significant. All statistical analyses were conducted using GraphPad Prism version 8.00 for Windows (GraphPad Software, La Jolla, California, USA) and SPSS 23.0 (SPSS Inc, Chicago, Illinois, USA).
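For illustration, the Cox step can be reproduced outside SPSS, for example with the lifelines package as sketched below; the data frame is synthetic and the covariates are placeholders, not the study cohort.

```python
# Minimal sketch of a Cox proportional-hazards fit with lifelines.
# The data frame is synthetic, not the study cohort.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "months": rng.exponential(24, 74),    # time from diagnosis to event
    "event": rng.integers(0, 2, 74),      # 1 = recurrence/death, 0 = censored
    "lkb1_low": rng.integers(0, 2, 74),   # 1 = H-score < 17.5
    "age": rng.normal(64, 9, 74),
})
cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
print(cph.summary[["exp(coef)",
                   "exp(coef) lower 95%",
                   "exp(coef) upper 95%"]])   # HRs with 95% CIs
```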
Figure 1. LKB1 expression status correlates with radiation outcome. (A) Cumulative incidence of locoregional recurrence (LRR): patients with low LKB1 expression had a significantly higher cumulative incidence of LRR than those with high LKB1 expression (P = 0.0001). (B) Cumulative incidence of distant metastatic (DM) recurrence: LKB1-low patients also had higher DM recurrence than LKB1-high patients (P = 0.0008). (C) Disease-free survival (DFS): DFS was significantly worse in LKB1-low patients than in LKB1-high patients (P = 0.006). (D) Overall survival (OS): OS was significantly worse in LKB1-low patients than in LKB1-high patients (P = 0.02).
Figure 2. Cumulative incidence of LRR and DM recurrence according to LKB1 expression status and histologic subtype. (A) In the non-squamous cell carcinoma subtype, LKB1-low patients had a significantly higher cumulative incidence of LRR than LKB1-high patients (P = 0.0001); (B) this difference was not detected in the squamous cell carcinoma subtype. (C) In the non-squamous cell carcinoma subtype, LKB1-low patients had a significantly higher cumulative incidence of DM recurrence than LKB1-high patients (P < 0.0001); (D) this difference was not detected in the squamous cell carcinoma subtype.
Figure 3. Pattern of locoregional disease according to LKB1 expression status. All LKB1-low patients had in-field failures, with 10 type A and 2 type B failures. Among LKB1-high patients, the distribution of failure types included 7 in-field (six type A and one with both type A and B), 4 marginal and 2 out-of-field (P = 0.03).
Table 2. Clinical characteristics of patients with locally advanced NSCLC according to LKB1 expression status. CRT: chemoradiotherapy, ECOG PS: Eastern Cooperative Oncology Group Performance Status, IMRT: intensity-modulated radiotherapy, IQR: interquartile range, LKB1: Liver kinase B1, NSCLC: non-small cell lung cancer, NOS: not otherwise specified, SABR: stereotactic ablative radiotherapy, VMAT: volumetric modulated arc therapy, 3D-CRT: three-dimensional conformal radiotherapy. † ECOG PS denotes the Eastern Cooperative Oncology Group (ECOG) scale of performance status (PS) (a grade of 0 indicates asymptomatic; 1, restricted in strenuous activity but ambulatory; and 2, ambulatory and capable of all self-care but unable to carry out any work activities). ‡ Clinical staging was performed according to the seventh edition of the AJCC TNM staging system.
Table 3. Univariate and multivariate analysis of DFS and OS on potential risk factors among 74 stage III NSCLC patients. † The category after the slash (/) was set as the reference category. CI: confidence interval, DFS: disease-free survival, ECOG PS: Eastern Cooperative Oncology Group performance status, HR: hazard ratio, OS: overall survival, SQ: squamous cell carcinoma. Significant values are in bold.
Table 4. Pattern of failure according to EGFR mutation and LKB1 status.
The Challenges of Promoting Social Inclusion through Sport: The Experience of a Sport-Based Initiative in Italy
Social inclusion is broadly recognized as a priority to accomplish at an international level. While the influence of sport on this social mission has been largely debated, the literature lacks contributions capturing the challenges of sport when promoting social inclusion. Based in case study methodology, the investigation explores the impact of a multi-stakeholder sport initiative developing social inclusion for socially vulnerable youth and the related challenges of the intervention through in-depth interviews with diverse program stakeholders. The main findings indicated the emergence of four challenges: limited transferability of program outcomes for youth in living conditions of severe vulnerability; drop-out of youth in living conditions of severe vulnerability; limited sustainability of program social workers; and lack of sports club management skills. The work highlights some limits of sport-based programs for social inclusion and discusses some implications for practice to maximize the societal impact of such interventions.
Sport for Promoting Social Inclusion of Vulnerable Youth: An Overview
Currently, social exclusion is fully recognized at an international level as a social concern to be addressed. The 16th Sustainable Development Goal of the United Nations lists the promotion of inclusive societies as a fundamental priority to accomplish by 2030. In recent years, the contribution of sport toward the achievement of this social goal has been largely argued, with several academics underlining that sport can be used as a vehicle to address certain aspects of social inclusion [1][2][3][4][5][6]. If, on the one hand, there is substantial agreement on the positive impact of sport at a societal level, academics argue that more research is needed in order to understand the conditions in which sport may act on social inclusion [5][6][7][8][9][10][11]. The investigation of such conditions is still in its infancy [12][13][14][15] and would provide a strategic and agency-focused approach for further planning of sport-based interventions [16][17][18][19][20][21][22][23].
When talking about socially vulnerable youth, literature generally refers to individuals between the age of 11 and 24 years of age who are subjected daily to multifaceted stressors (e.g., social, emotional, and economic) that create conditions for social maladjustment [24,25]. These conditions include: (i) living in areas of low socioeconomic status and poor housing quality; (ii) receiving residential care or nonresidential counseling [26]; (iii) poor family management; and (iv) peers engaged in deviant behaviors [27,28].
In line with what has been previously exposed, the literature on sport and socially vulnerable youth has largely confirmed that sport may develop positive social outcomes that can be associated with social inclusion, such as life skills, positive psychological capital, active citizenship, pro-social behaviors, and employment. Furthermore, in line with the main indications provided by international academics, the majority of studies have focused on understanding the conditions of sport that promote the achievement of social inclusion outcomes [17,18,22,23,26,29,30].
While, on the one hand, researchers are aware that sport can act only on certain aspects of inclusion [4], on the other hand, there is a lack of scientific reflections involving those who work in the field on the challenges of sport in promoting inclusion. This knowledge gap is particularly felt within the Italian context, where there are still few studies concerning sport for development and peace [22]. In fact, despite the many sports projects dedicated to promoting social inclusion, there is still limited research dedicated to the topic [23].
In order to overcome this gap and provide a nuanced image of the potential of sport, the current research seeks to explore the challenges that may occur when using sport as a tool to promote social inclusion. In particular, the research focuses on the experience of three Italian sports clubs that were involved in a sport and social inclusion project for socially vulnerable children.
Materials and Methods
The work is based in case study methodology [39] and focuses on three grassroots sports clubs located in the province of Milan (in the north of Italy) that took part in a sport-based program aiming to promote social inclusion among socially vulnerable young people.
A case study is an intensive investigation of a single case where the purpose of that study is, at least in part, to shed light on a larger class of cases (a population) [40]. This case aims to provide insights about the larger class of sport-based programs promoting social inclusion for socially vulnerable youth.
Although the current study involves three sports clubs, they are treated as a single case because the three realities shared common characteristics (e.g., same professionals involved, common project manager, common donor, common selection criteria for participants) that led us to consider them as one comprehensive case study. The manuscript analyzes the experience of these sports clubs through in-depth interviews with the aim of exploring the challenges that may occur in a sport-based program when using sport as a tool to promote social inclusion for socially vulnerable youth. The case, thus, is atheoretical [41], since it aims to explore these challenges without previous theoretical hypotheses grounding the study. In doing so, the current work seeks to answer the following research question: what are the challenges of developing social inclusion for socially vulnerable youth through sport-based programs?
The Case under Analysis
The sport-based program analyzed in this case study provided two weekly soccer training sessions (90 min each) and a total of 10 h of workshops for the development of transversal skills, which participants were able to select from a list of options (pastry, cooking, mosaic, and music) based on their personal interests.
Each club set up an ad hoc soccer team for the participants to join, recruited through informal agreements with middle schools. These teams had access to specially qualified staff: each team had a sport coach (offering regular training), an educator (co-managing the training and providing regular feedback to the youth), and a psychologist (in a supervisory and advisory capacity).
During the three years of program implementation, the youngsters attended an average of 200 sessions of soccer training and 20 sessions of transversal skills development.
In these clubs (whose anonymity has been preserved), a total of 49 young participants took part in the soccer activities. All participants were aged between 11 and 15 years (with an average of 13 years); most (75%) were Italians of several generations, while a minority (25%) were second-generation Italians. Inclusion criteria for program participants were: poor school attendance, poor academic performance, lack of family interaction with the school, a tendency to break school rules, and deviant behavior or a tendency toward relational isolation (as reported by the schools). Thus, the young people recruited for the program experienced various psychological and/or social problems (from high to moderate vulnerability) that prevented them from taking full advantage of their relationships with significant others and their peers in the school environment, a condition that contributed to their level of vulnerability, as reported in Table 1. The educators and psychologists involved in each soccer team assisted in the recruitment of participants. They also helped with stakeholder liaison, a process aimed at connecting the club with schools and social/health service providers (thus forging strong, multiple-stakeholder relationships). Schools also engaged in networking through the program. Consistent with the principle of interconnectivity, staff members (educators and psychologists) implemented activities for networking with civic and government agencies. This included meetings with local municipalities, youth associations, and educational centers (e.g., day centers). Staff held monthly meetings with target schools during the final two years of program implementation. In Club C, five participants facing high risks (e.g., parents in prison, neuropsychological disorders, etc.) were referred for specialized assistance; this was monitored by the program staff.
Consistent with the principles of positive youth development, the program aimed to develop social inclusion through diverse strategies:
- creation of a positive climate by the sport coach, leading to the development of individuals' sense of acceptance and perception of inclusion;
- positive communication and positive feedback by the sport coach, strengthening participants' self-efficacy in sports;
- individual meetings between the psychologists and participants for overcoming individual challenges during sport training;
- collaboration with the youths' school teachers.
Procedure
Participants were involved in the research process at the end of the three-year project with the intention of reflecting back on the strengths and weaknesses of the activities.
Each participant was personally contacted by the researcher in order to explain the purposes and procedure of the investigation. Participants who agreed to be interviewed were asked to sign a written consent form.
The authors declare that the procedure met the international norms and ethical principles established by the European Union Regulation 2016/679 (EU, 2016) and the Declaration of Helsinki (World Medical Association, 1964) and related revisions, with written informed consent obtained from each participant.
Data Analysis and Tools
As previously reported, the aim of the current investigation was to explore the challenges that may occur in a sport-based program when using sport as a tool to promote social inclusion for socially vulnerable youth. Within the interviews, this specific research aim was investigated by asking diverse types of questions, including a number related to the specific inclusion impact of the program. This choice was dictated by the fact that it would have been difficult to address the challenges and difficulties of the program during the interview without an in-depth understanding of the social inclusion impact of the activities. Due to this, the interviews covered the following areas of investigation: (i) impact of the program in terms of social inclusion (e.g., how did the program affect the social inclusion of the youth involved? What features of the program contribute to addressing social inclusion through sport?); (ii) challenges when promoting social inclusion through sport for socially vulnerable youth (What challenges did you face during the three years of program implementation? What features of the program were most challenging?). The interviews lasted about 45 min to an hour; they were conducted at the sport clubs (with the exception of the teachers, who were interviewed at school). All material was analyzed by an inductive, bottom-up content analysis approach, where themes emerged directly from the data [42]. The trustworthiness and validity of the data analysis were guaranteed by a process of data triangulation [43]. We triangulated themes that emerged from different sources of interviews (sport workers, social workers, teachers, the project manager, and donors). Communalities, divergences, and connections across themes were pointed out and discussed by two independent researchers (CD-CC), who read the material and compared the main themes emerging from each source of data. Divergences were discussed until agreement was achieved.
Sampling
Interviewees included soccer club coaches (4) and administrators (4), educators (3) and psychologists (3), donor foundation managers (2), the project manager (1), and teachers (4), for a total of 21 interviews, as reported in Table 2. The mean age of participants was 40 years. Interviewees were selected according to the logic of purposive sampling.
Findings
The results are summed up in Table 3 and highlight the impact of the project in terms of social inclusion and the related challenges reported by the respondents.

Table 3. Results.

Program Impact: improved youth self-efficacy; development of youth social capital; development of social capital at the community level serving the social inclusion of youth.
Challenges: limited transferability of program outcomes for youth in living conditions of severe vulnerability; drop-out of youth in living conditions of severe vulnerability; limited sustainability of program social workers; lack of sports club management skills.

3.1. Program Impact
3.1.1. Improved Youth Self-Efficacy

Interviews highlighted that the program provided insecure young people with a positive environment in which to gain more confidence in their abilities, especially regarding their capacity to relate with peers. Improved self-efficacy was also associated with a general improvement at a physical level. For instance, program workers in sport club C reported the case of a boy who had great difficulties in interacting with his peers and properly using his corporeal skills during soccer training. At the end of the project, both social workers highlighted a change at an individual level related to improved self-efficacy and confidence during training, and teachers pointed out improved social interactions at school, as described by these interviewees:

"He was clumsy, sometimes almost catatonic, in front of the ball: he did not know what to do and the project helped him in terms of self-efficacy." Educator (sport club C)

"When he arrived here, he found it hard to look you in the eye, he hardly talked. If you looked at him, he became intimidated and stopped. He developed effective skills from a motor point of view, he had the chance to meet youths like him. He found a number of conditions that made him say "wow, I'm not alone in the world!"" Psychologist (sport club C)

"He had great difficulties in relating to the rest of the class, and therefore, surely it can be seen that he improved on both sides [the teacher is referring to school and sport settings]. He is more outgoing, more involved; this is a boy who could barely say his name and surname. Now he is more extroverted; he participates more in the didactic activities, and this is also underlined by the educator and psychologist of the program. Thus, we can say that the program was a full success for him." Teacher (sport club C)
Development of Youth Social Capital
The program was also a resource for establishing positive relationships for the young people who were socially isolated at baseline. For instance, in sport club B, the middle school recruited a migrant youth who had recently arrived in Italy. The positive environment created by program workers supported him in learning Italian and engaging in relationships with Italian peers, as described in these interviews: "He learned Italian with us and found new friends; he established a trusting relationship with A; they even meet outside the sport context. Thus, his case is an example of fighting social isolation." Educator (sport club B) "His participation in the project aimed to bring him closer to new relationships, and to the language also. And here, too, we had positive results." Teacher (sport club B)
Development of Social Capital at the Community Level Serving the Social Inclusion of Youth
One of the innovative features of the program was the implementation of meetings between the program's staff and the middle-school teachers who recruited the participants.
The purpose of these meetings was to discuss the educational progress of participants in the sports program, as well as their educational path at school. The results highlight that such collaboration resulted in improved educational work with the young people.
"I liked the way we worked together [the teacher is talking about the meeting with social workers] because we really worked together, I mean, we had significant exchanges of observation and understanding about the young people that could lead teachers to reflect on themselves and to observe something new in their students." Teacher (sport club B) "If there was something wrong with the participants and they were not aware of the reasons for it [the teacher is referring to a program educator and psychologist], we informed them by reporting why, in that situation, participants were a little nervous. We told them everything." Teacher (sport club C) Meetings between program social workers and teachers each month provided teachers with the opportunity to know the participants from another perspective (by learning how they were behaving within the sport clubs).
In contrast, program educators and psychologists could better understand the youths' behaviors and attitudes by comparing them with what teachers reported during meetings. Thus, the sharing of information was useful for attaining a deeper and more integrated understanding of the young people's response to the educational work of both the teachers and the program's social workers.
In some cases, this exchange allowed teachers to change their stereotyped view of particular vulnerable children, as described by this interviewee: "I knew from program educator that this girl did a very good job in creating the plot of the video. She mainly wrote down the plot, and she even took the job home to finish it, to improve it. It has been a real success." Teacher (sport club C) Even if sport workers did not directly participate in meetings with the schools, in some cases, the information shared in the meetings was disseminated among the sport club professionals. For instance, a sport administrator claimed that connecting with schools was quite effective as it enhanced his awareness and consciousness of the young people's needs outside the sport environment: "The constant dialogue with the schools this year was very helpful because they gave us a different perspective. We used to see the guys only from an athletic point of view, and teachers showed us a different vision: kids don't do just sport, they experience fatigue due to studying, and they have difficulties sitting for five hours at school, which made us realize that we did not consider a number of facts before." Sport administrator (sport club C) Furthermore, such meetings guaranteed mutual help among the network of adults. Teachers started requesting the support of program staff if youngsters were in trouble. For instance, this teacher explained that she asked for the program psychologist's help in coping with a situation she could not manage on her own: "A program participant had an argument with her speech therapist and decided to stop her therapy with her. It was a very delicate situation, so I talked with the girl, and at the end of our chat, I told her I was going to call the psychologist of the sport program. I thought he was more skilled than me in coping with this situation. I told her he could help her better than me." Teacher (sport club B) In sport club C, teachers even created linkages between the program psychologist and educators and public service providers who assisted youth who needed special help (for instance, psychological support): "I will give you an example, two program participants were suspended from school activities for three days because they assaulted a guy. Then their teacher called me and said, "Can I give your phone number to their psychologist?" I answered, "yes, of course, give her my number." The psychologist calls me and explains the situation to me and asks me for further information about the girl, and then she says to me: "Listen, tomorrow, we have a meeting with the social service providers taking care of the girl; do you want to come to the meeting?" I couldn't go to the meeting, but I told her to keep in touch in order to better understand how to manage the situation. How was I helpful in this case? I gave her my point of view on the relationship of the mother with the daughter, the relationship of R. with her mother and the relationship with the school." Psychologist (sport club C) One teacher also claimed that the connection between the sporting context and schools was an important added value for the common good. In fact, the program started to be viewed as a valuable resource for the educational community, as reported by these participants: "Because of the project, we have now strengthened the ties, maybe we have created a triad with the project because we know that there is also this project that can intervene in support of this girl or that boy in need." 
Teacher (sport club B)

"Because of the project, we are more present as an educational body that can provide support for the school. This was not the case before: the kids went to school and then they practiced sport, but we had never thought about talking to each other." Program Educator (sport club C)

Such findings suggest that the collaboration between program workers, who observed the youths inside the sport environment, and teachers, who were aware of the youths' needs and challenges at school, permitted the formation of a broader and less stigmatized image of the young people, which in turn served the understanding of their progress at an educational level. Furthermore, such collaboration provided the young people with increased emotional and social resources useful in overcoming their vulnerabilities.
Limited Transferability of Program Outcomes for Youth in Living Conditions of Severe Vulnerability
Although the data show a positive impact of the program in terms of self-efficacy and social capital development, teachers and educators reported that the project did not have any impact at the school level for the most vulnerable youth.
"I would say [the teacher is talking about the main results of the project] acceptance and understanding of the social norms. In some cases, they have not been achieved-for some participants we cannot say that they [teacher is talking about program outcomes] have been completely achieved; especially with regard to respecting rules." Teacher (sport club B) For instance, in sport club C during the third year of implementation, the young people were involved in a video-making workshop. They were asked to structure and act in a plot for a video. The plot of the video was centered on a boy who was a victim of bullying who was gradually able to develop skills and ultimately emancipate himself through sport.
One girl, who was a bully at school, had the chance to play a positive leadership role in the group during the making of the video as she devised the plot. Although she became more aware of her behaviors, in the end she did not change her actions at school, because her family situation was particularly risky from a psycho-social point of view, as explained by the program psychologist.
"She is very seductive, she is very manipulative, she is very borderline, she has a series of problems [the psychologist is referring to the fact that the girl has an absent father] that also make her think a bit. The project is giving her a regulatory container that she needs that is serving her a lot, because she finds her own dimension. It is very useful to her in individual terms to have a space and in regulatory terms to learn to confine her exuberance, in terms of role models because she doesn't have any, and therefore, she is growing a lot from this point of view. At the moment, this does not have a huge impact on the school because the problem is much more complex, as also reported by her teacher." Psychologist (sport club C) It emerges that, when the social vulnerability of youth is very high, the benefits of sport are hardly transferred at the school level.
The Challenge of Youth Drop-Out in Living Conditions of Severe Vulnerability
Participants reported challenges in terms of youth recruitment. During the first year of the program, school leaders and teachers referred a substantial percentage of highly vulnerable youth to program activities in sport club C, which impacted the work of the coaches and social workers, who often had to deal with episodes of violence and fighting among the young people. Interviewees reported several episodes in which the young people did not respect the rules during sport training.
"They [the educator is referring to program participants] are quite agitated. During the first training, it was tiring. They don't respect the rules, they do not respect the role, even with me they have exaggerated several times. Last time, they started kicking balls at the sport coach who was preparing the football goal and insulted him." Educator (sport club C) Thus, the youngsters' behaviors impacted the delivery of the activities and the rate of participant dropout. Nevertheless, the soccer team was comprised primarily of highly vulnerable adolescents, and as such, they were at risk of reiterating mechanisms of exclusion and stigmatization. These features were highly challenging; they led to a second round of recruitment in order to recruit more widely during the second year of the program. As explained by the following donor, it was relevant to better define the target population and avoid recruiting mostly highly vulnerable youngsters: "During the second, year we focused better on our own target. The recruitment of the boys was done in a slightly more structured way, that is, we talked with schools about dropping out in a broad sense, that is, we asked them not to only send us people who dropped out of school. They started sending us who was at risk, students who they saw as possibly at risk." Donor foundation manager
Limited Sustainability of Program Social Workers
In sport clubs B and C, educators were crucial actors that facilitated the clubs' connections with the middle-school level. Interviewees reported that they were responsible for connecting with the schools and understanding what was happening there, while coaches were asked to work on the ground with youth.
Data showed that the role assumed by the social workers hindered the sport clubs' leadership in networking with the schools and in acting as active interlocutors with teachers. Indeed, teachers reported being connected to the program educators and psychologists rather than to the sport clubs.
"Absolutely not, I haven't seen them [interviewer is talking about the networking of the program and is referring to the relationship with members of the sport club]. I haven't relationships with the soccer clubs, no, we are not in a network with them." Teacher sport club C "The main interlocutors for the program were the program educator and psychologist." Teacher (sport club B) In terms of sustainability, this aspect was critical. Once the funding of the foundation ends, the network built by social workers is at risk for collapse: "Building this kind of network [interviewer is referring to the connection with the middle schools] requires time, it requires effort, it requires availability. I come from the other side of Milan so probably tomorrow, I won't go back there once there is no foundation. To do this kind of work you need resources; if that piece is missing there [interviewer is referring to funding], it is difficult to maintain such a structure." Psychologist (sport club C) As explained by the project manager, the challenge at the end of the three years of implementation was mainly related to the autonomy of football schools in working with the wider community. The challenge of transferring the skills and know-how of social workers to sports clubs, thus, emerges.
"The challenge is to make sure that this project is truly integrated within the territory and that it can then walk a little with its own legs. Because of the skills acquired by the soccer clubs; that still remains as a challenge, but I do not see it as a difficulty, that is, the fact that this territory is always constantly to be involved, the fact that there are potential issues and collaborations that we have not yet developed." Project Manager One sport administrator pointed out the need of psychologists and educators in the sport club to work with coaches and parents in facing the challenges of youth during their sport path. Such professionals, however, are highly expensive and require specific funding that is not easy to find for a grassroots sport society.
"A psychologist attends our camps during training, during the games it is very important, (...) We had a psychologist, and they also need to be correctly paid, so I negotiated with them to reduce costs a little bit and then when I asked the sponsors to contribute. Many times, I didn't succeed [he means that he didn't manage to cover the costs of the psychologist]." Sport administrator (Sport Club C) Thus, the necessity for psycho-social figures inside the sport clubs clashes with the theme of economic resources. The role of the foundation is, thus, fundamental for the sustainability of such figures. The challenge in this respect concerns the economic livelihood of this professional figure within the sports environment.
Lack of Sports Club Management Skills
Interviews highlighted the diversity and uniqueness of the participating sport clubs in terms of organizational culture and management. These unique qualities required the implementation of ad hoc interventions adapted to each sports environment. In the initial implementation phase, the different organizational cultures and administrative approaches to sport management could either facilitate or hinder the engagement of the clubs in the program mission and activities.
Interviews in sport club A, for instance, showed how a lack of sport management skills can affect the impact of the program. At the time of the development of the program, sport club A had over 170 athletes. From a managerial point of view, the club had only two people (the president and the secretary) handling all of the administration, while soccer training was delivered by an average of 30 volunteer coaches. The sport administrator was strongly committed to and engaged with the social aims of the program, because they were aligned with the personal values of inclusion that he practiced in the management of the club.
"When I talk about the boys of my teams, I get excited, I have been a volunteer for 30 years. Many people ask me "why do you do it?" but my "profit" is the fact that they give emotions that they don't even know they give. In every team we have one or two children from difficult contexts, not everyone has the opportunity to buy the sport equipment, sometimes teams have made collections to pay for the shirt or the jacket for those kids who cannot afford it." Sport administrator (sport club A) The philosophy of "unconditional" inclusion, however, caused the sport administrator to host too many youths in sporting activities, even when they were not formally registered on the program. As a consequence, soccer training was attended by different young people every week sent by different stakeholders of the community. Most of the time, there was no communication of these new arrivals with the coach and social workers, who found themselves with new participants at soccer training.
In this regard, there was confusion and a lack of formality in the way the sport administrator managed the access of participants sent by different stakeholders in the community. This informal culture was not conducive to the daily work of the coach, the educator, and the psychologist with the young people. The club's openness toward the larger community, and the strong tension between this openness and the inclusion of marginalized people, was thus not properly managed, as reported by a program educator: "The soccer club has great capacity to welcome but not the ability to manage. Their policy is 'we welcome all the people who come here, who are looking for a place to feel good, play and have fun.'" Educator (sport club A)

The low quality of the management impacted the delivery of soccer training: there was a high turnover of sport coaches in the program activities, and this fostered the athletes' mistrust toward the club. In the end, after one year, the program in this sport club was closed by the funders because of the lack of management skills.
On the contrary, in sport club B, the sport administrators' sensitivity toward pedagogical-educational issues facilitated the fine-tuning of the sports club to the program's objectives: "Sport club B was a paradise, we couldn't believe it. They are well organized but also very connected to a pedagogical-educational logic. They are very sensitive and work within an area and with an approach which is already very close to the purposes of the program. Thus, we have not struggled to be in tune with them." Project Manager

"Sport club B is a positive environment; I mean that they have a favorable approach [the interviewee is talking about the approach toward the program's scope]; inclusion is in their DNA." Psychologist (sport club B)

As the project manager explained, understanding and adapting to the uniqueness of each sporting reality was one of the biggest challenges of the project: "The difficulty has been to discover a diversity also linked to the characteristics of the people who are inside, that is, to discover that there are no common criteria that regulate sport and educational activities within the sport clubs, and this, at the beginning, surprised us a lot. I have to say that we had a slightly different representation; I thought that there were recognizable criteria with respect to the management of the club, times, spaces, people, etc.; instead, I found completely different realities, each made in its own way." Project Manager

At a managerial level, it emerged that the analysis of the characteristics of the sport clubs and of their local communities created powerful conditions for the effective implementation of the program activities. However, such tailored work required a strong investment of time.
"At the beginning of program implementation, we worked to gain knowledge and understanding of the sport clubs, people's knowledge, municipalities, of the context, of the main interlocutors. We spent four to five months thinking about these issues, but this is common for all community projects. You spend the first year doing this job, that is, building things and then you can work well in the following years" Project Manager
Discussion and Conclusions
The current research is original in that it analyzes an Italian case to explore the challenges of sport-based programs in promoting social inclusion [22]. Indeed, despite the many sports projects dedicated to promoting social inclusion in Italy, research dedicated to the topic is still limited [23]. This work highlights four challenges related to the use of sport as a tool for social inclusion that suggest several insights at a theoretical level.
The first challenge is related to the dropout of youth living in conditions of severe vulnerability; the high percentage of vulnerable youth in sport club C challenged the positive implementation of activities and gradually led to participant dropout. This element of the research provides a warning about the impact of sport-based projects on highly vulnerable children. At a theoretical level, as also reported by Jeanes et al. [4], this confirms that sport can act on limited aspects of vulnerability but cannot fully address the condition in a broad sense.
The second challenge is related to the limited transferability of program outcomes for youth living in conditions of severe vulnerability; our study found that, while young people were exposed to several positive benefits of sport, such as increased self-efficacy and acknowledgment of social norms, the intervention failed to effect behavioral changes at a wider social level, for instance at school, for youth living in marked social marginalization. This is in line with other studies in the field [27,32]. Drawing on Bailey's theoretical contribution [5,6], the research shows that, when youth live in situations of extreme marginalization, outcomes on the power dimension are the most difficult to achieve. Indeed, the most vulnerable youth gained little in terms of taking more control over problematic aspects of their lives. At a theoretical level, the data suggest that the impact of sport on social inclusion may vary according to the participants' degree of vulnerability.
The third challenge is the limited sustainability of social workers; the research shows that the collaboration between sport and community actors helped to mitigate youth vulnerabilities by promoting insights and knowledge in the sport context and the wider community. This helped strengthen the relationships among the adults in the network, maximizing the emotional and social support available to the youngsters [25]. Monthly meetings with teachers improved the adults' comprehension of the adolescents' behaviors and attitudes in different environments (school and sporting contexts). These features improved the holistic knowledge of the young people among the adults taking care of their educational path (teachers, social workers, sport workers). Although the work of interconnection between teachers and program educators and psychologists proved valuable for all the reasons mentioned, the challenge of sustaining the social workers, who were the central nodes of this network, was also reported. The presence of social professionals indeed permitted bonds to form at the community level, serving the social inclusion of youth. At a theoretical level, these results confirm that sport may affect the relational dimension of inclusion [5,6]. Young participants were indeed included in a network of collaborative adults who acted to promote their wider well-being. Such figures, however, cannot be taken for granted in amateur sports contexts, which are mainly based on voluntary work, at least in Italy [44]. The interviewees' experience suggests that sports clubs struggle to find specific funds to pay these professionals. Furthermore, the presence of social workers partially obstructed the development of the sports clubs' own capacity to promote social inclusion in collaboration with schools; this results in a paradox related to their presence in the sport context.
The last challenge is related to the lack of sports club management skills; the case study highlighted that some sport contexts may not be adequately equipped to manage social programs and to work in connection with other professions and sectors. Although working in connection with the community is an essential condition for sport-based programs promoting inclusion, sports clubs may not be prepared to manage this process, and combining sport clubs' conventions with social and educational aims can be challenging if not adequately supported. In the cases under analysis, the sport clubs' cultural openness toward wider social scopes facilitated the implementation of the activities. This confirms the assumptions of several systemic and ecological theories holding that the micro sport environment may strongly affect diverse developmental and social outcomes [32]. However, if sport clubs are not properly equipped in terms of management, as in the case of sport club A, the program risks failing in its aims. As pointed out by the project manager, this work constitutes a challenge in terms of time and effort; his field experience highlights the need to work step by step with sports clubs to support them in implementing social inclusion processes. In this sense, the experience of sport club A is emblematic. The unstructured "inclusion" implemented by its management was, in fact, difficult to reconcile with the organizational culture of the foundation, which was much more focused on the codification and standardization of processes. This implies work of support, negotiation, and collaboration between the local actors and the foundation that is not always sustainable in terms of time and resources.
These challenges also suggest a number of implications for practice that should guide the design and implementation of sport-based programs for socially vulnerable youth.
The first is related to the criteria for participant recruitment [7]. Sport-based programs should involve a balanced mix of youth living in diverse degrees of vulnerability in order to avoid drop-out.
Second, with respect to the limited translation of sport outcomes into other life domains, a stronger involvement of the distal ecological systems [32] in supporting the youths' educational paths outside sport should be considered. Several studies have reported the benefits of a multi-stakeholder approach in sport, resulting in an "economy of effort", namely, a better use of resources and competencies for supporting youth [35,45-47]. The multi-stakeholder approach, thus, should be intensified, especially when working with vulnerable youth.
Third, the limited sustainability of social workers within sports clubs shows the paradox and complexity of working with vulnerable youth and remains unsolved. On the one hand, social workers are needed to promote the social inclusion of vulnerable youth. Interviews indeed pointed out that educators and psychologists played the role of connectors with the community, since they formed the relationships with the teachers outside the sports clubs. Schulenkorf [48] speaks of a "change agent" when describing such functions within the community. Change agents may be defined as "anchor-persons" acting as external parties who initiate contacts and facilitate cooperation among groups. According to this author, these figures generally have a crucial impact during the planning and implementation phases of projects, as they support new contacts among groups and facilitate new collaborations.
The research confirms the crucial role of such figures in building social capital at a local level in support of fragile youth. In more detail, the involvement of such professionals as "change agents" in opportunities for interorganizational work (e.g., monthly meetings) with community stakeholders (municipality representatives, teachers, community social workers, youth associations) promoted the discussion of youth vulnerabilities within and beyond the sporting environment and constituted a valuable occasion to plan educational actions to support youth in need. Furthermore, concrete spaces of co-working, such as the monthly meetings, served as hubs for building "bridges" of collaboration among diverse actors [23]. On the other hand, the research also highlights that their presence obstructed the development of the sport system's own capacity to include, and that sports clubs struggle to fund these figures.
Finally, the lack of management skills could be overcome through specific training for sport coaches and administrators. Another possible strategy is the implementation of participatory approaches by donors when planning and designing sport-based interventions [49,50]. This methodology permits the adaptation of actions and strategies to local contexts and needs. Participatory actions could be useful for mapping the contextual characteristics of sports clubs, eventually allowing the creation of ad hoc paths that respect local peculiarities [51,52]. Understanding local specificities at the planning stage could indeed help detect a lack of managerial skills on the part of sports clubs, resulting in specific capacity-building pathways for those clubs that prove more fragile [49-56].
Limitations
First, as a single case study, the findings are not generalizable. Second, the research does not consider the point of view of the participants, which limits the understanding of the program's impact on youth at an individual level and of the challenges from their perspective. In this domain, future research should include young participants' engagement in the evaluation process.
Furthermore, the research cannot draw conclusions about the long-term durability of the reported outcomes. In this domain, future research should encourage follow-up studies in order to understand what happens to the interorganizational network once the funding (or grant) has ended. Is the network of schools and sport clubs still functioning?
Future studies should also compare more cases in order to provide a wider understanding of the phenomena [8].
Author Contributions: C.D. and C.C. collected and analysed data; C.C. wrote the manuscript; C.D., C.C. and C.G. conceptually framed the manuscript; C.G. scientifically supervised the work. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement:
The study was conducted according to the guidelines of the Declaration of Helsinki. Ethical review and approval were waived for this study because the authors' institution did not have an Ethics Committee at the time the data were acquired.
"year": 2021,
"sha1": "a643055f0d72327c0dbdd4ba481df390765e0e10",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4698/11/2/44/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e74b7ae763932a015509befb309da28d525df42b",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Design of ultra-swollen lipidic mesophases for the crystallization of membrane proteins with large extracellular domains
In meso crystallization of membrane proteins from lipidic mesophases is central to protein structural biology but limited to membrane proteins with small extracellular domains (ECDs), comparable to the water channels (3–5 nm) of the mesophase. Here we present a strategy expanding the scope of in meso crystallization to membrane proteins with very large ECDs. We combine monoacylglycerols and phospholipids to design thermodynamically stable ultra-swollen bicontinuous cubic phases of double-gyroid (Ia3d), double-diamond (Pn3m), and double-primitive (Im3m) space groups, with water channels five times larger than traditional lipidic mesophases, and showing re-entrant behavior upon increasing hydration, of sequences Ia3d→Pn3m→Ia3d and Pn3m→Im3m→Pn3m, unknown in lipid self-assembly. We use these mesophases to crystallize membrane proteins with ECDs inaccessible to conventional in meso crystallization, demonstrating the methodology on the Gloeobacter ligand-gated ion channel (GLIC) protein, and show substantial modulation of packing, molecular contacts and activation state of the ensued proteins crystals, illuminating a general strategy in protein structural biology.
Membrane proteins play a critical role in mediating cellular processes, as they reside in the lipid bilayer of biological membranes and they are responsible for communication between the intracellular and extracellular environments. Understanding their function is of paramount importance in designing and developing drugs and pharmaceuticals targeting disorders or diseases caused by their malfunction or change in activity.
Crystallizing membrane proteins is a challenging task, particularly for those with large extracellular domains (ECDs); yet this provides the foundation for membrane protein structural biology. Specifically, crystallization is to date the only viable approach for resolving the complex three-dimensional protein structures and therefore deciphering the underlying mechanisms of their intercellular interactions.
Since its inception in 1996 1 , in meso membrane protein crystallization has become a revolutionary technique, leading to the resolution of ~360 membrane protein structures in the Protein Data Bank (PDB). This alternative approach relies on a lipid, often a monoacylglycerol, which, once combined with water, spontaneously self-assembles into a biomimetic artificial membrane capable of providing a "native-like" mesophase for membrane protein reconstitution and, upon addition of precipitants, crystal nucleation and growth 2,3 . Among the greatest recent successes of the in meso method is the resolution of the structure, and therefore the understanding of the mechanism of action, of the highly relevant class of G-protein coupled receptors 4-6 .
In spite of recent progress made toward understanding the mechanistic changes that drive crystal formation within the lipidic bilayer 15,16 , as well as toward enhancing the widespread use of in meso crystallization 7-9 , a major limiting factor hindering crystal formation has remained the relatively small size of the lipidic mesophase aqueous domains. Typically, lipidic cubic phases (LCPs) are characterized by two sets of interpenetrating and non-interconnected water channels with a diameter of 3-5 nm, separated by a three-dimensional lipid membrane percolating through space. This geometric constraint prevents the reconstitution of membrane proteins with large extracellular or intracellular domains and restricts the use of the lipidic host matrices to membrane proteins with small hydrophilic domains.
Attempts to overcome this major structural limitation have been made both in and beyond the context of membrane protein crystallization. Cherezov et al. 17 incorporated a series of additives, mainly small amphiphiles, leading to an increase of the mesophase lattice parameter of up to 40% before a transition to a highly disordered sponge phase was observed, and used this system as a host for membrane protein crystallization. Other formulations have been proposed, with varying degrees of success, to swell the host lipidic mesophases while preserving the bicontinuous cubic phase symmetry, including sugar esters 18 , surfactants (e.g., octyl glucoside) 19,20 , lipids (e.g., diglycerol monooleate (DGMO) 21 , cholesterol 22,23 ), and phospholipids (e.g., soybean phosphatidylcholine 20 , dioleoyl phosphatidylserine (DOPS) 23,24 , dioleoyl phosphatidylglycerol (DOPG) 24 , distearoyl phosphatidylglycerol (DSPG) 25,26 ).
Electrostatic swelling of the mesophase via the addition of charged lipids is a promising tool to generate thermodynamically stable swollen cubic phases. Engblom et al. 25 used an anionic phospholipid, DSPG, to swell the monoolein (MO)/water mesophase at room temperature up to a maximum lattice parameter of 268 Å and were able to obtain Im3m, Ia3d, and Pn3m cubic phases at different lipid-phospholipid-water ratios. They also observed that the electrostatic swelling from the anionic phospholipids, used in the ternary MO-phospholipid-water systems, allows substantially more water (~70% w/w) to be contained within the swollen cubic phase than in the binary MO-water system (~40% w/w) 25,26 . By mixing charged lipids and cholesterol as a membrane-stiffening agent, the largest thermodynamically stable and structurally ordered cubic phases were obtained by Barriga et al. 24 and Tyler et al. 23 , achieving swollen Im3m cubic phases with a maximum lattice parameter approximately four times larger than that of the classical MO/water system 24 . However, the largest swelling was observed at high temperature (i.e., 35-45°C 23 and 55°C 24 ), and the only symmetry observed was the primitive Im3m cubic phase, both factors being unsuitable for in meso membrane protein crystallization, which typically requires lower temperatures and Pn3m symmetry. By combining electrostatic swelling effects with epitaxial growth from capillary walls, Kim et al. 27 very recently reported super-swollen mesophases of Im3m, Ia3d, and Pn3m symmetries with lattice parameters up to 68.4 nm, and the unusual coexistence of double-gyroid symmetries with excess water 27 . Nonetheless, these conditions were reported far from equilibrium, and the meta-stability of the system reduces its applicability for membrane protein crystallization, which may require weeks for crystals to nucleate and grow; indeed, the authors did not consider it for this specific use, and to date, in meso crystallization of membrane proteins from thermodynamically stable ultra-swollen LCPs has yet to be achieved.
Here we present a system based on the anionic phospholipid DSPG combined with the monoacylglycerol monopalmitolein (MP), leading to thermodynamically stable ultra-swollen cubic phases of Ia3d, Pn3m, and Im3m symmetries. We study the phase diagram of these systems and show a re-entrant behavior of the Ia3d and Pn3m symmetries upon increasing hydration, which is responsible for a swelling of the lattice parameters and water channels of up to five-fold compared to typical lipidic mesophases. We exploit the thermodynamically stable nature of these systems to crystallize a representative membrane protein with a large ECD, the Gloeobacter violaceus ligand-gated ion channel (GLIC), otherwise inaccessible to classical in meso crystallization techniques (Fig. 1), and we show that the in meso crystallization of this protein leads to significantly improved packing of the proteins within the crystals and a space group different from all the deposited structures of the same protein obtained by vapor diffusion crystallization, opening a promising strategy for the crystallization of challenging membrane proteins.
Results
Fine-tuning the design of ultra-swollen LCP systems. Our starting point in the formulation of the LCP is the selection of a monoacylglycerol, MP, which differs from the traditional MO in its relatively shorter hydrophobic tail (C16 vs. C18 for MO), but is known for its higher maximum hydration point, as well as its capacity to form cubic phases with larger structural parameters than those found in the MO-water system 28 . DSPG was added to this system as an electrostatic swelling lipid 25 .

Before undertaking an in-depth analysis of the MP-DSPG-water system, we wanted to determine how the addition of DSPG to the MP-water system influences the maximum hydration point of the lipid mixture as well as the structural parameters of the formed mesophases. Synchrotron small-angle X-ray scattering (SAXS) analysis of the resulting mesophases revealed a substantial swelling compared to that of MO-water systems 25 , and allowed the assessment that varying amounts of DSPG lead to different bicontinuous cubic symmetries (Fig. 2 and Supplementary Figure 1). More precisely, addition of 3, 5, and 8 wt% of DSPG resulted in the formation of highly swollen primitive Im3m, double-diamond Pn3m, and double-gyroid Ia3d cubic phases (Fig. 2) at different levels of hydration. This opens up the possibility of designing suitable LCP symmetries for membrane protein crystallization via minor changes in lipid composition.
Most importantly, SAXS analysis at 80% hydration revealed the room-temperature (20°C) formation of bulk-phase, highly swollen bicontinuous cubic phases of both double-gyroid and double-diamond symmetries that far surpass all binary monoacylglycerol-water systems and most of the previously reported swollen mesophases in terms of structural parameters. Respectively, we observed an Ia3d cubic phase with a lattice parameter of 525 Å and a water channel diameter of 226 Å, and a Pn3m cubic phase with a lattice parameter of 301 Å and a water channel diameter of 204 Å (approximately five times larger than the classical MO-water cubic phase used for membrane protein crystallization). Their thermodynamically stable nature at 20°C makes them ideal hosting matrices for membrane proteins with large ECDs.
Further increasing the amount of DSPG (up to 10 wt%) resulted in a coexistence of phases (Ia3d and Lα) at maximum hydration. This could be tentatively explained by localized hydrated domains of self-assembled phospholipids (Lα) within the swollen cubic matrix, once the maximum doping capacity of the system is reached (Supplementary Figure 1).
Phase diagram and re-entrant behavior of MP-DSPG-water. Analysis of the phase diagram (Fig. 3a) revealed that the DSPG-MP-water system was able to retain 10% more water than the DSPG-MO-water system, as expected a priori from our choice of the specific monoacylglycerol used. The ability of the lipid system to retain larger amounts of water directly modulates the maximum attainable structural parameters and therefore plays a key role in the use of the system for membrane protein crystallization.
Along with the capacity to retain more water, the DSPG-MP-water system revealed another surprising feature in lipid mesophase order-to-order transitions. More specifically, for two distinct lipid compositions containing different amounts of added phospholipid (5 or 8 wt%), a re-entrant behavior of the respective swollen bicontinuous cubic phase at the higher hydration levels was consistently observed. For example, in the case of the 5 wt% DSPG/MP-water system, a double-diamond Pn3m cubic phase is initially observed at 55-60% w/w hydration. Upon increasing the water content in the system to 65-75% w/w H2O, the phase transitions to a primitive Im3m cubic phase. A further increase in water content, which would normally drive the system toward coexistence with excess water, instead induces a second order-to-order transition back to the highly swollen double-diamond Pn3m cubic phase at 80% w/w hydration (Fig. 3).
Similarly, the 8 wt% DSPG/MP-water system transitions from an initial double-gyroid Ia3d cubic phase at 60% w/w water content to a Pn3m cubic phase at 65-70% w/w hydration, before returning to the highly swollen Ia3d symmetry at higher hydration levels (75-80% w/w water). Although re-entrant behavior is typically observed when competing interactions are at play, the driving forces behind these unusual transitions remain unclear at the moment and mandate deeper future investigation. Figure 3b, c shows the evolution of the water channel size as a function of the total amount of water available in the system, at a fixed temperature of 20°C, for the different mesophases considered in the MP:DSPG:water systems containing 8 wt% and 5 wt% phospholipid, respectively (for the calculation of the channel radii, see the supporting information and supporting Eqs. 1 and 2a-c). The re-entrant phase behavior and the large increase in structural parameters can both be readily observed. More importantly, this allows setting the thresholds for the use of the lipid matrices for the encapsulation and crystallization of membrane proteins with large ECDs (e.g., GLIC). By comparing the size of the LCP water channel with the diameter of the extracellular domain of the model protein GLIC (~75 Å), we can determine the minimum level of hydration needed in the MP-DSPG system in order to obtain a cubic phase with the structural characteristics required to successfully reconstitute this large membrane protein into its lipidic bilayer.
A robust toolbox for large membrane protein crystallization. In order to assess whether the swollen mesophases are a suitable system for membrane protein crystallization, a few obstacles had to be cleared first. To start, when using electrostatically swollen mesophases, attention must be paid to the addition of crystallizing salts, such as sodium chloride at concentrations higher than 100-150 mM, which can abolish the electrostatic swelling effects. This limitation was overcome via a systematic prescreening of protein buffers and detergents, which led to the identification of protein buffers suitable for our purpose. More specifically, after expression and purification of all the proteins considered, the protein buffer was exchanged to a low-salt buffer allowing both stable protein solutions and in meso crystallization without disrupting the swollen mesophases.
Secondly, to demonstrate the feasibility of crystallization from the swollen mesophases, a control experiment was needed with a model membrane protein that is also crystallizable in meso via conventional lipidic mesophases. To this end, we selected the highly stable transmembrane domain of the Escherichia coli virulence factor intimin (PDB: 5G26) 29 , previously crystallized in meso from an MP-based cubic phase 15,16 , and purified it into a low-salt/low-detergent buffer (see Materials and Methods) preserving the swelling of the MP/DSPG/water systems. We then subjected intimin to crystallization trials using the 5 and 10 wt% DSPG systems, yielding at 80% hydration swollen Pn3m and mixed Ia3d/Lα phases, respectively, and immediately observed protein crystal growth from both systems, confirmed by means of UV microscopy (Supplementary Figure 2). Moreover, the crystals grown from the swollen systems were similar in shape and size to those previously obtained from MP-based cubic phases 15 , confirming the suitability of the swollen mesophases for membrane protein crystallization. This is in agreement with the observations of Sparr et al. 26 , who successfully crystallized bacteriorhodopsin from a swollen MO-based mesophase, observing improvements in both crystal quality and growth speed compared to crystals of the same protein obtained in standard MO-based mesophases.
Fig. 3 Phase diagram and water channel sizes of the DSPG-MP-water system. a Phase diagram of the swollen systems using 5 and 8 wt% DSPG/MP/water compared with the normal MP/water system; b, c water channel diameter as a function of the total amount of water available in the 8 wt% (b) and 5 wt% (c) DSPG/MP/water systems for the different cubic mesophases considered. Maximum attainable water channel structural parameters are shown in the top left section of the plot. The cubic mesophases Pn3m, Im3m, and Ia3d are represented in blue, green, and orange, respectively. The GLIC extracellular domain size is shown on the right side of the plot.

We then moved on to assess the full potential of the swollen mesophases in crystallizing membrane proteins with large ECDs by selecting a membrane protein that would be inaccessible to the "classical" in meso approach due to the prohibitive size of its hydrophilic domain (Fig. 3b). GLIC is a pentameric 174 kDa (1585 amino acid) membrane protein, characterized by a large extracellular domain that surpasses in size the water channel diameter of most LCPs. GLIC was thus purified in a low-salt/low-detergent buffer that would not disrupt the swollen mesophase during protein reconstitution. Crystallization trials were then set up using three distinct lipidic systems containing 5, 8, and 10 wt% DSPG, yielding highly swollen Pn3m, Ia3d, and Ia3d/Lα phases prior to the addition of the crystallization buffer. Protein crystal growth was then consistently observed in all the tested systems (Fig. 4a, d, g) and verified by means of cross-polarized microscopy (Fig. 4b, e, h), UV microscopy (Fig. 4c, f, i), and single-crystal diffraction (Fig. 4j, k), confirming the successful crystallization of GLIC using the in meso approach. Importantly, control experiments run with the same crystallization buffers from MP-based non-swollen mesophases produced no visible or diffracting crystals.
Interestingly, although the morphologies of the crystals were identical for the three tested systems (i.e., rod-like crystals with a maximum length of ~30 µm), the time required for crystal formation varied with the initial symmetry of the hosting mesophase. Respectively, crystallization from an initial double-gyroid symmetry (8 and 10 wt% DSPG) yielded crystals after ~7 days, whereas crystallization from an initial double-diamond symmetry required slightly longer (~10 days). This may be related to different protein diffusion rates in the lipid bilayers of the different cubic mesophases 16 .
Single-crystal diffraction experiments on the in meso grown protein crystals allowed us to resolve the structure of the pentameric membrane protein (Fig. 5, Table 1), validating the use of the swollen cubic phases for membrane protein crystallization and thus opening previously unexplored pathways in protein structural biology. The obtained resolution of 6 Å is modest, but typical for first hits of membrane protein crystallization trials 30 . Further optimization of the crystallization conditions would presumably enable higher resolution to be reached; however, this was not the purpose of the present study. More importantly, in-depth structural analysis revealed that the in meso grown GLIC crystals adopted a completely different and previously unreported space group for this protein, exhibiting a tighter packing arrangement with lower solvent content (56.3%) compared to the loosely packed crystals grown via vapor diffusion techniques (typically 76.7% solvent content) 30 . Moreover, the structure obtained from in meso grown crystals shows the GLIC molecules to be in a closed state (see Structure solution and refinement in the Methods section and Supplementary Figure 3), in spite of the presence of H3O+ ions at the crystallization pH of 4, which normally stabilize the open state of the channel, as found in the vast majority of reported GLIC structures 30 . We conclude that this stabilizing effect, which allows the "entrapment" of this observed protein conformation, might be due to the specific protein-protein contacts existing in the newly generated crystal packing, which further exemplifies the benefits associated with the swollen in meso crystallization method for membrane proteins with large ECDs.
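The packing comparison can be made quantitative through the Matthews coefficient. The following Python sketch is our illustration, not part of the original analysis; it assumes one GLIC pentamer per asymmetric unit and Z = 8 general positions for space group C222₁ (assumptions consistent with the 56.3% solvent content reported above), using the unit cell parameters given in the Methods.

```python
# Matthews-coefficient estimate of crystal solvent content.
# Assumptions (ours): one ~174 kDa GLIC pentamer per asymmetric unit,
# Z = 8 general positions for the orthorhombic space group C222_1.

def solvent_content(a, b, c, mw_da, z, n_per_asu=1):
    """Return (Matthews coefficient in A^3/Da, solvent fraction)."""
    v_cell = a * b * c                       # orthorhombic: alpha = beta = gamma = 90 deg
    v_m = v_cell / (mw_da * n_per_asu * z)   # Matthews coefficient V_M
    # 1.23 A^3/Da corresponds to a protein partial specific volume of ~0.74 cm^3/g
    return v_m, 1.0 - 1.23 / v_m

vm, solv = solvent_content(a=75.94, b=208.22, c=255.29, mw_da=174_000, z=8)
print(f"V_M = {vm:.2f} A^3/Da, solvent = {solv:.1%}")  # ~2.90 A^3/Da, ~57.6%
```

The small residual difference from the reported 56.3% would come from the exact molecular weight and partial specific volume used.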
Discussion
In conclusion, mixing charged phospholipids within a host MP lipid bilayer in the presence of water results in the formation of highly swollen, thermodynamically stable cubic phases with interesting structural features, the most notable being the re-entrant double-gyroid and double-diamond bicontinuous cubic phases upon increasing hydration levels. Although the physical mechanisms behind these changes are not yet fully understood, the observed phase behavior suggests competing effects at play in the establishment of the observed mesophases.
The ultra-swollen mesophases were successfully used for the crystallization of a membrane protein with a large ECD, GLIC, otherwise inaccessible to the conventional in meso crystallization method. Analysis of the ensuing crystals, grown using the swollen in meso approach, yielded the structure of this large membrane protein from single-crystal diffraction experiments. These results showcase the possibility of expanding the reach of the in meso crystallization method to previously unreachable proteins and of designing protein crystals with different space groups, packing efficiencies, and activation states inaccessible via other crystallization methods, providing powerful tools of significance to membrane protein structural biology.
Methods
Materials. MP (M-219) was purchased from Nu-Chek Prep (Minnesota, USA). 1,2-Distearoyl-sn-glycero-3-phospho-rac-glycerol, sodium salt (DSPG, 560400), was kindly provided by LIPOID PG (Steinhausen, Switzerland). LCP glass plates with a 100 µm double-sided tape spacer and 200 µm plastic seals were purchased from Molecular Dimensions. All the salts and detergents needed for the protein purification and preparation of the crystallization buffers were purchased from Sigma Aldrich, unless otherwise stated.
Mesophase sample preparation. A range of MP lipidic mixtures with 3, 5, 6, 7, 8, 9, 10, 11, and 12 wt% DSPG were initially prepared to determine the specific phospholipid/MP ratios needed to swell the Pn3m (i.e., 5 wt% DSPG at 80% hydration) and Ia3d (i.e., 8 wt% DSPG at 80% hydration) lyotropic cubic mesophases (Supplementary Figure 1). Lipidic mixtures were prepared by co-dissolving the appropriate weighed amounts of dry lipids, MP and DSPG, in chloroform. The solvent was completely removed by rotary evaporation. Mesophase samples were then prepared by mixing weighed quantities of the DSPG/MP lipid and Milli-Q water (i.e., 40-80% hydration) inside sealed Pyrex tubes by vortexing at room temperature until a homogeneous mixture was obtained. The prepared mesophase was then allowed to equilibrate at room temperature for 72 h.
Small-angle X-ray scattering. Data were collected at the SAXS/WAXS beamline at the Australian Synchrotron, at a constant temperature of 20°C. The experiments used a micro-sized beam of dimensions 100 μm × 100 μm and a wavelength λ = 1.0322 Å (12.0 keV) for the MP-based samples, with a typical flux of 1.2 × 10^13 photons per second and a 1 s exposure time. Previous work suggests that radiation dosages in the range used in this study are unlikely to affect the mesophase significantly 31 . 2D diffraction images were recorded on a Pilatus 1M detector, which offers very low noise, a large dynamic range, and rapid data collection over a large active area. Dead space due to intermodule gaps was overcome by radial integration with the detector slightly offset to ensure complete data coverage. The obtained diffraction images were integrated into 1D diffraction spectra using the ScatterBrain IDL software developed in house by the research team of the Australian Synchrotron. The obtained 1D spectra were then analysed using Origin, for both peak assignment and calculation of the phase structural parameters. SAXS measurements were also performed on a Bruker AXS Micro with a microfocused X-ray source, operating at a voltage and filament current of 50 kV and 1000 μA, respectively. The Cu Kα radiation (λ = 1.5418 Å) was collimated by a 2D Kratky collimator, and the data were collected by a 2D Pilatus 100K detector. The scattering vector Q = (4π/λ)sin θ, with 2θ being the scattering angle, was calibrated using silver behenate. Data were collected and azimuthally averaged using the Saxsgui software to yield 1D intensity vs. scattering vector Q, with a Q range from 0.004 to 0.5 Å^-1. For all measurements, the samples were placed inside a stainless steel cell between two thin replaceable mica sheets and sealed by an O-ring, with a sample volume of 10 μL and a thickness of ~1 mm. Measurements were performed at 20°C; samples were equilibrated for 15 min before measurements, and scattered intensity was collected over 20 min.
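As a concrete illustration of the peak-assignment step described above: for a bicontinuous cubic phase, Bragg peaks fall at Q = (2π/a)√(h² + k² + l²), with a characteristic sequence of allowed √N values for each space group. The sketch below is ours (the peak positions are hypothetical, chosen to be consistent with a ~301 Å Pn3m phase) and extracts the lattice parameter by a least-squares fit through the origin:

```python
import numpy as np

# Allowed reflections (N = h^2 + k^2 + l^2) for the bicontinuous cubic phases.
SQRT_N = {
    "Ia3d": [6, 8, 14, 16, 20, 22, 24],
    "Pn3m": [2, 3, 4, 6, 8, 9, 10],
    "Im3m": [2, 4, 6, 8, 10, 12, 14],
}

def fit_lattice(q_peaks, space_group):
    """Least-squares lattice parameter a (in A) from q = (2*pi/a)*sqrt(N)."""
    root_n = np.sqrt(SQRT_N[space_group][: len(q_peaks)])
    q = np.asarray(q_peaks)
    slope = np.dot(root_n, q) / np.dot(root_n, root_n)  # fit q = slope * sqrt(N)
    return 2 * np.pi / slope

# Hypothetical peak positions (A^-1) consistent with a ~301 A Pn3m phase:
q_obs = [0.0295, 0.0362, 0.0418]
print(f"a = {fit_lattice(q_obs, 'Pn3m'):.0f} A")  # -> a = 301 A
```

Running the same fit with the Ia3d or Im3m sequences distinguishes the symmetries by which set of √N ratios best matches the observed peak spacings.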
In addition, SAXS measurements were also performed at the X06DA PXIII beamline at the Swiss Light Source, Paul Scherrer Institute (Villigen, Switzerland), equipped with a Pilatus 2M detector (Dectris, Baden-Dättwil, Switzerland). The photon energy was set to 5.975 keV, the in-air sample-to-detector distance was 800 mm, and the sample-to-beamstop distance was 65 mm. The beam size was 50 × 90 μm^2, with a flux of 3.5 × 10^11 photons per second and an exposure time of 10 s. Samples were loaded in quartz capillaries and measured at room temperature.
Expression and purification of intimin. Detailed information on the construction of the plasmids containing the intimin E. coli O157:H7 gene has been previously described 29 . The construct was kindly provided by Dr. Susan K. Buchanan from the National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD 20892, USA and used as received without any further modifications.
Briefly, the vector containing the E. coli O157:H7 gene was transformed into BL21 (DE3) cells (Novagen-Merck Millipore, Darmstadt, Germany), which were grown in TB media (50 µg mL^-1 kanamycin) at 20°C for 2-3 days while shaking at 220 rpm, until they reached a terminal OD of 15-20. The cells were lysed using a probe sonicator (Misonix S4000) in 30 s bursts at 60% amplitude. Membranes containing the desired protein were harvested by ultra-centrifugation (160,000×g, 60 min, 4°C). Membrane proteins were solubilized by resuspension in solubilization buffer (50 mM Tris pH 8.0, 200 mM NaCl, 20 mM imidazole, 5% Elugent (Calbiochem)) and left stirring overnight at 4°C. The next morning, the sample underwent ultracentrifugation (250,000×g, 60 min, 4°C) to remove insoluble material. The protein was purified using a combination of affinity and ion-exchange chromatography. Fractions containing protein were then buffer-exchanged into low-salt buffer (20 mM Tris pH 8.0, 25 mM NaCl, 2% OG) and concentrated in a YM30 Amicon Ultra concentrator (Millipore) in preparation for crystallization experiments.
Expression and purification of GLIC. GLIC was expressed from a pET20 plasmid, kindly provided by Marc Delarue (Pasteur Institute, CNRS URA 2185, Paris, France), as a fusion protein with maltose-binding protein (MBP), essentially as described previously 30 . The expression coding sequence contained an N-terminal signal peptide, followed by MBP and a thrombin cleavage site, preceding GLIC. Briefly, BL21(DE3) cells (Novagen-Merck Millipore, Darmstadt, Germany) harboring the pET20-MBP-thrombin-GLIC plasmid were grown at 37°C in terrific broth containing 100 µg mL^-1 ampicillin, to an optical density at 600 nm of 1.6. The culture was induced with 0.1 mM IPTG, and growth continued for a further 16 h at 20°C. Cells were harvested by centrifugation and lysed by three passes through an Emulsiflex C5 homogenizer (Avestin, Ottawa, Canada) at 15,000 psi in buffer A (20 mM Tris pH 7.6, 300 mM NaCl, with complete protease inhibitor cocktail (Roche)). Lysate was clarified by centrifugation (7000×g, 20 min, 4°C) and membranes were pelleted from the supernatant by ultra-centrifugation (100,000×g, 60 min, 4°C). Membrane proteins were solubilized from the membrane pellet in buffer A with 2% n-dodecyl-β-D-maltopyranoside (DDM, Anatrace, Maumee, OH, USA) at 4°C overnight, then clarified by ultra-centrifugation (100,000×g, 60 min, 4°C). GLIC was purified from the supernatant by amylose affinity chromatography, with elution in 20 mM maltose, and pooled fractions were further purified by size-exclusion chromatography (Superdex-200 10/300 GL, GE Life Sciences) in buffer A with 0.02% DDM. Fractions corresponding to the MBP-GLIC pentamer were concentrated and digested with thrombin (Merck, KGaA, Darmstadt, Germany) at room temperature overnight. MBP and thrombin were removed by a further round of size-exclusion chromatography, and the GLIC pentamer was concentrated to 10.0 mg mL^-1 and exchanged into 50 mM NaCl, 20 mM Tris pH 7.6, using an Amicon Ultra-2 mL 30 K concentrator (Millipore), in preparation for crystallization.
Crystallization. The protein solution was mixed in a 50:50 (volume:volume) ratio with molten MP, or in an 80:20 (volume:volume) ratio with either of the MP:DSPG mixtures, before being dispensed in 200 nL aliquots onto the surface of an LCP glass plate with a double-sided tape spacer. One microliter of crystallant was dispensed on top, and the experiment was sealed with a 200 µm plastic seal. All dispensing was performed with a Mosquito LCP machine equipped with a humidity chamber (TTP Labtech, UK). The completed plates were incubated at 20°C and imaged in a Minstrel HT/UV imaging system (Rigaku).
GLIC crystals were grown in meso from well F11 (0.2 M (NH4)2SO4, 0.02 M NaCl, 0.02 M sodium acetate pH 4, 33% v/v PEG200) of the MemGold screen (Molecular Dimensions, Newmarket, UK) in all the DSPG/MP lipidic mixtures (5, 8, and 10 wt%) prepared. PEG200 was selected since low molecular weight PEG, such as PEG200, is known to preserve the symmetry of cubic phases, including those based on MP 32 . All the wells containing GLIC protein crystals were fished using mesh cryoloops and stored in liquid nitrogen until beamtime, without any cryo-protectants. The crystal diffraction data obtained in this study come from the only microcrystals that were successfully fished from a well prepared using 10% DSPG/MP (Fig. 4).
Diffraction data collection and processing. The data were collected at the X06SA-PXI beamline at the Swiss Light Source (Villigen, Switzerland), using a 10 × 10 μm^2 beam with a photon energy of 12.39 keV, at the full flux of 3 × 10^11 photons per second. Images were recorded with an EIGER 16M detector (Dectris, Switzerland) placed at 400 mm distance, with an exposure time of 0.1 s for a rotation of 0.1° per frame. Crystals of 10-30 μm in size diffracting to low resolution (about 6 Å) were found on the mesh mounts using a systematic rastering procedure 32 , followed by in-house developed automatic data collection for microcrystals (CY+ protocol) over a total range of 15° on each crystal. The data completeness was maximized by collecting new ranges of 15° from a different orientation on the best diffracting crystals, until radiation damage caused a significant decrease of the diffraction signal. The data were processed with XDS 33 , scaled and merged with XSCALE 34 , using an in-house script provided by Shibom Basu. A complete dataset was obtained by merging the best ten wedges of 15°; however, the nominal resolution was only 6.0 Å (using a resolution cutoff of I/σ(I) = 1). The space group, confirmed by POINTLESS 35 , was C222₁, with unit cell parameters a = 75.94 Å, b = 208.22 Å, c = 255.29 Å, α = β = γ = 90°. Complete data collection statistics are presented in Table 1 and Supplementary Table 1.
Structure solution and refinement. The structure was solved by molecular replacement (MR) and refined with the Phenix suite 36 . The PDB entries 4HFI (open channel state) and 4NPQ (resting/locally-closed state) were used as search models for MR and as reference models for restraints in the low-resolution structure refinement. The other options used in phenix.refine for refinement of the low-resolution structure were rigid-body refinement in the first round, NCS constraints, group B-factor refinement, and secondary structure restraints. As the resolution achieved was low, mainly secondary structure elements were identifiable in the electron density map, such as α-helices in the transmembrane domain, and β-sheets and loops in the extracellular domain (Fig. 5). When initially using the open channel state as the MR and restraints model, characteristic residual difference densities in the Fo−Fc Fourier difference map were observed in the upper half of the inner ring of transmembrane α-helices, near the extracellular domain (Supplementary Figure 3), which clearly pointed to a predominantly closed form of the channel 30 . The difference densities were not present when the locally-closed state was used as the MR and restraints model, and the refinement Rwork and Rfree values decreased significantly compared to the open state case, reflecting the improved agreement of the model and data. Final values of Rwork/Rfree were 0.28/0.32. The Ramachandran statistics were 94% favored, 5.3% allowed, and 0.32% outliers, and there were 1.2% rotamer outliers. Complete refinement statistics are presented in Table 1 and Supplementary Table 1. The solvent content was analyzed using the program RWCONTENTS in the CCP4 suite 35 .

Evaluation of structural parameters. To determine the evolution of the structural parameters with hydration level, i.e., the size of the water channels, SAXS information on the lattice was combined with the composition of the samples. To calculate the diameter of the water channels for the three bicontinuous cubic phases (Ia3d, Pn3m, and Im3m), triply periodic minimal surface arguments were used and the following equation from Turner et al. 37 was applied:

φ = 2A0(l/a) + (4π/3)χ(l/a)^3 (1)

where a is the lattice parameter as measured by SAXS, φ is the lipid volume fraction, which can be obtained knowing the water content and the density of MP (ρ = 0.982 g cm^-3), l is the length of the lipid chains, and A0 and χ are, respectively, the ratio of the area of the minimal surface in a unit cell to (unit cell volume)^(2/3) and the Euler-Poincaré characteristic, which take the following values depending on the specific cubic phase: A0 = 3.091 and χ = −8 for Ia3d; A0 = 1.919 and χ = −2 for Pn3m; A0 = 2.345 and χ = −4 for Im3m. Following Briggs et al. 38 , we derive the radius of the water channels by:

(Ia3d) r = 0.248a − l (2a)
(Pn3m) r = 0.391a − l (2b)
(Im3m) r = 0.305a − l (2c)

Data availability. Data supporting the findings of this manuscript are available from the corresponding author upon reasonable request. The GLIC structure is deposited in the PDB under the accession code 6F7A.
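As a numerical illustration of Eqs. (1) and (2a-c), the short Python sketch below (ours, not part of the original analysis) solves the Turner cubic for the monolayer length l and returns the water channel diameter; the Pn3m prefactor 0.391 is given in the text above, while 0.248 (Ia3d) and 0.305 (Im3m) are the corresponding Briggs values.

```python
import numpy as np

# A0 and Euler-Poincare characteristic chi from the text; c_w are the
# Briggs water-channel prefactors of Eqs. (2a-c).
PHASES = {
    "Ia3d": dict(A0=3.091, chi=-8, c_w=0.248),
    "Pn3m": dict(A0=1.919, chi=-2, c_w=0.391),
    "Im3m": dict(A0=2.345, chi=-4, c_w=0.305),
}

def water_channel_diameter(a, phi_lipid, phase):
    """Solve Eq. (1), phi = 2*A0*(l/a) + (4*pi/3)*chi*(l/a)**3, for the
    monolayer length l, then return the channel diameter 2*(c_w*a - l)."""
    p = PHASES[phase]
    # cubic in x = l/a:  (4*pi/3)*chi*x**3 + 2*A0*x - phi = 0
    roots = np.roots([4 * np.pi / 3 * p["chi"], 0.0, 2 * p["A0"], -phi_lipid])
    x = min(r.real for r in roots if abs(r.imag) < 1e-9 and 0 < r.real < 0.5)
    return 2 * (p["c_w"] * a - x * a)

# The swollen Pn3m phase at ~80% w/w hydration: a = 301 A, phi_lipid ~ 0.20
print(f"D_w ~ {water_channel_diameter(301, 0.20, 'Pn3m'):.0f} A")  # ~204 A
```

Run on the 301 Å Pn3m phase at 80% hydration, this reproduces the ~204 Å water channel diameter reported in the Results.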
"year": 2018,
"sha1": "ea8d5bb8bd017bb3a86d568c6f567306ee4a6079",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-018-02996-5.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4eb4679e482ce08831e8cf8dafa7cda9ba96da3e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Neonatal Piglets Are Protected from Clostridioides difficile Infection by Age-Dependent Increase in Intestinal Microbial Diversity
ABSTRACT While Clostridioides difficile is recognized as an important human pathogen, it is also a significant cause of gastroenteritis and associated diarrhea in neonatal pigs. Since clinical disease is rarely diagnosed in piglets older than 1 week of age, it is hypothesized that natural resistance is associated with the increased complexity of the intestinal microbiota as the animals age. To test this, piglets were challenged with C. difficile (ribotype 078/toxinotype V) at times ranging from 2 to 14 days of age, and the severity of disease and microbial diversity of the cecal microbiota were assessed. Half of the piglets that were challenged with C. difficile at 2 and 4 days of age developed clinical signs of disease. The incidence of disease decreased rapidly as the piglets aged, to a point where none of the animals challenged after 10 days of age showed clinical signs. The cecal microbial community compositions of the piglets also clustered by age, with those of animals 2 to 4 days old showing closer relationships to one another than to those of older piglets (8 to 14 days). This clustering occurred across litters from 4 different sows, providing further evidence that the resistance to C. difficile disease in piglets greater than 1 week old is directly related to the diversity and complexity of the intestinal microbiota. IMPORTANCE C. difficile is an important bacterial pathogen that is the most common cause of infections associated with health care in the United States. It also causes significant morbidity and mortality in neonatal pigs, and currently there are no preventative treatments available to livestock producers. This study determined the age-related susceptibility of piglets to C. difficile over the first 2 weeks of life, along with documenting the natural age-related changes that occurred in the intestinal microbiota over the same time period in a controlled environment. We observed that the populations of intestinal bacteria within individual animals of the same age, regardless of litter, showed the highest degree of similarity. Identifying bacterial species associated with the acquisition of natural resistance observed in older pigs could lead to the development of new strategies to prevent and/or treat disease caused by C. difficile infection.
the neonatal period, generally 2 to 5 days after birth (6-9). Since C. difficile can be cultured from both healthy animals and those with diarrhea, diagnosis of CDI requires demonstration of toxins TcdA and/or TcdB, as well as observation of macroscopic and histopathologic intestinal lesions in the spiral colon (3, 10). The majority of neonate piglets are culture positive for C. difficile, but the intestinal population of the pathogen appears to decline over the first 2 months of life (8, 11). Mechanistically, it is unknown why C. difficile-associated disease is confined to neonate piglets. One hypothesis is that host resistance to CDI results directly from an increasing diversity of the GI microbiota, which may provide protection by competitive exclusion through a reduction in available niches, nutrient availability, or production of antimicrobial metabolites (12).
The mammalian GI tract is colonized by a complex community of bacterial taxa that imparts several benefits to the host, including nutrient acquisition, colonization resistance, and immunomodulation (13). The microbial community also changes in response to diet, age, and host health status. Age-related changes in the microbiota occur over the lifetime of the host and are more prominent in younger animals soon after birth and again at weaning (14, 15). A succession of increasing levels of microbial diversity of the intestinal microbiota begins as neonatal animals are exposed to the maternal microbiota and to microbes present in their environment (12, 16). Intestinal microbial diversity and complexity continue to change from the time of weaning as the diets of young animals shift to solid feed that includes complex carbohydrates (14).
Previous studies to profile the microbial diversity in pigs have included characterization of piglets at weaning (3 weeks of age) and older (12, 17-19), but fewer studies have investigated C. difficile disease and the microbiota longitudinally in young, nursing piglets (15, 20, 21). Since colonization of pigs by C. difficile declines within a relatively short window of time as the animals age (8), it is important to identify host factors that contribute to colonization resistance against the pathogen. To better understand how the host gut microbiota is associated with C. difficile colonization and disease, we used 16S rRNA gene amplicon sequencing to track the changes in the taxonomic composition of the GI microbiota in neonatal piglets during the first 14 days of life in a controlled environment. We further associated the changes in the microbiota composition with the emergence of natural resistance to CDI as the pigs aged. Table 1 summarizes the design of the two separate experiments conducted in this study, as described in Materials and Methods. In each experiment, neonatal piglets were challenged with C. difficile (ribotype 078) spores at 2, 4, 6, 8, 10, 12, or 14 days of age. At 2 days postinfection, challenged piglets and one nonchallenged control piglet of the same age were euthanized. Cecal contents were collected for 16S rRNA gene amplicon sequencing, and contents from the spiral colon were collected for TcdA/TcdB toxin detection. Additionally, tissue from the cecum and spiral colon was taken for histological examination.
RESULTS
We used the presence of edema and microscopic lesions of mesocolonic tissue as indicators of disease, along with the presence of toxin, since lesions do not occur in the absence of toxin during the course of the disease. The results of this analysis are shown in Table 2. At necropsy, signs of disease and/or toxin were observed exclusively in piglets challenged at the earliest ages, which was consistent with previous studies (22). Lesions can be segmental, and since we did not dissect multiple sections of tissue, it is likely that additional lesions were missed by chance alone. Regardless, all of the piglets ≤6 days of age that were toxin positive were symptomatic. Specifically, half of the piglets (6/12) that received the spore challenge at either 2 or 4 days of age displayed evidence of disease (Table 2), including classic histopathologic lesions and the presence of toxins A and B within cecal contents (10, 23). In contrast, only a minority of piglets challenged at 6 or 8 days of age showed evidence of disease (3/16), and none of the piglets challenged at ≥10 days of age displayed clinical signs associated with CDI (0/14). None of the unchallenged control piglets was symptomatic, and all had unremarkable histologic examinations. Furthermore, no toxins were detected in their cecal contents (Table 2). Culturing directly from cecal contents yielded toxigenic C. difficile from four challenged piglets, all inoculated at 6 days or younger. In contrast, C. difficile was not recovered from the older piglets or control piglets. Because CDI rarely occurs in piglets older than 1 week, we grouped piglets into younger (days 2 to 8) and older (days 10 to 14) ages for statistical analysis of the presence of CDI. Fisher's exact test yielded a P value of 0.0186.

[Table 1. Design of the two experiments: experiment 1 (sows 1 and 2; 24 piglets, 19 challenged; control piglets nursing) and experiment 2 (sows 3 and 4; 28 piglets; control piglets fed milk replacer); challenge doses ranged from 10^3 to 10^6 spores.]
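The grouping test above is easy to check. The following is a minimal sketch using SciPy, with the contingency counts read off the fractions reported in this section; it is an illustration, not the authors' analysis code.

```python
# Sketch: young-vs-old Fisher's exact test on CDI incidence. Counts follow
# the reported fractions: 9/28 diseased at 2-8 days, 0/14 at 10-14 days.
from scipy.stats import fisher_exact

table = [[9, 28 - 9],   # young: diseased, not diseased
         [0, 14 - 0]]   # old:   diseased, not diseased
_, p = fisher_exact(table, alternative="two-sided")
print(f"two-sided P = {p:.4f}")  # expected to land near the reported 0.0186
```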
We next assessed the abundances of individual bacterial taxa from the cecal contents by 16S rRNA gene amplicon sequencing. For this, we analyzed microbial abundance data from piglets grouped by the age at which they were inoculated to identify any changes in the composition of the gut microbiota over time. While we initially compared the results from experiment 1 to those of experiment 2 (Table 1), we found no significant differences in abundance; therefore, the data were combined to increase statistical rigor for further analysis. Although diet is known to impact the gut microbiota, we failed to observe significant differences between the samples recovered from the control animals, regardless of diet (Table 1); therefore, the data from these animals were also combined. Since all of the piglets received colostrum before being challenged or removed from the sow, there was apparently insufficient time for changes between the two groups of control animals to be detected using 16S rRNA gene amplicon sequencing (Table 1).
Collectively, we found no significant differences in bacterial diversity or taxon abundance between piglets that were challenged with C. difficile spores and the negative controls (data not shown). While the numbers of sequences that matched the V3-V4 variable region of the C. difficile 16S rRNA gene were highly variable (Fig. 1), the results revealed the presence of the pathogen in approximately half of the piglets and primarily in younger animals. C. difficile sequence signatures were observed in 4/5 piglets challenged at 2 days of age, 6/6 challenged at 4 days of age, 6/8 challenged at 6 days, and 5/24 challenged at a later age. C. difficile sequence signatures were also observed in 3/8 unchallenged control animals. Since the control piglets showed no signs of disease, it is likely that the animals acquired C. difficile from the sow during parturition, since approximately 40% of sows shed C. difficile (8,22,24,25). Figure 2 summarizes the taxonomic abundances for piglets at each age postinoculation at the phylum (Fig. 2a) and genus ( Fig. 2b) levels, as well as for the feed provided to the sows (Fig. 2c). As shown, the microbiota of the piglets was dominated by Bacteroidetes (43.5 to 56.2%) and Firmicutes (16.3 to 37.85%), which is consistent with previous reports (26,27). Also, Bacteroidetes decreased while Firmicutes increased as the pigs aged. While these shifts were not considered significant for the short duration of this experiment, they suggest a trend in which the microbiota compositions of the piglets would have become more similar to those of the sows (22.1% for Bacteroidetes and 54.7% for Firmicutes) had the animals been allowed to mature (Fig. 2a). Within the phylum Bacteroidetes, Bacteroidia (43.5 to 55.9%) was the major class observed in the piglets, but at a reduced level by 14 days of age. Members of the Bacilli (4.3 to 12.6%), Erysipelotrichia (2.0 to 3.1%), and Clostridia (9.6 to 28.6%) were the dominant classes in the phylum Firmicutes. Bacteroidia and Bacilli also decreased as the piglets aged, along with an increase in the Clostridia (data not shown). The most notable differences occurred between the piglets and sows or between very young piglets and older piglets. This trend was further observed at the genus level (Fig. 2b), as Bacteroides decreased and genera in the phylum Firmicutes increased in the older piglets to a point where the 14-day-old piglets appeared more similar to the sows than did the 2-day-old piglets.
Alpha and beta diversity. Alpha diversity (Fig. 3) was calculated using QIIME and compared using the two-sample t test, with P values calculated using Monte Carlo permutations and Bonferroni correction for multiple comparisons (a sketch of this procedure is given after the figure legend below). Faith's phylogenetic diversity (Fig. 3a) revealed increasing microbial diversity as the piglets aged, with significant differences between days 2 and 6, 2 and 8, and 4 and 10 (P = 0.036 for each). Although the microbial diversity in piglets at 2 and 4 days of age appeared to be distinct from that of the sows, the differences were not considered significant (P = 0.072 for both). However, piglets at ages 6, 8, 10, and 12 days were significantly different from the sows (P = 0.036 for each). The observed-OTU metric (Fig. 3b) followed a similar pattern, with increasing numbers of observed OTUs in the older piglets compared to the younger piglets, with day 2 being significantly different from days 6 and 10 (P = 0.036 for both) and piglets at ages 2, 4, 6, 8, and 10 days being significantly different from the sows (P = 0.036 for all). Shannon diversity (Fig. 3c) also showed an increase as the piglets aged, with piglets at ages 2 and 4 days being significantly different from the sows (P = 0.036 for both).

FIG 1 Read counts that match the C. difficile 16S rRNA gene sequence. Sequences matching C. difficile were found primarily in young piglets. While a few control animals also had C. difficile sequences, these animals showed no signs of disease and likely acquired C. difficile from the sows.
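A minimal sketch of the permutation procedure referenced above follows. The diversity vectors and the number of comparisons are placeholders, not the study's data.

```python
# Sketch: two-sample t test with a Monte Carlo permutation P value and
# Bonferroni correction, as used for the alpha-diversity comparisons.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
day2 = np.array([3.1, 3.4, 2.9, 3.3, 3.0])  # e.g. Faith's PD, hypothetical
day8 = np.array([5.2, 4.8, 5.5, 5.0, 4.9])

t_obs = ttest_ind(day2, day8).statistic
pooled = np.concatenate([day2, day8])
n_perm, hits = 999, 0
for _ in range(n_perm):
    rng.shuffle(pooled)                     # random relabeling of samples
    t = ttest_ind(pooled[:len(day2)], pooled[len(day2):]).statistic
    hits += abs(t) >= abs(t_obs)
p_mc = (hits + 1) / (n_perm + 1)            # permutation P value
n_comparisons = 21                          # e.g. all pairs of 7 age groups
p_bonf = min(1.0, p_mc * n_comparisons)     # Bonferroni-corrected
print(p_mc, p_bonf)
```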
Unweighted UniFrac analysis indicated that the microbial community from the cecum of individual piglets clustered by age and shifted toward that of the sows (Fig. 4a), with an analysis of similarity (ANOSIM) test statistic of 0.65 and P = 0.001. The microbial communities of piglets challenged at 2 and 4 days of age were more similar to one another than to those of older piglets. Piglets that were 6 days of age at challenge clustered with 8-day-old piglets. Older piglets challenged at 10, 12, and 14 days of age clustered together. This clustering occurred across 4 different litters of piglets.
UniFrac analysis (Fig. 4) was conducted to reveal patterns of similarity in microbial composition with the age of the piglets. Both unweighted (Fig. 4a) and weighted (Fig. 4b) UniFrac analyses showed similar clustering, with the latter showing a less distinct clustering than unweighted (ANOSIM statistic of 0.41 and P = 0.001). Both metrics showed that piglets challenged at 2 and 4 days of age clustered more closely together than they did to the older piglets. The older piglets challenged at 6, 8, 10, 12, and 14 days of age also clustered more closely together. The more distinct clustering of same-aged piglets in the unweighted UniFrac compared to the weighted UniFrac suggests greater variability in the abundance of the microbiota between piglets of the same age. It is interesting to note that the one 8-day-old piglet that showed disease clustered with the 2-day-old piglets.
Pairwise ANOSIM statistical tests were conducted between age groups to determine which ages were different from each other (Table S1 in the supplemental material). Piglets of similar ages were more alike than more distant younger or older pigs. For example, comparing piglets at 2 and 4 days of age, the unweighted ANOSIM test statistic was 0.18 with a P value of 0.032, and the weighted test statistic was 0.29 with a P value of 0.028. Comparing piglets aged 2 versus 14 days, however, yielded an unweighted ANOSIM test statistic of 0.92 with a P value of 0.004, with a weighted test statistic of 0.97 and a P value of 0.008. All of the age groups were significantly different from the sows.

FIG 2 Taxonomic composition of the cecal microbiota of piglets from 2 to 14 days old. The taxonomic compositions of the microbiota from the feed and sows are also shown for comparison. OTUs making up less than 1% in each sample were binned as "Other." Taxa are shown by the key on the right for phylum (a), for genus (b), and for feed at the genus level (c). At the phylum level (a), Bacteroidetes was the most abundant phylum in the piglets, followed by Firmicutes. Firmicutes was the most abundant phylum in the sows, followed by Bacteroidetes. At the genus level (b), the numbers of taxa appeared to increase as the piglets aged, to more closely resemble the sows. C. difficile was not present in the feed sample (c), indicating that the only exposure the piglets received was through experimental infection.

Predicted microbiome functional analysis with PICRUSt. The predicted functional capacity of the microbiota of the piglets was determined using PICRUSt, and functional differences among age groups were tested using statistical analysis of metagenomic profiles (STAMP) (28, 29). This analysis yielded several categories of microbiome/host functions that were predicted to differ significantly between the experimental animal groups based on the composition of the microbiota (Table S2). Specifically, broad categories of carbohydrate, nitrogen, energy, and amino acid metabolism, secondary metabolite biosynthesis, and cell signaling pathways were found to differ between animals grouped as young or old. Differences in more specialized functional categories were also noted, including functions associated with bacterial toxins and glycosaminoglycan degradation, which showed elevated activity in the younger piglets (P = 0.017, and P = 0.01 and 0.012, respectively). Likewise, signatures of sphingolipid metabolism (P = 0.008) and steroid hormone biosynthesis (P = 0.007) were also higher in younger piglets than in older piglets and decreased with increasing age.
DISCUSSION
To better replicate current animal production standards, we designed the experiments to maintain piglets with the sows for at least 48 h until beginning C. difficile challenges. At that time, some of the controls were fed milk replacer, while others were left to nurse from the sow until necropsy (Table 1). However, no significant differences in taxonomic abundance were observed over the short time course of this study. For the symptomatic piglets, we defined CDI as the presence of lesions or the presence of toxin. While it is well documented in human neonates that toxin can be present in asymptomatic individuals (3,30), we are not aware of any studies that demonstrate the presence of toxin in asymptomatic piglets. Given this, we include the presence of toxin in symptomatic neonatal piglets as an indicator of CDI, since other major causes of diarrhea had been eliminated through vaccination of the sows.
Under these experimental conditions, half of the piglets (6/12) challenged at 2 or 4 days of age showed evidence of disease. The disease waned, however, by 6 and 8 days of age, as only 3/16 animals showed clinical signs. Evidence of disease became nonexistent by 10 days of age. While CDI has been observed in 10-day-old piglets, it requires specific experimental conditions where the piglets are removed from the sow within a few hours of birth and fed solely milk replacer, with minimal colostrum (20,22). These results suggest a role of the developing gut microbiota in establishing colonization resistance to C. difficile in pigs and are consistent with other published studies showing that clinical disease due to CDI is highly age dependent and limited to neonate piglets (7,8,11).
Characterization of the gut microbiota over time in the growing piglets revealed that Bacteroides, Fusobacterium, Enterobacteriaceae, and Sutterella were the dominant microorganisms in younger animals and their abundances decreased with age. Around 1 week of age, Prevotella increased to become the dominant organism in the older piglets. This is in agreement with other studies where Bacteroides was more abundant in very young piglets and Prevotella was the dominant organism in the cecum and distal GI in piglets 7 days and older (19,31). Specifically, while 16S rRNA gene sequence signatures matching Prevotella made up less than five percent of the relative abundances in animals at day 6, they increased to 20% in the older pigs. Prevotella has also been shown to associate negatively with the abundance of C. difficile in young piglets (21) and could represent a taxon that contributes directly to the resistance of older piglets to CDI.
In contrast, Fusobacterium is a microorganism associated with disease and inflammation (32). Piglets with neonatal porcine diarrhea had twice as much Fusobacterium as their healthy counterparts (33). Yang et al. also found Sutterella to be more abundant in neonatal piglets with diarrhea (34). Here, we saw that both Fusobacterium and Sutterella were more abundant in younger piglets and decreased with age, in parallel with decreasing susceptibility to CDI.
We also note parallels between the prevalence of CDI in humans and the pigs studied here. For example, it has been shown that several bacteria in the orders Clostridiales and Erysipelotrichales in non-CDI patients appeared to confer resistance to C. difficile compared to infected patients (35). In the study reported here, several genera from the order Clostridiales increased in abundance around 6 to 8 days of age, including unknown genera in the family Ruminococcaceae, as well as the genera Ruminococcus and Oscillospira. While the abundances of these taxa at 6 or 8 days of age were not significantly different from earlier time points, they suggest a trend toward increasing levels of the bacteria as the animals age. The absence or decreased abundance of Prevotella was also associated with CDI in humans (36).
While the roles of specific host and environmental factors affecting colonization resistance are not completely understood, the host microbiota clearly plays a significant role (37). In general, increased microbial diversity correlates with pathogen colonization resistance (37), including against C. difficile in pigs (21). This trend is also observed in humans and in animal models, as antibiotic treatment is typically required to establish C. difficile colonization in older and experimental animals with a more diverse microbiota (38,39) and CDI is associated with antibiotic therapies in humans (40). Antibiotics can deplete specific taxa of the microbiota and decrease the overall diversity of the microbial community, and their use is associated with histological changes in the GI tract (41,42).
The role of the microbiota in colonization resistance is multifactorial. One possible explanation is that a decrease in microbial abundance and complexity increases the availability of nutritional or spatial niches. Correspondingly, a decrease in complexity can reduce the levels of antimicrobials produced by members of the microbiota that may otherwise inhibit the germination or growth of C. difficile (43). CDI in young piglets could also be influenced by anatomical and epithelial host factors associated with early development, which in turn could also affect the composition of the GI microbiota. Consistent with these explanations, we observed a significant change in alpha diversity as the piglets grew from 2 to 10 days (Fig. 3).
To further take advantage of the taxonomic abundance data, we conducted PICRUSt analysis to identify functional features of the microbiota that potentially contribute to disease resistance (Table S2). We highlight a few observations from this analysis that could help explain how differences in microbiota composition may influence disease. For example, we observed that younger piglets had a higher relative percentage of microorganisms predicted to degrade glycosaminoglycan than did older piglets, and the younger piglets experienced C. difficile-associated enterocolitis while the older piglets did not. The ability to degrade glycosaminoglycans is associated with colitis and may be a contributor to disease severity (44). Similarly, steroid hormone biosynthesis capability was predicted to be decreased in older animals. These hormones, including glucocorticoids, are associated with physiological stress, such as the immunological stress of colitis (45,46). Sphingolipids are part of the host cell plasma membranes; their metabolism has been associated with intestinal inflammation and polyps in humans, and higher predicted levels of sphingolipids correlate with increased host epithelial cell damage in the younger piglets (47). Finally, we note that the predicted prevalence of the coding capability for bacterial toxins was higher in younger pigs, which appears to reflect the lower diversity with a higher proportion of potentially pathogenic Enterobacterales (data not shown).
The results presented here can also be considered in the light of improved strategies to bolster colonization resistance to C. difficile in young pigs. Because the incidence of CDI in piglets decreases with age, it stands to reason that manipulation of the piglets' GI microbiota to increase diversity and promote GI morphological changes resembling those of more mature animals could potentially reduce C. difficile colonization. Towards this, pre- and probiotics have been tested in postweaned animals. A study using both a Lactobacillus strain and a swine-specific Pediococcus showed improved average daily gain in both growing and finishing phases, as well as increased crypt depth and villus height in the jejunum of the probiotic-treated animals compared to controls (48). Lactobacillus fermentum, a swine-specific strain, was also shown to be protective against Escherichia coli infection, apparently by modulation of the immune system in newly weaned piglets, and also increased the weight and feed conversion of the piglets (49). In addition, nontoxigenic strains of C. difficile administered to young piglets prior to oral challenge with virulent C. difficile resulted in a lower prevalence of CDI in a controlled, experimental setting (50, 51). While preliminary studies using selected probiotics show promise in protecting preweaned piglets from disease, more studies are needed to assess the roles of other members of the gut microbiota in colonization and disease resistance. This can also include identifying autochthonous microorganisms present around 8 to 10 days of age that could prove useful in young animals to aid in the prevention of C. difficile colonization through competitive exclusion. In addition, rearing strategies have been shown to influence microbiota diversity, with increases in diversity observed in animals that were reared in isolation and fed milk replacer (19). Interestingly, dietary exposure to soil to mimic the outdoor environment also accelerated the acquisition of microbial diversity, including Prevotella (26), which correlated positively with increased disease resistance in the study reported here (Fig. 2b).
We note too that C. difficile has been isolated from healthy animals of different ages and stages of production (8), and DNA sequences matching C. difficile were recovered from several control animals in the current study. These control animals were toxin negative, showed no mesocolonic edema, and did not have histologic lesions or diarrhea, suggesting the piglets likely acquired C. difficile from the sows.
In conclusion, the microbial diversity of the cecal contents increased with the age of the piglets. The clustering of the piglets by age in the UniFrac analyses is consistent with the hypothesis that resistance to C. difficile disease in animals greater than 1 week of age can be explained by the increased diversity and complexity of the intestinal microbiota. Despite the high animal welfare and economic impact associated with CDI in neonate piglets, no commercial vaccine or approved antibiotic treatments are commercially available. The identification of bacterial species or groups of bacteria associated with the development of natural resistance in older pigs could be the key to the development of new alternatives to prevent and/or treat disease.
MATERIALS AND METHODS
Animals. Four pregnant sows vaccinated against E. coli and rotavirus (Merck Prosystems RCE) were obtained from a commercial source and farrowed in biosafety level 2 (BSL-2) large animal facilities. All animal experiments were conducted in accordance with policies of the Iowa State University Institutional Animal Care and Use Committee (IACUC). Two separate experiments were conducted, as summarized in Table 1. After birth, piglets were allowed to nurse ad libitum until challenged. At 2, 4, 6, 8, 10, 12, or 14 days of age, two piglets were randomly selected and inoculated with 10^3 to 10^6 heat-activated spores of C. difficile strain ISU 15454-1 using an intragastric tube. This strain belongs to ribotype 078, toxinotype V, and produces both TcdA and TcdB (22). Once challenged, piglets were housed in clean 18-gallon plastic tubs and fed milk replacer (Esbilac, 10 ml 3 times/day) for the remainder of the experiment. Uninoculated control animals either continued to be nursed by the sow (experiment 1) or were removed from the sow and fed milk replacer (experiment 2). At 72 h postinoculation, each pair of C. difficile-challenged piglets plus one unchallenged control piglet was euthanized and necropsied.
Necropsy. At necropsy, gross pathological changes were recorded and contents from the cecum were collected for bacterial culturing. Contents from the spiral colon were also collected for toxin analysis by enzyme-linked immunosorbent assay (ELISA) (experiments 1 and 2), as described previously (10), or by Vero cell assay (experiment 2) (10, 23). Cecal contents for 16S rRNA gene amplicon sequencing were frozen at −80°C until processing. In addition, sections of cecum and spiral colon were fixed in 10% formalin, processed for standard histological evaluation using hematoxylin and eosin staining, and scored by a veterinary pathologist (P.A.A.) blinded to the experimental design.
DNA isolation and library preparation. Total genomic DNA from piglet cecal contents, sow feces, and the sows' feed was extracted using the MoBio PowerSoil DNA isolation kit (MoBio Laboratories, Carlsbad, CA, USA). PCR amplification of the V4 variable region of the 16S rRNA gene using primers 515F and 806R and amplicon sequencing were performed on an Illumina MiSeq by the Biosciences Division Environmental Sample Preparation and Sequencing Facility (ESPSF) at Argonne National Laboratory (Lemont, IL).
Sequence analysis. Sequences were analyzed using QIIME (Quantitative Insights into Microbial Ecology) and R (R Project) (19,20). Sequences were first demultiplexed and quality filtered using the default parameters, apart from a minimum Phred quality score of 25. Operational taxonomic units (OTUs) were chosen using uclust and the closed reference OTU picking method in QIIME, with 95% similarity (52). Taxonomic assignments were chosen by using PyNAST and aligning to the Greengenes database (13_8) (53,54). Alpha diversity and UniFrac beta diversity, as well as the respective statistical analyses, were completed using QIIME (55). Taxonomic abundances were compared using a custom R script provided by the Institute for Genome Science, University of Maryland.
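For readers without a QIIME installation, the same diversity metrics can be approximated with scikit-bio. The sketch below uses a toy OTU table and an invented tree (metric names follow scikit-bio 0.5.x); it illustrates the calculations and is not the pipeline used in the study.

```python
# Sketch: alpha diversity (Shannon, observed OTUs, Faith's PD) and
# unweighted UniFrac on a toy OTU table with made-up counts and tree.
from io import StringIO
import numpy as np
from skbio import TreeNode
from skbio.diversity import alpha_diversity, beta_diversity

otu_ids = ["OTU1", "OTU2", "OTU3", "OTU4"]
counts = np.array([[10, 4, 0, 1],   # piglet A
                   [2,  6, 3, 5],   # piglet B
                   [0,  1, 9, 8]])  # piglet C
ids = ["A", "B", "C"]
tree = TreeNode.read(StringIO(
    "((OTU1:0.25,OTU2:0.5):0.25,(OTU3:0.75,OTU4:0.5):0.5)root;"))

shannon = alpha_diversity("shannon", counts, ids=ids)
observed = alpha_diversity("observed_otus", counts, ids=ids)
faith = alpha_diversity("faith_pd", counts, ids=ids, otu_ids=otu_ids, tree=tree)
unifrac = beta_diversity("unweighted_unifrac", counts, ids=ids,
                         otu_ids=otu_ids, tree=tree)
print(faith)
print("UniFrac A-B:", unifrac["A", "B"])
```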
PICRUSt analysis. Predictive metagenomic functionality was determined by analyzing 16S rRNA gene sequences using PICRUSt (Phylogenetic Investigation of Communities by Reconstruction of Unobserved States) and STAMP (Statistical Analysis of Taxonomic and Functional Profiles) (28,29). Because all but one of the piglets that showed evidence of disease were 6 days old or younger, piglets were grouped as younger (2 to 6 days) or older (8 to 14 days). Statistical analysis for two-group comparisons was completed using White's nonparametric t test with Benjamini-Hochberg false discovery rate (FDR) multiple-test corrections.
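The multiple-test correction step of this analysis can be sketched with statsmodels; the P values below are placeholders, not outputs from the study.

```python
# Sketch: Benjamini-Hochberg FDR correction over a set of per-pathway
# P values, mirroring the STAMP two-group comparison described above.
from statsmodels.stats.multitest import multipletests

raw_p = [0.007, 0.008, 0.010, 0.012, 0.017, 0.210, 0.640]
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
for p, q, sig in zip(raw_p, p_adj, reject):
    print(f"raw={p:.3f}  adjusted={q:.3f}  significant={sig}")
```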
Data availability. Data are publicly available in the NCBI SRA under accession number PRJNA730839.
ACKNOWLEDGMENTS
We acknowledge the valuable input and suggestions from Glenn Songer. We also appreciate the technical assistance of Chandra Tangudu and the animal handling of both Jerry Synder and Dirk Barrett.
This work was supported in part by a grant from the National Pork Board (NPB number …).

A.P. performed experiments, analyzed data, and contributed to the manuscript. P.A.A. performed experiments and analyzed data. K.K. performed experiments. N.A.C. conceived, designed, and performed experiments, analyzed data, and drafted the manuscript. G.J.P. interpreted data and contributed to the manuscript. C.W. and S.M. analyzed data and contributed to the manuscript.
We declare no conflicts of interest.
"year": 2021,
"sha1": "dd9297ac85e1618744bb58bcaa3a83d6689e4daf",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1128/spectrum.01243-21",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8db22391d81ae0e2c9aecec8f64dba9131c30697",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Development of narratives in Tamil-speaking preschool children: A task comparison study
‘Narrative’ can be simply defined as a spoken or written account of connected events or experiences. The present study records the development of microstructure elements of narratives in 200 typically developing Tamil-speaking children aged between three years and six years and eleven months. It then compares their narrative productivity across two elicitation contexts: story retelling (SR) and story generation (SG). The samples thus obtained are analyzed for three narrative microstructure parameters, namely total number of words (TNW) in the narrative, mean length of utterances (MLU) and the number of utterances. The results reveal an increasing trend in all three microstructure parameters across both contexts. All three parameters are found to be quantitatively high in SR than in SG. Variation in the performance in these narrative tasks has been explained with behavioural observations from literature, cognitive architecture and a working memory model. It was found that gender differences do not follow a uniform pattern across age groups and elicitation contexts. Since the study has generated normative data for microstructure parameters of narratives, the observations can be used to analyze language deviance and help plan the narrative intervention protocol for language therapy.
Introduction
'Narrative' is the earliest form of a monologic discourse used as a means to report, analyze and regulate daily activities (Ukrainetz, 2006). As a form of storytelling, narrative is an integral part of human tradition and culture, passed down through generations since historic times. Feagans and Short (1984) argue that narrative is one of the most critical oral language skills for school success. Narratives are useful in understanding the development of oral language and conceptual development and have a predictive function in assessing literacy and academic success in children (Stadler and Ward, 2006). Narrative skills are universal to all languages and cultural groups. Language-based anthropological studies, examining the role of narratives in a socio-cultural context, suggest the need to evaluate narratives in every language across all genres (Heath, 1986). There are often cultural differences in storytelling across language groups. Certain language groups exhibit a monologic style, while others incorporate conversational ways. Sethuraman and Smith (2010) conducted a cross-linguistic study analyzing verb arguments in a scene description task, using pictures and movies that depicted Tamil and English cultures, in Tamil- and English-speaking adults and children. They report that children speaking Tamil often leave out certain objects and verb arguments while describing scenes or pictures depicting the English culture.
The genres and style of storytelling vary across language groups. Thus, it is crucial to analyze and record the norms for every language to account for this variability (Johnson, 1995). Although extensive research on multiple parameters of narrative microstructures has been conducted for languages such as Finnish, Swedish, and English, the generalizability of these studies is questionable in clinical assessments (Pankratz et al., 2007). Variations are reported in the presentation of narrative microstructures across different languages, making it crucial to develop normative data in each language for the purpose of assessment and intervention planning (Justice et al., 2006; Rollins et al., 2000).
Narratives are culturally and linguistically sensitive in that the pattern of narrative organization can vary between languages. Narrative discourses occur in all societies and reflect the teller's culture; they also have certain universal characteristics (Ukrainetz, 2006). Researchers suggest that the narratives of children vary based on their culture and language (McCabe, 1997). These variations mark the need to establish language-specific norms for narrative measures of neurotypical children.
1.1. Narrative elicitation procedures (NEP): story generation versus story retelling

'Narratives' are complex organized verbal recounts that start from a highly contextualized environment and reach decontextualized generated events. The literature review below explains the narrative types, narrative elicitation procedures, and the microstructure measures of narratives.
The narrative skills of children originate from their recounting of daily life events. The literature proposes three major ways of eliciting narratives, namely personal narratives, story retelling (SR), and story generation (SG). Despite the variety of narrative forms, the way a story is elicited has a significant impact on its structure, productivity, and complexity (Duinmeijer et al., 2012). The two major narrative elicitation procedures used in the literature are SR and SG (O'Neill et al., 2004; Westerveld and Gillon, 2010b). Both procedures use fictionalized narratives as stimuli. Fictionalized narratives have been of interest in elicitation as they potentially reveal formal narrative performance as compared to informal conversational narratives (Hughes, 2001; Ukrainetz, 2006). Using structured fictionalized narratives in evaluation shows variability in performance across age groups (Ukrainetz, 2006). Allen et al. (1994) noted that children expressed a lot of content and action sequences in fictional stories. Terry et al. (2013) additionally found statistically significant variations in the microstructure measures of personal and fictional narratives. Thus, children's storytelling skills differ across narrative genres. Shiro (2003) argues that different genres of narratives develop at different paces. In personal narratives, the context would be restricted to the observer, which tends to make scoring and analysis difficult. Younger age groups might make personal narratives conversational to sustain the task. Fictional stories presented within a stimulative context would elicit context-sensitive narratives from them. Schneider and Dubé (2005) insist that there may be variability in the narrative performance of the child according to the presentation of the stimuli.
SR is one method to elicit a narrative that involves telling a child a story and having him/her retell the same story in his/her own words. It comprises the recollection of a story, where the topic, matter, and discourse length vary across individuals, as they must draw from their lexical and linguistic skills for SR. It is seen as the best predictor of language development delays in young children, as it reflects their capability to deduce and reconstruct a sequential narrative (Gazella and Stockman, 2003).
SG requires the narrator to develop a story schema in his/her own words. For a child to generate a story, he/she must produce the story sequence from scratch, with a baseline story schema, from their experiences upon seeing pictures or hearing auditory stems. The narrator must be original in developing his/her narrative, as first-time SG requires the interplay of both cognitive and linguistic skills.
SR and SG require linguistic (syntactic and semantic) and pragmatic (contextual usage) skills that are fluently interwoven together (Mäkinen et al., 2014; Wood et al., 2018; Wofford and Wood, 2019). The quality and length of an oral narrative depend on the elicitation procedure (Schneider and Watkins, 1996). Hesketh (2004) also found evidence supporting the assertion that retelling a previously heard story is easier than creating an original, novel one. This could be because retelling a story is a comprehension-based task whereas SG is a creative task (Hansen, 1978); however, SG better reflects narrative organization skills.
The quality and quantity of narratives are often organized and analyzed as two major components: (a) macrostructure components (qualitative), which describe the overall structure and content of the narrative, and (b) microstructure components (quantitative), which focus on language productivity and internal linguistic elements such as clauses, conjunctions, verb forms, and nouns. The microstructure of utterances reveals syntactic and semantic productivity, complexity, and the exactness of the words required to maintain cohesion. The microstructures of narratives are often calculated using the total number of words (TNW), mean length of utterances (MLU), number of different words (NDW), number of utterances, number of communication units or T-units, and type-token ratios (Reese et al., 2012). These microstructure measures are often used in language sample analysis and in other indices developed for assessing narratives. TNW signifies the length of the story and the use of vocabulary, and also reflects the children's overall verbal fluency (Leadholm and Miller, 1994). Within a narrative, the TNW and NDW distinguish children with high and low language abilities (Muñoz et al., 2003). MLU reveals the syntactic organization, as the typical number of words used to make an utterance. MLU is a good indicator of a child's language development (Ranalli, 2012). MLU, calculated in words or morphemes, indicates children's linguistic growth and helps monitor the grammatical complexity of their narrative performance (Gillam and Gillam, 2016; Muñoz et al., 2003). The number of utterances is a parameter used for narrative analysis, and it increases with age (Crookes, 1990; Hoffman, 2009). NDW and TNW are measures of lexical diversity in children's narrative production, whereas the number of utterances and MLU measure their syntactic complexity (Justice et al., 2006). A composite evaluation of these microstructure measures is often considered the best predictor of age-appropriate language development in children. These narrative productivity measures are used to distinguish children with language deficiency from neurotypical children. Even though microstructure measures tend to increase with age, a quantifiable increase across age should be profiled in order to evaluate the quality of children's narratives (Hoffman, 2009). Various narrative microstructure indices have been developed and standardized in English, as shown in Table 1.
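As a concrete illustration of the lexical-diversity measures named above, the sketch below computes TNW, NDW, and type-token ratio on an invented utterance list; whitespace tokenization is a simplification of real transcription practice.

```python
# Sketch: lexical-diversity measures on a toy transcript.
utterances = ["the boy saw a frog", "the frog jumped", "he ran to the pond"]

tokens = [w for u in utterances for w in u.split()]
tnw = len(tokens)        # total number of words
ndw = len(set(tokens))   # number of different words (types)
ttr = ndw / tnw          # type-token ratio
print(tnw, ndw, round(ttr, 2))
```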
Children's narrative performance varies based on the narrative elicitation procedure used; the differences and similarities are discussed below.
Mäkinen et al. (2018) followed up on the relationship of narrative and reading skills in neurotypical Finnish children in a three-year longitudinal study. Twenty children were tested on narrative retell and SG tasks twice, at the ages of five and eight. These children formed longer stories in the retelling task than in the SG task. Such differences notwithstanding, both procedures aim at accessing the highest language organization abilities.
DiSegna Merritt and Liles (1989) evaluated the narratives produced by language-impaired and non-impaired children aged nine years to eleven years and four months in both SR and SG tasks. In the retelling task, both groups produced longer narratives, more story components, and complete episodes. In the SG task, clause length was shorter, and episode completion was less frequent. They also found the SR task to be more compliant for clinical use. A large amount of detailing and deviation from the context of the stimuli in the SG task makes scoring the sample more difficult and less reliable. Given that SR is often regarded as an easier task than SG, the study suggested that both were effective gauges of narrative ability and activated a cognitive organization consistent with the story schema. Westerveld and Gillon (2008) studied the impact of narrative elicitation on the performance of children's oral language. A group of eleven children (aged seven years eleven months to nine years and three months) with reading disabilities and an age-matched control group of an equal number of children with age-appropriate reading skills constructed narratives in different contexts: SR, SG, and personal narratives. Microstructure measures of semantic diversity, verbal productivity, and morphosyntax were examined in the study. The findings revealed no significant interactions between the groups, implying that the children responded similarly to the elicitation contexts. The children produced longer narratives in SR than in SG tasks, and the results showed that SR was more reliable and yielding in eliciting narratives. Schneider and Dubé (2005) suggest that SR contexts are more effective than SG contexts in bringing about a longer and more extensive narrative sample from young children. Tonér (2019) compared SR and SG tasks among 431 typically developing Swedish children aged between three years and six years and four months. The comparison revealed that SG had more morphosyntactic correctness and syntactic complexity than SR. It also led to longer stories compared to the SR condition. However, SR was found to have more story content than SG did.
Tamil and children's narratives
Tamil is an ancient Dravidian language spoken in India and Sri Lanka. There are just two published works on the narratives of typically developing Tamil-speaking children: Priyadharshini (2017) and Ravichandran et al. (2020). Priyadharshini (2017) analyzed the development of story grammar elements (macrostructure) in the narratives of Tamil-speaking children between five and eight years, with 15 children in each age group. The study was carried out using the "Frog, where are you?" story, which was normed with the English-speaking population. Ravichandran et al. (2020) analyzed the syntactic and semantic diversity in self-narratives and SR among 30 Tamil-speaking children from the first and second grades, examining narrative development, gender differences, and task variation in the microstructure parameters TNW, MLU, NDW, and type-token ratio. Children tend to narrate as early as two years of age. The early development of narrative has a considerable influence on children's subsequent language and literacy development (Heilmann et al., 2010; Justice et al., 2006).
Studies on emerging narratives in typically developing Tamil-speaking children have not yet been carried out. The tasks and materials used in these two studies aimed at assessing different genres of narratives. The narrative parameters they evaluated varied between the studies, and the sample size of thirty in each study was inadequate to generalize the findings.
Venkatraman and Vijayarangam (2020) compared the microstructure measures NDW and MLU of typically developing Tamil-speaking children and verbal children with autism in the age range of six to eight years. The SR task they employed to elicit narratives reflected a reduced NDW and MLU in children with autism compared to those with typical development. Although the parameters reflect an inadequacy in narratives, no standard protocol or norms have been established for Tamil to quantify this inadequacy. Narrative measures that are time-efficient, simple, and easy to calculate, score, and interpret must be established for regular clinical evaluation (Hoffman, 2009). MLU, TNW, and the number of utterances are often used in regular language assessment too, so narrative assessment with these measures can be carried out time-efficiently. Since these measures are common to narrative and language assessment, they would be easy for clinicians to use.
The procedural influence on the elicitation of narratives has had mixed results. There is no consensus regarding appropriate procedures for eliciting children's narratives, because each study reported different outcome measures for the narratives used (Adani and Cepanec, 2019). The literature reports mixed findings on gender differences in narrative productivity, with some studies reporting no significant differences between males and females (Muñoz et al., 2003; Ravichandran et al., 2020; Safwat et al., 2013), while a few report that females outperform males (Kaderavek et al., 2004). An insight into developmental changes across narrative elicitation procedures would streamline the evaluation and intervention protocols for children with language disability. As established normative data for the microstructures of narratives in Tamil-speaking preschool children are lacking, as mentioned above, this study takes into account three empirical microstructure parameters that are regularly used in clinical language evaluation.
Objectives

1. To record the developmental trends in SR and SG tasks across four groups of Tamil-speaking children aged three years to six years and eleven months (Table 2).
2. To compare the effect of elicitation context on the microstructure elements of narratives across these age groups.
3. To find whether there are any gender differences in microstructure parameters in the SR and SG contexts.
Participants
The sample consisted of 200 typically developing children, aged three years to six years and eleven months, drawn from eight preschools and primary schools in Chennai, who used Tamil as their primary language. Participants were assessed for speech, language skills, and hearing ability. Children with speech delays, sensory difficulties, or late language development were screened out. The participants were recruited through convenience sampling, and demographic/personal data were collected for every participant. The children were classified into four age groups, with an identical number of boys and girls in each group, as shown in Table 2. The study was conducted according to the established ethical guidelines of Annamalai University, Chidambaram. Prior to participation, each participant's parents were given a thorough explanation of the study, and their informed consent was secured.
Material
The stimuli used for the SR and SG tasks were evaluated for their content by two linguists, five preschool teachers, and a counsellor. A pilot study on the familiarity of the stimuli for both tasks was conducted with ten children from the first two groups. The stimulus used for SR was selected from storyweaver.org, a website where stories are categorized age-wise. "My fish, no fish" was the story most familiar to the children and was used as the stimulus. The story had colourful pictures and Tamil text.
SG was conducted using "What is next (level 1)" from Creative Educational Aids (Appendix A). The original material had eight sets of picture-sequencing cards, with four cards in each set. Of the eight sets, five were used as test stimuli, two were used to demonstrate the task, and one was removed due to unfamiliarity. The lower age group cannot generate stories if there is too much structural complexity in the stimuli; therefore, the task was simplified based on the pilot study. The time allotted to narrate the stimuli selected for both SR and SG tasks was equal. The content of the two stimuli could not be equated, as there are no standard scripts for SG.

[Table 1 (fragment). Age ranges covered by the English narrative indices: 3 years to 6 years; 4 years to 9 years; 4 years to 12 years 8 months; 5 years to 12 years; 3 years to 7 years 11 months.]
Story elicitation
The study participants were instructed to look at the colourful pictures, pay close attention to the researcher's narration of the story, and repeat it when asked, after a two-minute break, while viewing the storybook. A sample story sequence was demonstrated to elicit SG. They were encouraged to generate stories after seeing the prearranged sequence of four cards. Speech samples for both tasks were audio- and video-recorded and transcribed verbatim. The recording duration for both tasks was approximately three minutes. Before beginning the tasks, a rapport was established with each participant. If the child had difficulty narrating during either task, a maximum of five neutral prompts was provided, such as afterwards, anything more, like, that is … After completing the task, participants were given a toffee as a reward.
Analysis and transcription
For the analysis of microstructural elements, all utterances were included, except the researcher's neutral prompts, mazes, false starts, and repeated utterances of the children. The transcriptions were first marked for utterances and then analysed for the constituent microstructural elements for both SR and SG tasks, namely MLU, TNW, and the number of utterances.
1. TNW was computed by counting the number of words in each sample after removing mazes, false starts, and repeated utterances (Justice et al., 2010).
2. The number of utterances was calculated by demarcating the utterances and counting them (see Crookes, 1990).
3. MLU was computed by dividing TNW by the number of utterances (Baixauli et al., 2016).
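Taken together, the three measures reduce to a few lines of code. The sketch below applies them to a toy transcript; the transliterated utterances are invented for illustration, and real samples would first be cleaned of prompts, mazes, false starts, and repetitions as described above.

```python
# Sketch: the three microstructure measures on a toy (already cleaned)
# transcript of transliterated Tamil utterances.
utterances = [
    "amma meen vaanginaanga",   # hypothetical utterance: "mother bought fish"
    "meen thanniyila pochu",
    "paiyan azhudhaan",
]

tnw = sum(len(u.split()) for u in utterances)  # 1. total number of words
n_utt = len(utterances)                        # 2. number of utterances
mlu = tnw / n_utt                              # 3. mean length of utterance (words)
print(tnw, n_utt, round(mlu, 2))
```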
A quarter of the sample was randomly selected to measure the inter-rater reliability of the parameters. Reliability measured using Cohen's kappa reflected 92% agreement between the two raters. The results were tabulated in MS Excel and grouped age-wise. The tabulated data were then analysed using SPSS (version 21) to examine the effect of the narrative context, developmental trends, and gender differences across the groups.
Developmental trends in the narrative context
The developmental trends in SR and SG of narratives in Tamil-speaking children between three years and six years and eleven months were analysed in terms of the microstructural components of narratives. Three parameters were assessed to measure language productivity, namely TNW, MLU, and the number of utterances. The age-wise descriptive data for the three parameters were calculated across the four age groups. The mean and standard deviation of the parameters are described in Table 3. The results revealed a significant increase in all three parameters of the microstructure elements in both the SR and SG contexts.
ANOVA was used to analyse the development of narratives across the four age groups for the three parameters. Owing to significant differences in the ANOVA, a post-hoc test for multiple comparisons was used to assess the differences between the groups on the parameters TNW, MLU, and number of utterances in SR and SG (Table 5). The Bonferroni post-hoc pairwise comparison indicated a significant difference between the groups in TNW, with p < 0.001, in both SR and SG. The results revealed an increasing trend in this parameter with age in both the SR and SG contexts. MLU also reflected a significant difference across the four age groups, with p < 0.001, in both contexts. MLU showed a steady increase across the groups in both SR and SG. There was a significant difference in the number of utterances in both elicitation contexts across the groups, with p < 0.001. However, no significant difference was noticed between three- and five-year-old children in SR, as the p-value (1.00) was greater than 0.05.
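A sketch of this analysis pipeline (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) is given below with placeholder TNW vectors; it mirrors, but is not, the SPSS analysis used in the study.

```python
# Sketch: one-way ANOVA across the four age groups, then pairwise t-tests
# with Bonferroni correction. The per-group TNW values are invented.
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind
from statsmodels.stats.multitest import multipletests

groups = {
    "3y": [18, 22, 20, 25, 19],
    "4y": [30, 28, 35, 32, 27],
    "5y": [40, 45, 38, 42, 44],
    "6y": [55, 60, 52, 58, 57],
}

F, p = f_oneway(*groups.values())
print(f"ANOVA: F = {F:.2f}, p = {p:.4g}")

pairs = list(combinations(groups, 2))
raw = [ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
_, p_bonf, _, _ = multipletests(raw, method="bonferroni")
for (a, b), q in zip(pairs, p_bonf):
    print(f"{a} vs {b}: corrected p = {q:.4g}")
```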
Story retelling versus story generation
The variability in narrative performance in SR and SG was assessed across the four groups. An independent-samples t-test was conducted to compare performance between the two elicitation contexts. The independent-samples t-test (Table 6) revealed more productivity in the SR context than in the SG context on all three parameters of the microstructural elements.
Gender differences
Gender differences in narrative performance for both elicitation tasks and all microstructural parameters were analysed for each age group. The Mann-Whitney U-test was employed to compare girls' and boys' performances in each group (Table 7).
The Mann-Whitney U-test indicated that three-year-old boys and girls showed no difference in story length, as TNW was not significantly different in SR (U = 259, p = .290) or in SG (U = 301.5, p = .826). The U score and significance level obtained for MLU in SR were U = 312.5 with p = 1.000, while in SG they were U = 225 with p = .032. The number of utterances yielded U = 269.5 with p = .392 in SR and U = 196.5 with p = .014 in SG. SR did not reveal a significant gender difference for any of the three parameters. MLU and the number of utterances in SG reflected that girls performed better than boys.
TNW of the four-year-old girls reflected a better narrative performance, with U = 210.5 and p = 0.046 in SR. Four-year-old girls produced more words than four-year-old boys in SR.
Compared to five-year-old boys, five-year-old girls showed a higher MLU, with a score of U = 531 and a p-value of 0.017, and number of utterances, with a score of U = 195 and a p-value of 0.017, in the SR.
Six-year-old girls displayed increased scores on MLU (U = 183.5, p-value = .003) and number of utterances (U = 61.000, p-value = 0.029) in the SR context compared to six-year-old boys.
Developmental trends
The increase in TNW with an increase in age was similar to that obtained by Khan et al. (2016), who examined the narratives of 386 English-speaking children between three and six years. Tilstra and McMaster (2007) measured the TNW produced by 45 kindergarteners, first-graders, and third-graders, and found that TNW increased significantly with age.
The increase in TNW could be attributed to the children's ability to make hierarchical relationships between events in a complex narrative production as age increases (Heilmann et al., 2010). TNW signifies the story length, which becomes longer and richer as children learn to evaluate their own stories in their verbal performance (Muñoz et al., 2003). The richness in the TNW is related to the acquisition of new vocabulary through repeated exposure to narrative forms through the preschool and young school-age years (Heilmann et al., 2010). The gradual increase in TNW is coherent with typical language development. The rate of acquisition of vocabulary is higher at younger ages, peaking at school-going ages (Nóro and Mota, 2019). The rapid increase in the gaining of new vocabulary is reflected in the narrative ability of children across both elicitation tasks.
The ability to create a longer chain of events and recall the events coherently has implications on story length. As a result, this microstructure metric could reveal information about a child's overall narrative productivity. As a measure of story length, TNW appears to reflect language output and improves with both language skill and chronological age (Justice et al., 2006). MLU has been a valuable measure in assessing language development (Rice et al., 2010). It is an indicator of the syntactic complexity of children's utterances, showing a steady increase with age and thereby reflecting age-related syntactic complexity. The current study reiterates the existing findings that MLU is an index of linguistic maturity and grammatical development (Ranalli, 2012). The semantic and syntactic role of MLU, as evaluated by Brown (1973), clearly denotes that when children reach an MLU of more than three morphemes, they tend to produce coordinated sentences. Children predominantly use content words at two years of age, and from three to five years old, they gradually add functional and grammatically complex words to form longer sentences (Nóro and Mota, 2019). Almost every specific component of linguistic knowledge that children acquire lengthens their uttered sentences. Therefore, the acquisition of words or vocabulary is a requisite and a critical aspect of syntactic development.
The current study substantiates this assertion, as TNW and MLU both show concomitant increases across the four age groups in both narrative elicitation tasks (Nóro and Mota, 2019). The number of words used in a sentence also signifies the children's vocabulary and its access from semantic memory. Ravichandran et al. (2020) reported similar trends in six- to eight-year-old, neuro-typical urban Tamil-speaking children in self-narration and SR contexts. The number of utterances as an index of language productivity showed a gradual increase across the four age groups in this study. The outcomes of this work reinforce the observations of Muñoz et al. (2003), which reveal that three-year-olds can create two or three events in a narrative, four-year-olds can create distinct sentences connected to the story, and five-year-olds can create interrelated sentences. Children's ability to use subordinate clauses develops from their preschool years and continues throughout their school years (Heilmann et al., 2010).
However, there was no significant difference in the number of utterances between the three-year-old and five-year-old groups in the SR context. Children depend on their vocabulary skills before they acquire complex syntax to organize narrative production. The acquisition of vocabulary and its use in narrative production is evident from the steady increase in the parameters of TNW and MLU with age. However, the number of utterances, a parameter related to syntactic development, could reflect an overlap in children's developing narratives (Heilmann et al., 2010). In terms of semantic and syntactic patterns of narratives, there is a range of overlap between grades, and a similar pattern of narration may be noticed between children of different grades (Johnson, 1995). The narrative turns sophisticated and gains complexity from five years of age. Children at this age describe and express almost 73% of story structures in their narratives (Hedberg and Stoel-Gammon, 1986). The cognitive load of matching the adult's story structures in a retell might restrict the number of utterances. This variation was not noted in the SG task, as it reflects the genuine narrative skills of the children. As the complexity of the macrostructures increases, there is a tendency for the microstructure elements to decline in quantity; this trade-off mentioned in the literature was observed in the present study (Justice et al., 2006). However, six-year-olds produced more utterances than five-year-olds, which reflects the sophistication in narrative skills acquired by this age.
The developmental trends tend to be evident in both tasks, even though they tap two different aspects of narratives: SR is more of a comprehension process, while SG is an expressive process. The developmental change could be due to factors such as familiarity with the content, experiential knowledge of elements in the story, story complexity, and the interpretative skills of children. Familiarity with the content is better in the older age groups than in the younger ones, which is consistent with the developmental changes observed in this study. The complexity of both the generated and the retold story could also be attributed to these changes. The length and number of episodes added to a story impact the retelling and generation of the narrative. The younger age groups show a limited capacity to handle a chain of events in a story. The comprehension skills of a child are important in retelling the story. Preschoolers are not mature enough to interpret and name all the elements of the story, while the older age groups are sensitive to story settings and the personal motivations of the characters in the story (Berman, 1995).
Story retelling versus story generation
Language productivity is observed to be higher in SR than in SG. This finding is coherent with studies cited in the literature (Mäkinen et al., 2014; Westerveld and Gillon, 2010a).
All three parameters of narratives used in the study reflected more productivity in SR than in SG. This difference in performance between the two tasks could be attributed to the conceptual development that occurs due to socialization, which in turn accelerates the child's internalization from a Vygotskian view (Schneider and Watkins, 1996). The narrator's pre-modelled narration creates an internalization of the story schema, thereby increasing productivity in the SR context. Generating a story from scratch, from a picture or from an auditory stimulus, is a difficult and demanding task (Westerveld et al., 2004). Peterson and McCabe (1994) argue that children tend to be confused and have difficulty finding words as they create stories based on picture stimuli.
The narrative task seems to employ the integration of cognition and memory in the most logical order. Thus, the performance difference in SR and SG has to be explained from the cognitive and memory correlates of language alongside the behavioural understanding. The SR task is often thought to be a comprehension process, as a similar model of the story is given to the children. Retelling is a top-down process that occurs by recognizing and matching narrative patterns. In contrast, generation happens by a bottom-up process of evaluating the received sensory input from the stimuli and framing it in a story schema (Anderson, 2015).
The bucket theory argues for performance trade-offs across distinct language tasks and explains the variance in performance across the two narrative elicitation tasks based on cognitive load (Crystal, 1987). When retelling a story, children find it easier to use the structural support provided by the narrator's model, which is evident from the improved performance in all three microstructure parameters when compared to the generation task. The need to create and plan a fictional story in the SG task may place a greater cognitive strain on children than the SR task. As the complexity of the language task increases, there is a reduction in the microstructural parameters of narratives in children.
León (2016) also explained a cognitive architecture for narratives and considered the term narrative memory to be a subset of episodic memory and semantic memory. Narrative units are stored as chunks and retrieved from narrative memory, which reflects the integration of episodic and semantic memory. The ability to chunk information, which increases with age, could be the reason for the developmental increase in narrative parameters, reflecting the gradual change in observed performance on the narrative tasks.
Baddeley's model of working memory could be applied to explain the performance difference in narrative tasks. The primary components of the model are a phonological loop that stores verbal information from ongoing speech, a visuospatial sketch pad that processes visual and spatial information, an episodic buffer that corresponds to the sequential organization of events, and a central executive structure that collates the information between these components and long-term memory (Figure 1). In SR, the narrator verbalizes the storyline and describes the picture sequence to the children, who are later asked to describe the picture sequence themselves. In contrast, the SG task is elicited by the mere presentation of visual stimuli, such as the picture sequence used in the present study. It can therefore be assumed that the retelling task activates both the visuospatial sketch pad and the phonological loop, as it provides auditory-verbal input. However, SG, which exclusively involves the presentation of visual stimuli, cannot tap into the phonological loop directly. Hence, the SG task could activate only the visuospatial sketch pad component to process the visual stimuli. The simultaneous activation of these two components could be the reason for improved performance in SR compared with SG. The Baddeley (2000) model of working memory also suggests that a task with dual-modality stimulation tends to be more efficient than a single-modality task. When a narrative elicitation task has a dual-modality stimulus presentation, the narratives tend to be dense, as seen in the SR task.
The use of the auditory-verbal mode of stimuli would be crucial while assessing young children, as they might apply the cues of one modality to other modalities to complete the task. SG, in contrast, involves the complexity of processing unisensory stimuli and reflects the children's genuine ability to produce a self-made narrative.
The SG task gets more sophisticated from the age of six years onwards. Children by this age can retell and comprehend canonical stories well. Children's increased complex cognitive capability enables them to express and comprehend more complex story material (Miles and Chapman, 2002). The story schema used to construct narratives seems to become more comprehensive from six years of age.
The present study tries to understand the difference in performance between narrative elicitation tasks based on the intrinsic cognitive and memory processing that happens on exposure to narrative stimuli. The results highlight the expected performance variation in the elicitation procedure used during the narrative assessment of children with language deficiency and during the intervention process for language therapy. The results suggest that while assessing pre-schoolers, the choice of eliciting narratives should be an SR condition. However, the complexity of assessing a narrative could be augmented by an SG task for children older than five years to find their ability to construct a narrative (Hoffman, 2009; Pavelko and Owens, 2017; Shiro, 2003). This suggestion is made with the notion that comprehension precedes expression; since retelling is a comprehension-based task, younger children would be able to perform narratives, thereby avoiding underestimation of their narrative skill.
Gender difference in narratives
Gender differences were not uniformly observed across the parameters and elicitation tasks in this study. Three-year-olds exhibited differences in MLU and the number of utterances in the SG task. Four-year-olds showed differences in TNW in the SR task, while five- and six-year-olds showed differences in MLU and the number of utterances in the SR task. The Mann-Whitney U-test shows better performance by girls in certain parameters and contexts as compared to boys.
The early onset of clauses in the spontaneous speech of girl children could explain the differences in MLU and the number of utterances in three-year-olds on SG and SR (Adani and Cepanec, 2019). In the initial few years of their life, girls' lexical and grammatical development tends to be more rapid. Boys produce word combinations three months later than girls, according to Adani and Cepanec (2019), which can also explain the difference in MLU and the number of utterances between boys and girls. The gender differences in certain parameters could be due to the early acquisition of language, innate rapid vocabulary acquisition, and its presentation in social communication contexts possessed by girl children (Adani and Cepanec, 2019). The literature suggests that not only gender but also variables such as age and the nature of the task influence linguistic performance (Justice et al., 2006). Ravichandran et al. (2020) observed no significant gender difference in the narrative performance of Tamil-speaking children in the age range of six to eight years in the microstructure parameters evaluated. Also, the typical development of narratives in Arabic-speaking European children between two and six years revealed no significant gender differences (Safwat et al., 2013). However, the outcomes of the study cannot be generalized as a gender difference due to the inconsistency in its presentation across age groups and elicitation tasks.
Conclusion
The present study aimed at identifying the developmental trends, the effect of narrative elicitation context, and the effect of gender on the narratives of Tamil-speaking children. Although there are many microstructure parameters, this study focused on TNW, MLU, and the number of utterances because these conventional measures can be calculated by hand. These measures make narrative assessment convenient, as the parameters are regularly used in clinical language assessment. These parameters measure the semantic and syntactic complexity of the narrative (Justice et al., 2006).
Although there are several language sample analysis methods, the practical problem with their clinical usage is time constraints and the multiple parameters required for calculation. An age-wise criterion-referenced measure developed with conventional language assessment measures would solve the time-constraint issue faced during a narrative assessment (Pavelko and Owens, 2017). These metrics could help us understand the baseline narrative skill of a child with a language disorder and set goals during language therapy. As narrative skill requires cognition, memory, and language, if a child does not show any progress in these metrics even after continuous narrative intervention, it would direct the clinician to evaluate detailed cognitive and memory skills to address the language inadequacy.
Several studies show a similarity in the acquisition of vocabulary in children across different language groups. Although there are similarities in the acquisition pattern of these narrative micro measures, the quantitative measures vary across languages and tasks (Shiro, 2003). Therefore, it is important to develop normative data for every language. The literature comparing narratives of neurotypical children with those of children with autism, ADHD, specific language impairment, Down's syndrome, and reading disability consistently reports a quantitative decrease in microstructural measures like TNW, MLU, and number of utterances (Baixauli et al., 2016; Feagans and Appelbaum, 1986). These studies also emphasise performance variation in SR and SG tasks. Results from this study also support the finding that SR is more productive than SG. The interplay of working memory and cognitive abilities alongside linguistic abilities has been noted and explained as the reason for differences in narrative performance between typically developing children and those with language disabilities.
This study also has implications for language therapy for children with language disorders. Even after they start speaking, children with language impairment tend to exhibit inadequate narrative skills. The quality of the narratives they produce can be evaluated, and goals can be framed to improve the inadequacy in their language. Gender differences could not be generalized, as there are inconsistencies in their presentation across age groups, parameters, and contexts. Narrative analysis in the literature shows that language sample analysis is time-consuming and often not practised in a clinical scenario (Pavelko and Owens, 2017). Although several indices are used to measure narrative productivity in typically developing children in various languages, there are no such data for Tamil.
This study addresses the prime need to establish normative data for typically developing Tamil-speaking children. It considered conventional measures common to language evaluation and microstructures of narratives to make the data clinically useful. These measures reflect a quantitative difference in typical narrative development with age and progression in language skills. The current study helps to identify age-appropriate narrative behaviours in Tamil-speaking children and provides directives to avoid incorrect attributions of the developmental process. These data would help in setting various criteria during language therapy and also in monitoring progress. Further studies on other semantic and syntactic categories, such as nouns, pronouns, tenses, and adjectives, and their distributions would help relate the development of the internal construct of narratives in children.
Declarations
Author contribution statement

Krupa Venkatraman: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
V. Thiruvalluvan: Contributed reagents, materials, analysis tools or data.
Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Data availability statement
Data will be made available on request. | 2021-08-08T05:25:13.502Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "ec4f88de2012963e5d6b65af0255ff7ed88e6c88",
"oa_license": "CCBY",
"oa_url": "http://www.cell.com/article/S2405844021017448/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ec4f88de2012963e5d6b65af0255ff7ed88e6c88",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216109402 | pes2o/s2orc | v3-fos-license | Multiscale Molecular Modeling in G Protein-Coupled Receptor (GPCR)-Ligand Studies.
G protein-coupled receptors (GPCRs) are major drug targets due to their ability to facilitate signal transduction across cell membranes, a process that is vital for many physiological functions. The development of computational technology provides modern tools that permit accurate studies of the structures and properties of large chemical systems, such as enzymes and GPCRs, at the molecular level. The advent of multiscale molecular modeling permits the implementation of multiple levels of theory on a system of interest, for instance, assigning chemically relevant regions to a high quantum mechanics (QM) level of theory while treating the rest of the system using a classical force field (molecular mechanics (MM) potential). Multiscale QM/MM molecular modeling has far-reaching applications in the rational design of GPCR drugs/ligands by affording precise ligand binding configurations through the consideration of conformational plasticity. This enables the identification of key binding site residues that could be targeted to manipulate GPCR function. This review will focus on recent applications of multiscale QM/MM molecular simulations in GPCR studies that could boost the efficiency of future structure-based drug design (SBDD) strategies.
Introduction
G protein-coupled receptors (GPCRs) are known as the largest family of human membrane proteins and play crucial roles in many biological processes such as vision, sensing, and neurotransmission [1][2][3][4][5][6][7][8]. Signal transmission through GPCRs is initiated by the binding of extracellular ligands, including drugs, hormones, and other stimuli. The structural dynamics of GPCRs as a consequence of ligand binding is considerably complex and contributes to their physiological functions. At the orthosteric binding site, GPCR ligands can be roughly divided into agonists and antagonists, in which the former activate GPCR activity, whereas the latter act as blockers. Besides this, GPCR ligands can also bind to an allosteric site and indirectly affect the agonistic activity of GPCRs. Therefore, examining the ligand-receptor interactions that are vital in effectuating desirable GPCR functions has been the mainstream focus of most GPCR studies. To date, GPCRs are the target of more than 30% of approved drugs [9,10]. The understanding of the effects of ligands on GPCR properties has become the main task in rational drug and ligand design. The importance of GPCRs as viable drug targets was portrayed by the conferment of the Nobel Prize in Chemistry to Lefkowitz and Kobilka in 2012 for their groundbreaking discoveries on GPCRs [11]. The increase in structural data and the exploration of GPCR dynamics revealed the flexibility of GPCR binding pockets as well as the tendency of GPCRs to adopt distinct conformations at different states. Therefore, computational simulation could serve as a complementary tool in the discovery and design of GPCR ligands with desirable effects by permitting the scrutiny of GPCR dynamics.
Molecular Docking Development using QM/MM Approach
The docking approach is a promising tool that is commonly utilized in SBDD efforts as a means to predict the binding conformation of small molecules in the binding sites of target proteins. A number of docking software packages are available, for example, AutoDock [20], GOLD [21], GLIDE [22][23][24], and SwissDock [25], thus enabling facile determination of potential binding poses of ligands in proteins. The principle of docking simulations is based on the "lock-and-key" hypothesis, wherein the protein (host) is the lock while the ligand (guest) is the key, hence implying the specificity of a ligand to bind to a certain protein. During docking simulations, the orientation of the guest(s) in the host is optimized, and the accuracy of this process depends on two major components, namely the searching algorithm and the scoring function [26]. Additionally, consideration of protein and ligand flexibility has been shown to play a crucial role in improving the accuracy of predicting ligand binding affinity to the target protein.
Thus, the efficiency of the docking method can be improved by including protein flexibility using induced-fit docking (IFD) or by utilizing more than one host structure acquired from a protein ensemble generated through MD simulations [27][28][29][30]. Recently, the QM/MM method was employed to improve the quality of docking simulations. The advantage of this approach is that the flexible protein environment is considered during the QM/MM calculation through simple geometry optimization of
The implementation of QM/MM into docking protocols has been developed by many research groups to improve the accuracy of structure refinement and routine analysis in docking studies. The binding conformation of the ligand within the binding pocket of the olfactory receptor MOR244-3 was studied [31]. The ligand-binding site contains a Cu(I) ion, which is responsible for the binding of the organosulfur odorant. In that study, the ligand-protein and ligand-Cu interactions were well characterized by the QM/MM description, in which the QM region covered all important residues in the binding pocket. The calculated results are consistent with mutagenesis studies of receptor activation, which showed that the binding site consists of the Cu ion coordinating with His105, Cys109, and Asn202. Additional analyses performed using various ligands revealed that the thioether group is a significant part of the ligand-binding mechanism. The obtained results could serve as a case study for other investigations of mammalian olfaction. Recently, the activation of the human odorant receptors OR5AN1 and OR1A1 was studied to compare the calculated binding energies of (R)-muscone and other related compounds [32]. The theoretical results are in good agreement with the experimental results that indicate the preference for the (R)- over the (S)-enantiomer. Structural observation revealed that the ligand is stabilized by forming a hydrogen bond with Tyr260 and hydrophobic interactions with surrounding aromatic residues. This valuable finding may lead to the instructive development of quantitative structure-activity relationship (QSAR) models. QM/MM simulation has also been utilized to improve the quality of the docking results for the human dopamine D3 receptor (D3R), which has been identified as an antipsychotic drug target for schizophrenia treatment [33,34]. The well-known atypical antipsychotic (AAP) drugs include risperidone, aripiprazole, ziprasidone, clozapine, olanzapine, and quetiapine. All of these have been prescribed to treat various mental conditions [35]. The QM/MM minimization was performed on the selected docking poses. Only the ligand (haloperidol) was placed in the QM region, while the rest of the system was treated as the MM region. The accuracy of the interaction energy was shown to depend on the radius of the binding site included in the QM region during the calculation, owing to the long-range interactions of distant charged residues included in the QM region. The interaction energy was calculated as −170.1 kcal/mol, which was larger than that from the other two classical methods used (−56.3 kcal/mol for classical mechanics minimization of all hydrogen atoms and the haloperidol molecule, and −137.6 kcal/mol for minimization of hydrogen atoms only). This indicated that the QM/MM refinement converged to a more stable conformation than the classical minimization techniques. The combination of docking and QM/MM calculation revealed the important roles of the surrounding amino acid residues in the binding pocket. Moreover, the hydroxyl group of haloperidol was identified as a major site that leads to stronger binding to dopamine receptors.
In 2005, QM/MM simulation was incorporated into the docking algorithm, whereby the fixed charges of the ligand assigned by MM force fields were replaced by partial charges fitted to the electrostatic potential of the ligand derived in the presence of the protein environment during the QM/MM calculation. Here, the ligand was the only molecule assigned to the QM region, while the rest of the system was described using the MM potential [36]. Cho et al. found that the use of polarized charges plays a significant role in improving the prediction of the ligand binding mode, and this led to a new promising docking protocol for lead optimization in drug discovery. A subsequent study on metalloproteins suggested that the extension of the QM region to include metal ion(s) along with coordinated protein residues is important and leads to more reliable binding poses [37]. Due to the success of the incorporation of QM/MM into docking simulation, its applications in structure-based studies, including drug design, virtual screening, and lead optimization, have been investigated [38][39][40]. Current assessments of GPCR docking simulations without QM/MM showed a success rate of over 70%. Docking error was evident especially for the docking of ZM241385 and XAC into the adenosine A2A receptor [41,42]. In 2016, Kim and Cho incorporated QM and the solvation effect into the docking simulation of GPCRs to improve the prediction accuracy. They proposed a new docking protocol that replaced the fixed force field charges of the ligand by partial charges calculated using QM/MM calculations with an extended QM region. This protocol was also used in re-docking simulations. The QM region used in the study included the ligand and surrounding amino acid residues within 5 Å of the ligand. The solvation effect was taken into account by solving the Poisson-Boltzmann (PB) equation. Among a test set of 40 GPCR-ligand complexes, QM/MM docking improved the success rate to 90% without the solvation effect, which is better than the docking results from Glide with standard precision (Glide SP) and Glide with the solvation effect included. The improvements in docking poses are shown in Figure 3. A possible issue in failed cases is a ligand containing solvent-exposed part(s). Therefore, integration of the solvent effect into the QM/MM docking protocol using an implicit solvent model was proposed. It demonstrated an excellent improvement, with a success rate of 100%, portraying the importance of charge models in improving docking accuracy [43].
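The QM-region selection described here (the ligand plus residues within 5 Å of it) can be illustrated with a short sketch. MDAnalysis, the input file name, and the ligand residue name are assumptions of this example, not the setup the authors actually used:

```python
# Conceptual sketch of QM/MM partitioning by a 5 Å distance cutoff:
# the QM region is the ligand plus every residue with at least one
# atom within 5 Å of any ligand atom.
import MDAnalysis as mda

u = mda.Universe("gpcr_ligand_complex.pdb")  # hypothetical input structure
ligand = u.select_atoms("resname LIG")       # assumed ligand residue name

# Expand to whole residues around the ligand, then add the ligand itself.
qm_region = u.select_atoms("byres (around 5.0 group ligand)", ligand=ligand) + ligand
mm_region = u.atoms - qm_region              # everything else stays at MM level

print(f"QM atoms: {qm_region.n_atoms}, MM atoms: {mm_region.n_atoms}")
```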
Class A Rhodopsin Photoactivity Investigation
Class A rhodopsin receptors are responsible for many physiological functions, particularly light-sensitive responses [6]. The biological activity of rhodopsin is initiated by light (photon energy). As a result of photoisomerization, it is possible to use rhodopsin as an energy storage material [44]. Therefore, photochemical events are the main topic of interest in most studies related to this receptor. A complete picture of the photochemical reactions could be achieved computationally. A suitable computational method for rhodopsin photoactivity investigations is the QM/MM method, which has been applied to understand structure, spectral tuning, photoisomerization, and mutations. Photoexcitation calculations demand high computational resources. Thus, the retinal chromophore that is covalently bound to activated rhodopsin has been studied using many small models of the ligand to understand the rapid photoisomerization process [45,46]. Even though the gas-phase calculated excitation energies are in good agreement with experiment, the effect of the protein on the photochemical reaction was not explained using these models, especially the steric effects on the β-ionone ring.
Methods aimed at understanding the effect of the protein environment on the photochemical process occurring during rhodopsin activation have been developed. The availability of high-resolution structural data accelerates theoretical studies involving the structure-function relationship of rhodopsin. The photoisomerization of 11-cis rhodopsin to all-trans bathorhodopsin is one of the most attractive properties that has been widely investigated. A variety of hybrid methods have been used, ranging from simple to complicated QM calculations. The energy difference between the minimum energies of rhodopsin and bathorhodopsin yielded an energy storage of 34.1 kcal/mol, as calculated using the QM/MM method at the B3LYP/6-31G*:AMBER level of theory. The result is in excellent agreement with experimental data [47][48][49]. The energy decomposition analysis revealed that the large energy storage is due to the electronic interaction of rhodopsin. The rotation of the C11-C12 dihedral angle from −11° in 11-cis rhodopsin to −161° in all-trans bathorhodopsin was driven by the steric interaction between Ala117 and the polyene chain at the C13 position. This steric interaction hindered the rotation of the C11-C12 dihedral angle toward positive angles, an occurrence that could not be observed in the gas-phase model. This study indicated that Glu113 may act as a counterion. Moreover, the authors suggested that the salt bridge between the NH of the Schiff base linkage and Glu113 may be an important factor that influenced the electrostatic contribution of the protein to the total energy storage. The polarized bond at the Schiff linkage of bathorhodopsin shifted away from the negative site of Glu113 as compared to rhodopsin. The electrostatic contribution analysis of nearby residues in the binding pocket also provided insights into individual interactions, revealing that Ala117, Ser186, and a water molecule may stabilize bathorhodopsin relative to rhodopsin. The electronic-excitation energy estimation was also improved due to the integration of the electrostatic contribution of the protein environment during energy calculations.
With the increase in the number of available experimental GPCR structures, subsequent theoretical studies on rhodopsin have been in the spotlight [50][51][52][53]. In 2010, the structure and properties of squid rhodopsin were investigated. Similar to bovine rhodopsin, it contains 11-cis retinal covalently bonded to Lys305. However, Glu113 in bovine rhodopsin is replaced by a group consisting of Asn87, Tyr111, Glu180, and a water molecule. At that time, the positions of internal water molecules could not be determined by X-ray crystallographic studies. Therefore, the number and positions of internal water molecules were verified by QM/MM calculation. It was found that the calculated structure of two additional water molecules near the Schiff base region is in good agreement with the X-ray structure. The absorption wavelength of the retinal chromophore blue-shifted by around 120 nm when protein polarizability was accounted for during the calculation. The effect of particular residues within 4 Å of the retinal polyene chain (34 amino acid residues) on the photoactivity of 11-cis rhodopsin was calculated by turning off the charges of these residues, one at a time. Among these residues, Glu180 blue-shifted the absorption wavelength by around 100 nm and was identified as the main counterion in squid rhodopsin. The authors suggested that even though Glu180 is located further away from the retinal chromophore compared to Glu113 in bovine rhodopsin, the charge stabilization engendered by Glu180 still has a significant effect on the optical properties of squid rhodopsin.
The QM/MM calculation of class A rhodopsin GPCRs also provided a new perspective on retinitis pigmentosa, a disease involving progressive retinal degeneration [54,55]. Rhodopsin mutations have been identified as a major cause of this disease [56]. Therefore, many mutagenesis studies have been conducted to determine key residues that may contribute to the development of retinitis pigmentosa. However, the mechanisms and causes of mutation are not clear. Hernández-Rodríguez et al. studied two mutated human rhodopsins (S186W and M207R) and compared the mutated models to the wild type. The protein models were solvated in water and a phosphatidylcholine (POPC) lipid bilayer. A combination of various computational methods, namely MD simulations, density functional theory (DFT), and QM/MM, was applied. The results unveiled that a less stable counterion region could impair the whole protein in the mutated models. Moreover, the strong blue-shift resulting from the mutations leads to excess energy that could yield side reactions. The results of this study could be utilized to support the rational development of medical treatments.
Besides the effect of the protein environment, the structure of the retinal ligand itself also plays an important role in photoisomerization. The cis-trans isomerization of rhodopsin and isorhodopsin was studied using a combination of QM/MM and MD simulations [57]. Isorhodopsin is a rhodopsin analog that has a 9-cis retinal chromophore instead of an 11-cis retinal chromophore. MD simulations suggested that isomerization is a fast and facile event in rhodopsin, while being a much more complicated phenomenon in isorhodopsin. The 9-cis position in the retinal ligand of isorhodopsin creates a steric hindrance within the narrow space inside the opsin, thus affording byproducts. QM/MM calculations simulating the photoactivity of both systems showed that isorhodopsin photoisomerization gave rise to alternative products such as the 9,11-di-cis isomer. This is contrary to the straightforward bathorhodopsin-only pathway in rhodopsin isomerization; therefore, rhodopsin is preferred in nature. According to the simulations, the protein environment, counterion, and chromophore structure are key factors that govern the photoactivity of rhodopsin. Incorporation of QM/MM simulations would broaden the understanding of diseases related to particular states of rhodopsin photoactivation. The obtained knowledge can be utilized in the design of drugs that aim to stabilize rhodopsin against degradation [58].
Currently, many complicated simulations are accessible. As mentioned above, the function of rhodopsin depends on many factors, and understanding the protein-ligand interactions of rhodopsin is vital for the rational design of novel ligands and biomimicking molecules. The automatic rhodopsin modeling (ARM) method was proposed to study and predict the optical properties of class A rhodopsin systems [59]. The protocol of this theoretical tool is as follows: (i) chromophore cavity definition, (ii) assignment of the protonation states of amino acid residues, (iii) counterion placement, and (iv) appropriate generation of mutated residue(s) for further parallel studies. Based on their benchmark test set, the computed maximum absorption wavelength (λmax) showed excellent agreement with observed experimental data. As a result, automatic ARM (a-ARM) provides high reproducibility (user-independence). Moreover, the utilization of ARM reduces preparation time and also provides a practical simulation protocol for rhodopsin and other classes of GPCRs. Detailed structure-function and energetic analyses will provide a complete picture of class A rhodopsins and also support mutation-specific therapies.
The QM Approach in GPCR Studies
Recently, an approximate molecular orbital (MO) method called the Fragment Molecular Orbital (FMO) method was implemented in studies related to GPCR-ligand interactions [60][61][62][63][64][65][66][67][68]. FMO has been described in previous publications and review articles [60][61][62]; therefore, only a brief introduction of this method is presented here. The modus operandi of FMO involves the division of the system into fragments followed by QM calculations on each fragment. This method reduces the time required to conduct QM calculations of the whole system. The interaction between two fragments is characterized by electrostatics, exchange-repulsion, charge transfer, and dispersion interactions (Figure 4). Therefore, the application of FMO to GPCR-ligand studies would yield reliable protein-ligand interactions that are important for biomolecular recognition. The information obtained is useful for SBDD. Weak interactions such as halogen bonds, cation-π interactions, and non-classical hydrogen bonds, which could not be explained by the MM force field, can be captured through the FMO method. These interactions have been shown to be key features in biological processes such as ligand recognition and protein folding. The theoretical characterization of ligand-binding recognition in GPCRs exhibited similar electrostatic and hydrophobic interactions across most GPCR complexes. In 2016, Heifetz and coworkers performed FMO calculations on agonist-orexin-2 receptor (OX2R) complexes [64]. They considered all interactions with an absolute pair interaction energy (PIE) greater than or equal to 3.0 kcal/mol. A comparison of the interactions of two docking poses indicated that they shared similar interactions, and this was supported by site-directed mutagenesis studies. Subsequently, GPCR-ligand crystal structures were investigated [65]. These studies revealed the often omitted interactions contributed by surrounding residues, especially hydrophobic interactions and the involvement of backbone atoms. Comprehensive QM studies of protein-ligand interactions provide valuable information for rational SBDD, for instance, in deciding which ligand fragments could be targeted for modification to achieve desired properties [68]. Data on protein-ligand interactions acquired using the FMO method have been published online (https://drugdesign.riken.jp/FMODB/) [69]. Currently, more than 980 unique PDB entries have been identified. Moreover, an automated FMO calculation protocol was also developed in 2019 [70]. It is a valuable guideline for mutagenesis, interaction studies, and protein engineering.
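The PIE screening used in such analyses can be sketched as below; the 3.0 kcal/mol cutoff follows the description above, but the residue names and energies themselves are hypothetical:

```python
# Filter residue-ligand pair interaction energies (PIEs) from an FMO
# run, keeping only contacts at or above the significance threshold.
PIE_THRESHOLD = 3.0  # kcal/mol, as used by Heifetz and coworkers

pies = {  # fragment (residue) -> total PIE with the ligand, kcal/mol
    "Asp110": -12.4, "Tyr354": -4.1, "Ala127": -0.8,
    "His350": -3.3, "Val138": -1.2,
}

significant = {res: e for res, e in pies.items() if abs(e) >= PIE_THRESHOLD}
for res, e in sorted(significant.items(), key=lambda kv: kv[1]):
    print(f"{res}: {e:+.1f} kcal/mol")
```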
Conclusions and Outlooks
GPCRs are important membrane proteins that play key roles in numerous physiological processes. GPCR-ligand (drug) interactions are crucial in modulating GPCR activity. Thus, a detailed understanding of GPCR-ligand interactions is needed for the design and development of new GPCR therapeutics. Previously, most SBDD efforts in GPCR studies employed classical molecular docking, which allows researchers to achieve their goals effectively. However, the development of multiscale molecular modeling has reduced the computational demand of relatively high-accuracy approaches such as QM. The application of a sophisticated computational strategy to a large and complex GPCR system was made feasible through the QM/MM method, thus providing a practical prediction method that offers new insights into the structure, interactions, dynamics, and kinetics of GPCRs. Furthermore, the incorporation of QM in calculations provides the missing pieces of important weak protein-ligand interactions, such as halogen bonds, cation-π interactions, and non-classical hydrogen bonds, which could not be determined by classical MM methods. Thus, it will improve the current SBDD protocol, making it valuable for pharmaceutical research in the near future.
Conflicts of Interest:
The authors declare no conflict of interest. | 2020-04-23T09:14:39.368Z | 2020-04-01T00:00:00.000 | {
"year": 2020,
"sha1": "42149033ef7dbbdf4cf599edb4d0860b4a31ede8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-273X/10/4/631/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "269cdb40a7b926d57964174630820202bee1b6da",
"s2fieldsofstudy": [
"Chemistry",
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
71281472 | pes2o/s2orc | v3-fos-license | Assessment of pulmonary toxicities in breast cancer patients undergoing treatment with anthracycline and taxane based chemotherapy and radiotherapy-a prospective study
Background: Anthracycline-based regimens and/or taxanes with adjuvant radiotherapy, the main modalities of treatment for breast cancer, are associated with deterioration of pulmonary function and progressive pulmonary toxicities. Aim: To assess pulmonary toxicities and the impact on pulmonary function, mainly in terms of decline of forced vital capacity (FVC), forced expiratory volume in 1 second (FEV1), and the FEV1/FVC ratio, at different treatment times and follow-ups in carcinoma breast patients receiving anthracycline- and/or taxane-based chemotherapy and radiotherapy. Materials and methods: A prospective single-institution cohort study was performed on 58 breast cancer patients between January 2011 and July 2012 who received either anthracycline-based chemotherapy (37 patients received 6 cycles of FAC = 5-FU, Adriamycin, Cyclophosphamide) and radiotherapy, or anthracycline- and taxane-based chemotherapy (21 patients received 4 cycles of AC = Adriamycin, Cyclophosphamide, followed by 4 cycles of T = Taxane) and radiotherapy. Assessment of pulmonary symptoms and signs, chest X-ray, and pulmonary function tests were performed at baseline, mid-cycle, at the end of chemotherapy, at the end of radiotherapy, and at the 1- and 6-month follow-ups, and the results were compared. By means of a two-way analysis of variance (ANOVA) model, the course of lung parameters across the time points was compared. Results and conclusion: Analysis of mean forced vital capacities at different study time points showed a definite declining pattern, which reached statistical significance at the end of the 6th month of follow-up (p = 0.032). The FEV1/FVC ratio (in percentage) also revealed a definite decreasing pattern over the different treatment times, statistically significant at the 6-month follow-up (p = 0.003). Separate analyses of mean FEV1/FVC ratios over time in the anthracycline-based chemotherapy and radiotherapy group as well as the anthracycline- and taxane-based chemotherapy and radiotherapy group showed a similar declining pattern.
Introduction
Breast cancer is the leading cause of cancer death among women around the world. 1 It accounts for 26% of all malignancies in women and is the second most common cause of cancer death in women. 2 In India, it shows a mixed incidence pattern, with breast cancer being second to cancer of the cervix in rural areas 1,2 ; however, in metropolitan cities the incidence of breast cancer has crossed that of cervix. 2 Mathematical models suggest that both the adoption of screening mammography and the availability of adjuvant chemotherapy, radiotherapy, and tamoxifen have contributed approximately equally to the improvement of breast cancer outcomes. 3 Adjuvant chemotherapy reduces local recurrence after radiation therapy in breast cancer. Anthracycline (doxorubicin)-based regimens (± taxanes for high-risk disease) have been associated with superior outcomes as compared to non-anthracycline-containing regimens. Recent evidence suggests that the disease-free survival (DFS) and overall survival (OS) of breast cancer have been increased with taxane-based therapy as compared to anthracycline-based therapy. Neoadjuvant chemotherapy is considered the standard of care in high-risk populations such as young patients and/or those with advanced-stage disease, and has been evaluated in Stage II-IIIa breast cancer in randomized trials. 3,4 Paclitaxel (Taxol), one of the most potent chemotherapeutic agents in the treatment of breast cancer, has experienced widespread use over the past 5 years. A limited number of case reports are available concluding that Taxol is associated with respiratory symptoms, including dyspnea, cough, wheezing, and chest tightness. 5 Interstitial and reticulonodular infiltrates have been described on chest radiographic examination in a few studies of paclitaxel in breast cancer. 6 Cases of transient pulmonary infiltrates and suspected interstitial pneumonitis have been reported, although the true incidence of lung toxicity that is directly related to paclitaxel is not well understood. A prospective study of lung function in 33 patients who received paclitaxel with carboplatin (an agent with little evidence for direct lung toxicity) for non-thoracic malignancy revealed an isolated decrease in diffusing capacity without other clinical or radiographic evidence of pulmonary toxicity. 7 Clinicians should be aware of the potential for paclitaxel to impair pulmonary function.
Randomized trials have found that the addition of adjuvant taxane therapy to an anthracycline-based chemotherapy regimen, compared with anthracycline-based chemotherapy alone, led to improved survival for high-grade breast cancer patients. 8,9 Irradiation is also an important adjuvant therapy for breast cancer. Specifically, adjuvant radiation therapy for selected patients with breast cancer reduces locoregional recurrence and improves overall survival. 10,11 One serious potential risk of radiation therapy for breast cancer is symptomatic radiation pneumonitis. Fortunately, with modern irradiation techniques, the risk of radiation pneumonitis is low (5%), and its course is usually self-limited. 12 However, the risk of radiation pneumonitis has recently become a greater clinical concern because of reports suggesting that this risk may increase in patients who receive taxanes.
It is well known that post-operative adjuvant loco-regional radiotherapy in breast cancer is associated with pulmonary complications. The frequency and grade of pulmonary complications following radiotherapy for breast cancer are, however, still debated. Few investigators have quantified this problem by using objective methods such as pulmonary function tests (PFTs). 13,14 PFTs have the advantage of being widely available and reproducible methods for detecting parenchymal lung damage, provided they are performed under strict standardized conditions. 15 We performed a prospective study from January 2011 to August 2012 to assess pulmonary status and pulmonary toxicities in 58 breast cancer patients who attended and were registered at the Radiotherapy Department of Medical College, Kolkata, and who received either anthracycline-based and/or taxane-based chemotherapy followed by radiotherapy. The effects of anthracycline- and taxane-based chemotherapy and radiotherapy on pulmonary functions were assessed, and the variation of pulmonary toxicities with different treatment times and follow-ups was analysed.
Methods and Materials
The lung is one of the major dose-limiting organs for radiotherapy within the thorax. Therefore, the total dose that can safely be delivered to patients with malignant tumors such as carcinoma breast has to be limited because of the risk of radiation pneumonitis (developing 1 to 6 months after treatment) and radiation fibrosis (developing from 6 months onward). Chemotherapy (regardless of the type of drug) primarily affects diffusion capacity. The aim of our study was therefore to analyze whether and by how much pulmonary function changes over time, as measured by forced vital capacity (FVC) and the ratio of forced expiratory volume in 1 second to FVC (FEV1/FVC), how it varies between the different treatment regimens, and whether recovery of early pulmonary damage occurs. In general, when chemotherapy and radiotherapy are combined, two different effects may occur as an interaction between both modalities: an enhancement of radiation-induced damage or an additive effect.
Study Area: Department of Radiotherapy, Medical College and Hospitals, Kolkata.
Study Population: All biopsy-proven cases of left-sided female carcinoma breast attending the Radiotherapy Out-Patients' Department and conforming to the inclusion/exclusion criteria mentioned herein.
Study Period: Case accrual started from January 2011, and the study was divided into a preparatory phase, data collection phase, data compilation phase, data analysis phase and preparation phase.
Sample Size: 60 patients
Sample Design: All women with carcinoma breast who conformed to the inclusion/exclusion criteria mentioned herein and gave consent were included in the study. Patients were treated with anthracycline- and/or taxane-based chemotherapy in the neoadjuvant or adjuvant setting in combination with modified radical mastectomy (the timing of surgery depending on operability). In addition, all patients received adjuvant external beam radiotherapy (dose, portals, etc. depending on stage and tumour features). Hormone-receptor-positive patients received adjuvant endocrine therapy for a minimum duration of five years.
All patients were subjected to chest X-ray and pulmonary function tests (clinical assessment, chest X-ray, FEV1, FVC, FEV1/FVC) at baseline, during and at the end of chemotherapy, at the end of radiation, and at the first-month and sixth-month follow-ups. Mid-cycle chemotherapy means after completion of the 3rd cycle of chemotherapy in the FAC group and after completion of the 4th cycle of chemotherapy in the anthracycline plus cyclophosphamide (AC) followed by taxane (T) group.
Statistical analysis:
Quantitative variables were compared between the two groups using an unpaired t test for normally distributed variables or the Wilcoxon two-sample test for skewed variables.
Normally distributed variables are reported as mean, standard deviation and variance. Skewed variables are reported as median and range (minimum to maximum).
By means of a two-way ANOVA model, the course of lung parameters (decline of pulmonary function tests) across the time points was compared between patients. Correlations between variables were calculated using Pearson's or Spearman's correlation coefficient, as appropriate. All P values were two-sided, and P < 0.05 was considered statistically significant. MedCalc and VassarStats were used to perform the statistical analysis.
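A minimal sketch of this testing strategy on synthetic data (group sizes mirror the study's two chemotherapy arms, but all values and variable names are invented for illustration):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic FVC values (litres) for two hypothetical chemotherapy groups
fvc_fac = rng.normal(2.6, 0.4, 37)    # FAC-only group
fvc_ac_t = rng.normal(2.5, 0.4, 21)   # AC-followed-by-taxane group

# Normally distributed variables: unpaired t test
t, p_t = stats.ttest_ind(fvc_fac, fvc_ac_t)

# Skewed variables: Wilcoxon two-sample (rank-sum / Mann-Whitney) test
u, p_u = stats.mannwhitneyu(fvc_fac, fvc_ac_t, alternative="two-sided")

# Correlations: Pearson for normal data, Spearman otherwise
age = rng.normal(51.5, 8, 37)
r_p, p_rp = stats.pearsonr(age, fvc_fac)
rho, p_rho = stats.spearmanr(age, fvc_fac)

print(f"t test p={p_t:.3f}, rank-sum p={p_u:.3f}")
print(f"Pearson r={r_p:.2f} (p={p_rp:.3f}), Spearman rho={rho:.2f} (p={p_rho:.3f})")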
Case Accrual
Initially, 60 patients were selected for accrual. Of these, after careful scrutiny against the inclusion and exclusion criteria, 58 breast cancer patients were found suitable. All patients under study had left-sided breast cancer. However, 4 patients died during or at the end of chemotherapy and were consequently excluded from the study; ultimately 54 patients completed the study. The baseline age distribution of the patients under study is depicted in Table 1; the mean age was 51.5 years. The inclusion criteria required a lump size of more than 5 cm and/or node-positive disease; accordingly, all N0 patients had T3 tumours (>5 cm), and the remaining patients were node-positive.
Baseline Patient Characteristics
All patients in the study population received anthracycline- and/or taxane-based chemotherapy followed by post-chemotherapy radiation of the chest flap alone, the chest flap plus a supraclavicular field, or the chest flap plus supraclavicular and axillary fields. All patients either underwent mastectomy as the very first treatment or initially received 2-3 cycles of neoadjuvant chemotherapy, were downstaged, underwent mastectomy and then completed chemotherapy. After completion of chemotherapy they received radiation. By the end of chemotherapy the study population had come down to 54: two patients died after 4 cycles, one after the 5th cycle and one after the 2nd cycle of chemotherapy. Thus, 54 patients underwent radiotherapy.

Among the 58 patients, 37 (63.79%) received only anthracycline-based (doxorubicin) chemotherapy and 21 (36.2%) received both anthracycline- and taxane-based chemotherapy. As the majority of the patients belonged to poor socio-economic status, doxorubicin, which is supplied free of cost from government funds in our ward, was chosen as the anthracycline rather than epirubicin. The 37 patients who received only anthracycline-based chemotherapy were given 6 cycles of FAC every 21 days (Inj 5-FU 500 mg/m2 IV D1; Inj doxorubicin 50 mg/m2 IV D1; Inj cyclophosphamide 500 mg/m2 IV D1). The remaining 21 patients received both anthracycline- and taxane-based chemotherapy with 4 cycles of AC followed by 4 cycles of T (Inj doxorubicin 60 mg/m2 IV D1 and Inj cyclophosphamide 600 mg/m2 IV D1 every 21 days for 4 cycles, followed by paclitaxel 175 mg/m2 IV D1 every 21 days for 4 cycles).

Among the 58 patients, 42 received 2-3 cycles of neoadjuvant chemotherapy, achieved a complete or partial response, underwent MRM with axillary clearance, and thereafter completed the remaining cycles of chemotherapy. The remaining 16 patients underwent MRM with axillary clearance first and thereafter received adjuvant chemotherapy. After completion of chemotherapy, 54 patients underwent radiotherapy.
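As a worked arithmetic example of these regimens, the snippet below converts the quoted per-square-metre FAC doses into absolute doses; the body surface area value is hypothetical, not taken from the study.

# Absolute day-1 doses for one FAC cycle, assuming BSA = 1.6 m^2
# (the BSA is an invented example value; regimen doses are those quoted above)
bsa = 1.6  # body surface area, m^2
fac_mg_per_m2 = {"5-FU": 500, "doxorubicin": 50, "cyclophosphamide": 500}
for drug, dose in fac_mg_per_m2.items():
    print(f"{drug}: {dose} mg/m^2 -> {dose * bsa:.0f} mg IV on day 1")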
Evaluation of pulmonary toxicities following chemotherapy and radiotherapy in breast cancer patients
The baseline pulmonary function tests and chest X-rays, along with clinical assessment, were within normal limits in all patients of the study population.
Assessment of respiratory symptoms:
Cough and respiratory distress were considered as respiratory symptoms.

a) Respiratory distress: During treatment, seven patients developed respiratory distress: three in the anthracycline-based CT+RT group (one during the third cycle of chemotherapy, two during the first cycle), two in the anthracycline- and taxane-based CT+RT group during the first cycle of chemotherapy, and two during radiotherapy. In addition, one patient at the 1-month follow-up and three patients at the 6th-month follow-up complained of moderate respiratory distress. From symptoms alone we could not distinguish whether the distress was due to cardiological or pulmonary damage, so patients were urgently sent for immediate cardiological evaluation and chest X-ray. None revealed any significant cardiological abnormality. The patient who developed respiratory distress at the 1-month follow-up showed bilateral lung metastasis on chest X-ray; among the three who developed distress at the 6th-month follow-up, one showed lung metastasis and two showed overt features of radiation pneumonitis on chest X-ray. Thus, only 2 patients (3.44%) of the entire study group showed radiation pneumonitis at the 6th month of follow-up.

b) Cough: A total of 9 patients in the entire study group developed cough: two during the 3rd cycle of chemotherapy, three at the 1-month follow-up and four at the 6th-month follow-up. Investigations revealed lung metastasis in a total of 2 patients, and 2 patients developed radiation pneumonitis.
Evaluation of respiratory signs: None of the patients revealed any overt respiratory signs.
Evaluation of pulmonary toxicities by study tools: All patients underwent chest X-ray and pulmonary function tests (FVC and FEV1/FVC ratio) at baseline, at mid-cycle chemotherapy, at the end of chemotherapy, at completion of radiotherapy, and at the 1st-month and 6th-month follow-ups.
Interpretation of chest X-ray: Two patients in the entire study group showed features of lung metastases on chest X-ray, one at the 1st month of follow-up and one at the 6th month. Only 2 patients (3.44%) in the entire study group developed features of radiation pneumonitis on chest X-ray at the 6th month of follow-up; both had received anthracycline- and taxane-based chemotherapy and radiotherapy. Analysis of mean forced vital capacities at the different study time points (baseline, mid-cycle chemotherapy, end of chemotherapy, end of radiotherapy, 1-month and 6-month follow-ups) showed a definite declining pattern, which reached statistical significance at the end of the 6th month of follow-up (p = 0.032) in the entire study population (Table 2, Table 3 and Figure 1). The FEV1/FVC ratio (in percentage) also showed a definite decreasing pattern over the different treatment times in the entire study group, as evidenced in Table 4 and Figure 2, declining to a statistically significant level compared to baseline at the 6th-month follow-up with a p value of 0.003 (Table 4). Separate analysis of mean FEV1/FVC ratios over time in the anthracycline-based chemotherapy and radiotherapy group (Table 5) and in the anthracycline- and taxane-based chemotherapy and radiotherapy group (Table 6) showed a similar declining pattern, statistically significant at the 6-month follow-up compared to baseline in both groups, with p values of 0.02 and 0.001, respectively (Table 7).
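To illustrate how such a decline can be tested against baseline at each time point, here is a small sketch using paired t tests on synthetic data; the drift values and all numbers are invented, and the study's actual results are in Tables 2-7.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
timepoints = ["mid-CT", "end-CT", "end-RT", "1 month", "6 months"]
n = 54
baseline = rng.normal(2.7, 0.3, n)      # synthetic baseline FVC (L)
drift = [0.02, 0.05, 0.08, 0.10, 0.18]  # assumed mean decline per visit (L)

for label, d in zip(timepoints, drift):
    follow_up = baseline - rng.normal(d, 0.1, n)
    t, p = stats.ttest_rel(baseline, follow_up)  # paired test vs baseline
    print(f"{label}: mean change = {np.mean(follow_up - baseline):+.3f} L, p = {p:.3f}")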
Discussion
Several studies have shown a definite influence of chemotherapy and adjuvant radiotherapy on pulmonary function tests in breast cancer patients.
Tse-Kuan Yu et al. 16 showed in a phase 3 randomized study that patients with breast cancer treated with sequential paclitaxel, FAC, and radiation therapy had a very low rate of clinically relevant radiation pneumonitis, no different from that of patients treated with FAC alone. There was, however, a significant decrease in DLCO (diffusion capacity of the lung for carbon monoxide) at long-term (1-year) follow-up.
Another study 12 demonstrated that loco-regional radiotherapy in breast cancer results in reductions of DLCO, VC, FEV1 and RV. The slight reduction of FEV1 was completely explained by the decrease in VC, as the relation between FEV1 and VC was unchanged; thus, no sign of obstructive disease was found. The authors suggested that the reduction of VC reflects decreased parenchymal elasticity in the irradiated part of the lung, and that the somewhat larger decrease in DLCO may also indicate an inflammatory reaction in the interstitial tissues.
In 1992, Marks et al. 17 reported severe pulmonary complications in 10% of patients receiving loco-regional radiotherapy following high-dose chemotherapy (including carmustine), resulting in premature discontinuation of their radiotherapy. In our material, dose-intensified chemotherapy (FEC-based) and loco-regional radiotherapy were not associated with increased pulmonary toxicity, and no course of radiotherapy was prematurely discontinued. Gage et al. 18 reported results similar to ours using CTC high-dose chemotherapy. Thus, the results of Marks et al. 17 may have been influenced by delayed carmustine-induced pulmonary toxicity.
In our study, the breast cancer patients received either 6 cycles of FAC chemotherapy followed by adjuvant radiotherapy or 4 cycles of AC followed by taxane and adjuvant radiotherapy. Among the total study population of 58, only 2 patients developed overt radiological features of radiation pneumonitis at the 6-month follow-up; nevertheless, definite deterioration of pulmonary function tests was observed. Both the FVC (forced vital capacity) and the FEV1/FVC ratio showed a definite declining trend with chemotherapy and radiotherapy, and their means decreased to a statistically significant level at the 6th month of follow-up. The FEV1/FVC ratio decreased below 70% in about 67% of patients at the 6-month follow-up. Separate analysis of PFTs in the two chemotherapy groups showed similar changes.
Although breast cancer patients treated with taxane- and anthracycline-based chemotherapy regimens followed by radiation did not develop overt radiation pneumonitis at significant rates, the declining trends of PFTs warrant continuous monitoring at the different treatment times and follow-ups. Before starting chemotherapy and radiation, the baseline PFT must be within normal limits. Because both of these treatments have the potential to increase survival in properly selected patients, our data provide evidence that both can be given sequentially without concern for potential interactions that could result in a very serious pulmonary complication, although they do result in definite pulmonary damage.
Conclusion
In conclusion, following anthracycline- and/or taxane-based chemotherapy and radiotherapy, the breast cancer patients showed a definite decrease in forced vital capacities and FEV1/FVC ratios. The declines were statistically significant at the 6th month of follow-up. The FEV1/FVC ratio decreased below 70% in about 67% of patients at the 6-month follow-up.
Raman spectroscopy of optical phonon and incommensurate charge density wave modes in 2H-TaSe2 exfoliated flakes
2H-TaSe2 is a model transition metal dichalcogenide material that develops charge density waves (CDWs). Here we present a variable-temperature Raman spectroscopy study of both the incommensurate charge density wave (ICDW) and the optical phonon modes of 2H-TaSe2 thin layers exfoliated onto a SiO2 substrate. The Raman scattering intensities of all modes reach a maximum when the sample thickness is about 11 nm. This phenomenon can be explained by an optical interference effect between the sample and the substrate. The E2g ICDW amplitude modes redshift as temperature increases. We extract the ICDW transition temperature (TICDW) from the temperature dependence of the frequency of the E2g ICDW mode and find that TICDW increases in thinner flakes, which could be a result of significantly enhanced electron-phonon interactions. Our results open up a new window for the search for and control of CDWs in two-dimensional matter.
Keywords: charge density waves, Raman spectroscopy, transition metal dichalcogenide, tantalum diselenide

2H-Tantalum diselenide (TaSe2) is one of the most extensively studied transition metal dichalcogenide (TMD) materials that exhibit charge density wave (CDW) transitions at low temperatures. [1][2][3][4][5] A CDW is a periodic modulation of the conduction electron density and is usually found in metallic TMDs. Bulk 2H-TaSe2 undergoes a transition from the normal (metallic) phase to the incommensurate charge-density-wave (ICDW) phase at 122 K, followed by a commensurate charge-density-wave (CCDW) phase transition at 90 K. [6][7][8][9] However, the question of whether the ICDW of 2H-TaSe2 is enhanced or suppressed upon thinning the samples down to few-layer thickness has not yet been resolved.
Raman spectroscopy is a convenient and noninvasive technique for probing optical phonons and CDW phase transitions in various TMD materials. [10][11][12][13][14] Therefore, in this Letter we report our studies of the E2g and A1g optical phonon modes and the ICDW mode (E2g ICDW) using variable-temperature Raman spectroscopy. We find that all Raman modes exhibit their strongest intensities when the sample thickness is ~11 nm, and we explain this observation using optical interference. The TaSe2 single crystals used in this work were purchased from 2D Semiconductors Company.
Ultrathin layers were exfoliated onto Si substrates with 80 nm SiO2 by the standard "Scotch-tape" method. 15 The thicknesses of the exfoliated TaSe2 ultrathin layers were determined by atomic force microscopy (AFM). A representative AFM image and height profile are shown in Fig. 1b.
Raman scattering was conducted on freshly cleaved samples mounted in a cryostat with a window for optical access. A Helium-Neon laser at 632.8 nm was used, and the laser power was kept below 0.2 mW to avoid heating effects. Two ultra-narrow band notch filters were used to suppress the Rayleigh-scattered light. The scattered light was dispersed by a Horiba iHR550 spectrometer and detected by a liquid-nitrogen-cooled CCD detector. The temperature of the TaSe2 samples was estimated from the ratio of Stokes and anti-Stokes Raman scattering intensities. 14,16,17 Variable-temperature Raman spectra for each sample were taken during warming from ~10 to ~300 K.

The crystal structure is illustrated in Fig. 1c. The unit cell contains two molecular units, and the lattice vibrations reduce to the normal modes of the 2H structure; 8 the A1g, E1g, and E2g modes are Raman active. 3 In the low-temperature CCDW phase, a superlattice of 3a0 × 3a0 × c0 is formed. 18 The commensurate superlattice was described schematically by Moncton et al., who revealed the Kohn anomaly of the Σ1-symmetric LA-phonon branch on the Σ line by neutron scattering. 18,19 In the 2H structure, the LA branch is degenerate with the Σ1 rigid-layer mode over most of the Σ line. The 12 Σ1 modes reduce to 2A1g + 2E2g + 2B1u + 2E2u modes in the CCDW phase, and the two K6 (E2) modes, in which Ta atoms are displaced in the basal plane, reduce to 2E2g + 2E1u.
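A plausible form of the interference integral that the following definitions refer to, assuming the standard treatment of Raman enhancement for thin flakes on a substrate (a reconstruction, not necessarily the authors' exact expression):

I_{\mathrm{Raman}} \propto \int_{0}^{d_{1}} \left| F_{\mathrm{ex}}(x)\, F_{\mathrm{sc}}(x) \right|^{2} \, \mathrm{d}x ,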
in which x is the depth that light travels into the sample, varying from 0 to d1 (the thickness of the flake), and Fex(x) and Fsc(x) are the electric field amplitudes of the excitation light and of the scattered light that can reach the surface, respectively. Detailed expressions for Fex(x) and Fsc(x) are included in the SI. The calculated Raman scattering intensity is plotted as a function of thickness in Fig. 2c. The trend of the intensity variation agrees with our experimental data, and the theoretically predicted maximum intensity occurs at around 10.5 nm, in very good agreement with our experimental value of ~11 nm.
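For intuition, a toy two-beam version of this calculation is sketched below: it keeps only the downward wave and a single effective reflection from the substrate, with an assumed complex refractive index and reflection coefficient. All parameter values are invented for illustration; the quantitative prediction above relies on the full multilayer expressions in the SI.

import numpy as np

lam = 632.8e-9                 # He-Ne excitation wavelength (m)
n_flake = 4.0 + 1.5j           # assumed complex refractive index (hypothetical)
k = 2 * np.pi * n_flake / lam  # complex wavevector inside the flake
r_bottom = -0.3                # assumed effective SiO2/Si reflection coefficient

def field(x, d):
    # Downward-propagating wave plus one reflection from the bottom interface
    return np.exp(1j * k * x) + r_bottom * np.exp(1j * k * (2 * d - x))

def raman_intensity(d, npts=400):
    x = np.linspace(0.0, d, npts)
    f = np.abs(field(x, d)) ** 4  # |F_ex * F_sc|^2, approximating F_ex ~ F_sc
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))  # trapezoidal rule

d_grid = np.linspace(1e-9, 40e-9, 400)
intensity = np.array([raman_intensity(d) for d in d_grid])
print("toy-model intensity peaks near %.1f nm" % (d_grid[intensity.argmax()] * 1e9))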
On the other hand, the ratio of the integrated Raman intensities of the A1g mode to the E2g mode also shows distinctive behavior for samples of different thickness (Fig. 2d). The origin of this difference is unclear, but it should not arise from the optical effects described above, since these would affect both modes almost identically. Moreover, the E2g mode redshifts with increasing thickness (Fig. 2d). Anomalous behavior of the E2g mode has been previously reported in few-layered MoS2 and WS2 films [23][24][25][26] and might be caused by stronger dielectric screening of the long-range Coulomb interactions between the effective charges in thicker samples. 27 A change in dielectric screening with thickness is also expected for TaSe2.

Figure 3a shows Raman spectra from an 8-nm-thick flake at different temperatures. With increasing temperature, the E2g ICDW mode weakens in intensity because the CDW lattice loses coherence as the temperature approaches the phase transition. Moreover, the E2g ICDW amplitude modes redshift as temperature increases. We focus on the temperature range below 100 K to extract the frequencies of the E2g ICDW mode, since it is better defined in this range. To quantify the ICDW transition temperature, peak positions of the E2g ICDW mode of different TaSe2 flakes were extracted from Lorentzian fits of the data and plotted as a function of temperature in Fig. 3b. We chose the E2g ICDW mode to characterize the transition temperature because this mode is well defined and has a narrow linewidth. The phonon frequencies (peak positions) of the E2g ICDW mode were fitted by a general power law (sketched below), in which ω(0), the phonon frequency at 0 K, and TICDW are fitting parameters, and γ is a scaling parameter. According to mean-field theory of ICDW mode softening, γ should be 0.5.
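A plausible reconstruction of the fitting expression, assuming the conventional mean-field soft-mode power law implied by the surrounding text:

\omega(T) = \omega(0) \left( 1 - \frac{T}{T_{\mathrm{ICDW}}} \right)^{\gamma} .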
However, many experiments show that γ has different values for different materials. 28
Immune cell type and DNA methylation vary with reproductive status in women: possible pathways for costs of reproduction
Abstract

Background: Consistent with evolutionarily theorized costs of reproduction (CoR), reproductive history in women is associated with life expectancy and susceptibility to certain cancers, autoimmune disorders and metabolic disease. Immunological changes originating during reproduction may help explain some of these relationships.

Methodology: To explore the potential role of the immune system in female CoR, we characterized leukocyte composition and regulatory processes using DNA methylation (DNAm) in a cross-sectional cohort of young (20-22 years old) women differing in reproductive status.

Results: Compared to nulliparity, pregnancy was characterized by differential methylation at 828 sites, 96% of which were hypomethylated and enriched for genes associated with T-cell activation, innate immunity, pre-eclampsia and neoplasia. Breastfeeding was associated with differential methylation at 1107 sites (71% hypermethylated), enriched for genes involved in metabolism, immune self-recognition and neurogenesis. There were no significant differences in DNAm between nulliparous and parous women. However, compared to nullipara, pregnant women had lower proportions of B, CD4T, CD8T and natural killer (NK) cells, and higher proportions of granulocytes and monocytes. Monocyte counts were lower and NK counts higher among breastfeeding women, and remained so among parous women.

Implications: Our findings point to widespread differences in DNAm during pregnancy and lactation. These effects appear largely transient, but may accumulate with gravidity and become detectable as women age. Nulliparous and parous women differed in leukocyte composition, consistent with more persistent effects of reproduction on cell type. These findings support transient (leukocyte DNAm) and persistent (cell composition) changes associated with reproduction in women, illuminating potential pathways contributing to CoR.

Lay Summary: Evolutionary theory and epidemiology support costs of reproduction (CoR) to women's health that may involve changes in immune function. We report differences in immune cell composition and gene regulation during pregnancy and breastfeeding. While many of these differences appear transient, differences in immune cell composition may remain, suggesting mechanisms for female CoR.
INTRODUCTION
Evolutionary theory posits that resources devoted to reproduction will come at the expense of other functions, including somatic maintenance, accelerating senescence and age-related degenerative decline [1]. Such tradeoffs are expected to take the form of competing molecular or physiological functions that favor fertility and rearing over survival. Evidence for such 'costs of reproduction' (CoR) in humans comes from both historical and contemporary epidemiological data [2][3][4], and is strongest among women, for whom the energetic and physiological demands related to reproduction are particularly high [5].
Tradeoffs between reproduction and women's health may be rooted in many of the core physiological and molecular adaptations to pregnancy and breastfeeding [14]. Characterizing the nature and timing of these adaptations could therefore uncover functional constraints between competing molecular and physiological processes, providing insights into the pathways that link reproductive history to women's health. Hemochorial placentation in humans puts the fetal chorion in direct contact with the maternal blood supply. This allows unimpeded transfer of glucose and other resources to the developing fetus, but can also give rise to fetal manipulation of endocrine signaling, metabolic regulation and hemodynamic control that can have long-term deleterious impacts on maternal health [15]. Highly invasive placentation also presents an immunological conundrum-during pregnancy, the maternal body must shift from acquired to innate immunity to retain immunocompetence against pathogens and infection while accommodating a semiallogenic conceptus [16]. Pregnancy is also accompanied by involution of the thymus, an organ responsible for the maturation of T-cells that are central to adaptive immunity, while implantation and labor are both pro-inflammatory events [17]. The shift from acquired to innate immunity that accompanies pregnancy increases inflammation, oxidative stress and cellular damage [14,16]. During breastfeeding, a high maternal metabolic burden is necessary to nourish the comparatively under-developed and energetically costly infant brain [18]. The maternal immune system during breastfeeding is also highly active, clearing the body of fetal cells and DNA absorbed during pregnancy (i.e. fetal microchimerism), while producing a select set of immunogenic compounds that are vital to infant development [19]. These immunological changes-along with the metabolic, circulatory and endocrine adaptations necessary for reproduction-may help explain why the risk for cardiovascular disease [20], kidney disease [21], cognitive decline [22] and cancer [23] are elevated with increasing parity, and may be an important bridge between reproduction and women's long-term health [14,24,25].
A role for the maternal immune system in long-term health costs related to reproduction could include (i) changes to immune cell composition in the maternal circulation during pregnancy and/or breastfeeding, (ii) changes in gene regulation in the immune cells themselves during these reproductive stages or (iii) both. Flow cytometry has demonstrated differences in the proportions of memory T-cells [26], monocytes [27], granulocytes and natural killer (NK) cells [28] during pregnancy, supporting broad immunological shifts with reproduction. Other work has documented differences in inflammatory biomarkers among pregnant women [29], consistent with regulatory changes in cytokine production that accompany a shift from adaptive to innate immunity. Some of these immunological effects may be cumulative and persistent [24,30,31], but there remain considerable gaps in our understanding of how alterations in immune profiles and regulatory control might be tied to CoR in women.
As with much research on the biology of reproduction in women, the study of maternal immunity and reproduction is often framed in relation to effects on infant health and risk for pregnancy complications. As a result, research on immunity during pregnancy has tended to focus on the fetomaternal interface [32], with changes in gene regulation within immune cells receiving less attention [33][34][35]. Breastfeeding may also be linked to autoinflammatory processes and disease [36], yet studies examining the potential long-term effects of breastfeeding on maternal immunological disorders are rare. Work in this area has also been conducted almost exclusively in relatively affluent western nations, where reproductive effort may be low and exposure to environmental pathogens and microbes limited, despite evidence that these socioecological contexts can affect the maternal immune response to reproduction [37].
To clarify the immunological processes associated with reproduction, we used the Illumina BeadChip 450k Array to examine genome-wide blood leukocyte DNA methylation (DNAm) in 394 women who were similar in age (20-22 years) but varied in reproductive status (nulliparous, pregnant, breastfeeding and parous) at the time of measurement. The women in this study are participants in the Cebu Longitudinal Health and Nutrition Survey, a long-term study of health and life histories in the Metropolitan Cebu Area, Philippines [38,39]. DNAm is a biochemical process that reflects chromatin accessibility and transcriptional activity, providing a tool for studying gene-environment interactions that often underlie development, aging and disease [40]. When applied to blood, DNAm can also be used to bioinformatically impute proportions of circulating leukocytes [41]. We capitalized on these attributes of DNAm to explore systemic differences in immune function by looking at cell composition, as well as more targeted changes in regulatory activity within the immune cells themselves. Previous work in this [24] and other [42] populations has documented accelerated DNAm- and telomere-based measures of cellular aging with gravidity and parity. DNAm-based measures of cellular aging have themselves been associated with mortality and disease risk [43], suggesting that changes in DNAm at certain loci may be a link or causal marker of the fundamental processes connecting reproductive history and women's long-term health. However, the changes in DNAm that accompany pregnancy and breastfeeding, and how they relate to each other, are still not well characterized.
We hypothesized that reproduction in this sample would be associated with differences in immune function at both the systemic and molecular level, consistent with a possible role of immune changes to CoR. During pregnancy, we expected shifts in immune cell composition and DNAm to reflect the documented reprioritization of innate over acquired immunity, as well as widespread hypomethylation with pregnancy, consistent with previously reported increases in gene expression throughout gestation [33]. We anticipated differences in DNAm during breastfeeding to be reflective of the higher metabolic demands of lactation, and changes reflective of the positive effect of breastfeeding on breast cancer risk. Finally, given long-term effects of reproductive history on women's health, we expected a subset of differences in DNAm and cell composition to exist between nulliparous and parous women, consistent with a persistent biological cost of reproduction in women.
Participants and study design
Data come from the Cebu Longitudinal Health and Nutrition Survey (CLHNS), a birth cohort study in Metropolitan Cebu, Philippines that began with enrollment of 3327 pregnant mothers in 1983-84. This study focuses on the offspring, who were 20-22 years of age in 2005 when blood for DNAm was collected. Rates of refusal during initial recruitment were low (<4%), and attrition in the CLHNS is due primarily to factors related to out-migration [38]. Written informed consent was obtained from all participants with oversight by the Institutional Review Boards of the University of North Carolina at Chapel Hill and Northwestern University.
A total of 392 women were included in this study. These women were drawn from a subsample of 1759 women who provided a blood sample in 2005 and later participated in a pregnancy tracking study. Reproductive histories were based on an in-home survey administered by a trained interviewer in 2007. The survey included questions about each known pregnancy, its duration, prenatal care, birth outcome (e.g. live birth, miscarriage, stillbirth and twins) and breastfeeding initiation and termination. Date of conception was inferred based on pregnancy duration and date of pregnancy termination (i.e. birth, miscarriage, etc.). When participants could not recall the day of pregnancy termination, the 15th of the month was used. Based on these records, women were classified as pregnant, breastfeeding, parous (but not breastfeeding or pregnant) and nulliparous. Women were classified as 'pregnant' when the blood sample date fell between the date of conception and the date of pregnancy termination; as 'breastfeeding' when blood sample date fell between the initiation of breastfeeding and the termination of breastfeeding. Women with pregnancies prior to the date of blood sample, but who were not otherwise breastfeeding or pregnant were classified as 'parous'. Women who reported never having been pregnant for any duration up to and during the time of the blood sample were classified as 'nulliparous'. Two women who were simultaneously pregnant and breastfeeding were classified as 'pregnant'.
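The date-window logic described above can be summarized in a short sketch; the function and its field names are illustrative stand-ins, not the study's actual code.

from datetime import date

def reproductive_status(sample, conception, termination, bf_start, bf_end,
                        ever_pregnant):
    """Classify reproductive status at the blood-sample date.
    Arguments are datetime.date objects (or None) and a boolean."""
    # Pregnancy takes precedence: the two women who were simultaneously
    # pregnant and breastfeeding were classified as pregnant.
    if conception and termination and conception <= sample <= termination:
        return "pregnant"
    if bf_start and bf_end and bf_start <= sample <= bf_end:
        return "breastfeeding"
    return "parous" if ever_pregnant else "nulliparous"

print(reproductive_status(date(2005, 6, 1), date(2005, 2, 10),
                          date(2005, 11, 10), None, None, True))  # -> pregnant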
DNAm and statistical analysis
Blood collection, DNA extraction and DNAm analysis were conducted using methods described previously [24] and in more detail in the supplementary material. Briefly, DNAm was measured on the Illumina HumanMethylation450 Bead Chip (Illumina Inc., San Diego, CA), and run through standard preprocessing and quality control procedures in Genome Studio and R. Immune cell composition was imputed using DNAm based on Reference [41], and DNAm associated with immune cell variance was removed for genome-wide analyses using a linear regression approach. A total of 434 728 probes passed quality control procedures. Invariable sites were filtered out to maximize statistical power, leaving a subset of 110 631 probes for analysis. Models were fit using linear regression, and the false discovery rate was controlled using the method of Benjamini and Hochberg. The following contrasts were made: nulliparous-pregnant, nulliparous-breastfeeding, parous-pregnant, parous-breastfeeding and nulliparous-parous. To control for unmeasured environmental and genetic factors, all models included smoking status, two principal components of genetic variation (genetic PC-scores) based on multidimensional scaling using Euclidean distance, and a composite measure of socioeconomic status (SES) (see the supplementary material for more references and details on the derivation of these measures). We further examined differences between reproductive status groups using a 'bumphunting' approach to detect differentially methylated regions [44]. The parameters used and results of these methods are described in detail in the supplementary material.
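For concreteness, the per-CpG modeling and false-discovery-rate step might look like the sketch below; it uses synthetic data and hypothetical variable names (the study's actual pipeline was run in R).

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
n = 392
covars = pd.DataFrame({
    "pregnant": rng.integers(0, 2, n),  # indicator for the contrast of interest
    "smoking": rng.integers(0, 2, n),
    "ses": rng.normal(0, 1, n),
    "pc1": rng.normal(0, 1, n),
    "pc2": rng.normal(0, 1, n),
})
X = sm.add_constant(covars)
betas = rng.beta(2, 5, size=(1000, n))  # toy beta-value matrix (CpGs x women)

# One linear model per CpG; keep the p-value for the contrast of interest
pvals = np.array([sm.OLS(betas[i], X).fit().pvalues["pregnant"]
                  for i in range(betas.shape[0])])
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} CpGs pass Benjamini-Hochberg FDR at 5%")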
Gene ranking and functional annotation
Each probe was annotated using the UCSC_RefGene_Name column from the Illumina annotation file. Genes were then ranked using the average standardized −log10 P-value and log10 absolute delta-beta values for each gene. The resulting rank was used for Table 2 and the functional enrichment analysis using gene ontology (GO). Gene functions were determined using openly accessible compendia and curated databases. A total of 17 303 annotated genes associated with the variable probes were used as the enrichment background list. Enrichment of GO terms in the ranked list of differentially methylated genes was tested using the receiver operator characteristic (ROC) method in ErmineJ [45]. Because ROC is based on the relative ranking of genes, significant enrichment for biological pathways is possible even when there are no differentially methylated sites within a given gene. Networks were constructed using EnrichmentMap in Cytoscape based on ErmineJ output. Additional references and details on the parameters used for enrichment and network construction are provided in the supplementary material.
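One reading of this ranking scheme, sketched with a toy results table (column names and values are hypothetical):

import numpy as np
import pandas as pd

# Toy per-CpG results: annotated gene, P-value and delta-beta per probe
res = pd.DataFrame({
    "gene": ["CLEC2D", "CLEC2D", "ZEB2", "SBNO2", "TNFSF10", "CUEDC1"],
    "pval": [1e-6, 5e-4, 2e-5, 3e-3, 8e-7, 1e-4],
    "delta_beta": [-0.08, -0.03, -0.06, 0.04, -0.09, -0.05],
})
res["neglog_p"] = -np.log10(res["pval"])
res["log_abs_db"] = np.log10(res["delta_beta"].abs())

# Standardize each score, average within genes, then rank genes
for col in ["neglog_p", "log_abs_db"]:
    res[col] = (res[col] - res[col].mean()) / res[col].std()
rank = (res.groupby("gene")[["neglog_p", "log_abs_db"]]
           .mean().mean(axis=1).sort_values(ascending=False))
print(rank)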
Descriptive statistics
Women of different reproductive statuses did not differ in age or genetic PC-scores 1 and 2 (P = 0.43 and 0.68, respectively), but did differ by smoking status (P = 0.007) and SES (Table 1). While no pregnant or breastfeeding women smoked, three nulliparous women and seven parous women reported smoking. SES was higher among nulliparous women compared to the other reproductive categories (F3,388 = 3.63, P = 0.0132). Hierarchical clustering by Euclidean distance did not reveal any grouping by SES quartiles, smoking or genetic PC-score quartiles, suggesting that these covariates were not confounding in our analysis of reproductive status.
Immune cell composition by reproductive status
We used reference-based deconvolution methods to infer celltype proportions from DNAm [41], allowing us to quantify differences in the composition of circulating leukocytes with reproductive status. Controlling for smoking status, SES, PC-scores of genetic variation and age at blood draw, reproductive status predicted cell proportions across all measured cell types ( Fig. 1 and Supplementary Table S1). Compared to nulliparous women, the proportions of B-cells, CD4T, CD8T and NK cells were lower during pregnancy, while the proportions of granulocytes and monocytes were higher ( Fig. 1 and Supplementary Table S1). Similar differences were observed when parous women were used as the reference group (Supplementary Table S2). Most of the differences in cell composition associated with pregnancy appear to be resolved during breastfeeding: the proportion of B-cells, CD4T, CD8T and granulocytes were similar among nulliparous and parous women ( Fig. 1 and Supplementary Tables S1 and S2). However, the proportions of monocytes and NK cells were lower and higher, respectively, among breastfeeding women compared to nulliparous women, and remained so for parous women, suggesting potentially persistent changes in these cell types that accompany reproduction (Fig. 1). To test whether these differences were persistent, we examined whether cell composition varied in relation to time since parturition among parous women. We found no evidence that cell composition differed among women varying in time since parturition, up to 5.5 years after the end of pregnancy, supporting the interpretation that differences in monocytes and NK cell counts among parous women were persistent (Supplementary Fig. S1 and Table S3). Including a polynomial term for nonlinear changes in cell type with time since parturition did not change these findings (Supplementary Table S4).
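Expressed with a formula interface, the covariate-adjusted comparison of an imputed cell proportion across reproductive statuses might look like the following sketch (data and column names are synthetic stand-ins, not the study's variables):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 392
df = pd.DataFrame({
    "nk": rng.beta(2, 20, n),  # toy imputed NK-cell proportions
    "status": rng.choice(
        ["nulliparous", "pregnant", "breastfeeding", "parous"], n),
    "smoking": rng.integers(0, 2, n),
    "ses": rng.normal(0, 1, n),
    "pc1": rng.normal(0, 1, n),
    "pc2": rng.normal(0, 1, n),
    "age": rng.normal(21, 0.6, n),
})
model = smf.ols("nk ~ C(status, Treatment(reference='nulliparous')) "
                "+ smoking + ses + pc1 + pc2 + age", data=df).fit()
print(model.params.filter(like="status"))  # contrasts vs nulliparous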
More details on the derivation of the socioeconomic status composite score (SES-score) and the genetic principal components (genetic PC-scores 1 and 2) can be found in the supplementary material. Notes to Table 1: a, linear model ANOVA; Fisher's exact test for count data.
Genome-wide DNAm by reproductive status
To further explore potential regulatory changes within the immune cells themselves during pregnancy and breastfeeding, we examined differences in DNAm at 110 631 CpG sites across the genome, correcting for blood cell composition. Compared to nulliparous women, differential methylation among pregnant women was observed at a total of 828 CpG loci spanning 533 annotated genes (CpG/gene-range: …) (Fig. 3). All of these were concordant in direction with the methylation differences found for pregnancy (Fig. 3). Relative to nulliparous women, differential methylation among currently breastfeeding women was observed at a total of 1107 CpG loci in 849 annotated genes (CpG/gene-range: 1-6, median = 1). Only 8% (90/1107) of DMPs found among breastfeeding women overlapped with the pregnancy-associated DMPs noted above, and 77% (69/90) of these were discordant in direction compared to nulliparity (Fig. 3). In contrast with pregnancy, breastfeeding was associated with greater methylation relative to nulliparity, with 71% (787/1107) of DMPs being more methylated among breastfeeding women (Fig. 2C). A comparison between parous and breastfeeding women revealed only one DMP (Fig. 2D). This site (cg07549715) is located in the gonadotropin-releasing hormone 2 (GNRH2) gene and was one of the 1107 DMPs found between nulliparity and breastfeeding. We did not detect statistically significant DMPs between nulliparous and parous women after correcting for the false discovery rate. Plots and descriptions of the differences in DNAm between reproductive groups for the top-4 CpG sites (ranked by absolute delta-b values) for each comparison are available in Supplementary Figs S2-S6.
Gene ranking, functional enrichment and network analysis
Genes were ranked based on the sum of the maximum standardized −log10 P-values and absolute delta-b for each gene, such that the highest-ranked genes are those for which the P-values were lowest and the differences between groups were highest. The top 10 ranked genes for each comparison of reproductive status are provided in Table 2. Nulliparous-pregnant and parous-pregnant overlapped in 7 of their top 10 genes, mirroring the large overlap in DMPs between nulliparous-pregnant and parous-pregnant women (Table 2). Among these were CLEC2D (a gene involved in innate immunity through the NK cell C-type lectin receptor), TNFSF10 (a cytokine tied to apoptosis), CUEDC1 (a widely expressed gene that is also tied to cervical cancer and pre-eclampsia), SBNO2 (a transcriptional coregulator that counteracts the inflammatory action of IL-10) and ZEB2 (a master regulator of the epithelial-mesenchymal transition, key to embryo implantation and tissue regeneration, fibrosis and neoplasia). A large number of these genes (8/10 and 9/10 comparing pregnancy to nulliparity and parity, respectively) overlapped with genes containing pregnancy-associated DMPs described by Gruzieva et al. [34] (Table 2). These patterns were partly reflected in enrichment networks comparing pregnant women with nulliparous and parous women, which showed evidence of differences in T-cell activation and cell-mediated cytotoxicity (Supplementary Figs S7 and S8).
The top 10 genes listed for breastfeeding did not overlap with the pregnancy-associated genes. These genes were DNAH10 (associated with gamma-delta T-cells, thought to operate at the interface between the innate and adaptive immune response), FAM193B (linked with immune tolerance to self and autoimmunity), MLNR (the motilin receptor, expressed in the gastrointestinal tract and thyroid, but not commonly found in immune cells), SLC38A10 (a sodium-coupled amino acid transport protein) and AMBRA1 (a protein involved in controlling regulatory T-cell differentiation and maintenance, and linked to tumor growth and multiple sclerosis). One gene, DNAH10 (enhanced in blood in gamma-delta and naïve CD8T cells and tied to pathways of neurodegeneration), appeared in the top 10 ranked annotated genes for both nulliparous-breastfeeding and parous-breastfeeding (Table 2). Breastfeeding networks were enriched for transmembrane transport and cell maturation (Supplementary Figs S9 and S10). A differentially methylated region in breastfeeding compared to parous and nulliparous women, spanning 2379 base pairs and covering 49 CpG sites, contained the HOXA5 and HOXA6 genes (Supplementary Fig. S12 and Table S6). These highly conserved homeobox genes are integral to embryonic development, morphogenesis and cell differentiation. HOXA5 expression has also been widely linked to breast cancer progression. None of the top-ranked genes between nulliparous and parous women overlapped with the other comparisons between reproductive groups. However, networks comparing nulliparous and parous women were enriched for pathways tied to neuron differentiation and axonogenesis (Supplementary Fig. S11). The top differentially methylated regions for all comparisons are provided in Supplementary Table S6.
DISCUSSION
CoR are central to evolutionary theory and supported by epidemiological patterns of disease risk that accompany parity in women. To explore tradeoffs between reproduction and immune regulation, with potential implications for health, we examined how immune cell type and regulatory processes differ between women in different reproductive states. We find evidence that immune cell composition differs by reproductive status, with a strong shift from acquired to pro-inflammatory/innate immunity during pregnancy. Most of the putative changes in cell composition seem to be resolved during and after breastfeeding. However, differences in both monocyte and NK cell proportions remain, ending up lower and higher, respectively, among breastfeeding and parous women compared to their nulliparous counterparts. There was no evidence that immune profiles differed in relation to time since parturition among post-parous women, pointing to potentially persistent changes in cell composition that accompany reproduction in women. These findings are broadly consistent with those identified using other methods [27,28] but suggest that differences may be more persistent, highlighting potential links between the immune system and CoR in women.
Immune cell composition with reproductive status
During pregnancy and decidualization, peripheral NK (pNK) cells migrate to the fetal-maternal interface and differentiate into uterine natural killer cells [46]. This process may reduce the relative proportion of pNK cells in pregnant women, helping to explain our observations, and may contribute to higher susceptibility of pregnant women to bacterial and viral infections [46]. In contrast, the apparent elevation in the proportion of pNK cells among breastfeeding and parous women in our study is consistent with elevated post-pregnancy immunosurveillance and 'clearance' of fetal-derived cells and cell-free DNA [19]. Beyond pregnancy, elevated pNK cell surveillance is protective against certain cancers, but can also increase the risk of demyelinating events that characterize multiple sclerosis [47]. To the extent that elevated pNK cells among parous women arise as a persistent effect of pregnancy, elevated pNK cell count might help explain reductions in certain cancers and elevated multiple sclerosis risk that accompany parity in women [3]. Persistent increases in pNK cells would also be consistent with long-term elevations in pro-inflammatory innate immune processes.
Monocytes are heterogeneous cell types that play a key role in pregnancy and placentation [27]. Circulating monocytes localize at the fetal-placental interface, where they are thought to be activated by contact with the syncytiotrophoblast of the placenta [27]. Locally, decidual monocytes establish immune balance between the uterus and placenta, regulating invasion of the extravillous trophoblast and remodeling of the uterine smooth muscle, glands and spiral arteries [48]. Elevated monocyte counts resulting from pregnancy have been proposed to explain the higher risk of demyelinating diseases, such as multiple sclerosis among childbearing women [49]. However, most inflammatory and autoimmune diseases that are exacerbated with parity are associated with elevated monocyte counts, and not the depressed levels we observed among breastfeeding and parous women, making the biological connection between lower monocyte counts and CoR unclear. Pro-inflammatory changes along a continuum of innate-acquired immunity may therefore contribute to long-term CoR, even if not all cell types are cleanly partitioned along these two axes.
DNAm among pregnant women
Accounting for differences in cell composition, we also documented differences in DNAm within immune cells between women in different reproductive states. Several of our high scoring differentially methylated genes (SBNO2 and CUEDC1) are associated with body mass index (BMI)/inflammation and cervical cancer. All but one of our top-ranked genes replicate recent work by Gruzieva et al. [34] and suggest that changes in DNAm could be important in understanding the association between reproductive history and obesity, inflammatory disease and cervical cancer in women [3,34]. Compared to nullipara (and to a lesser degree parous women), pregnancy was accompanied by marked hypomethylation. These findings are consistent with genomic responses to cellular and metabolic stress [50], and with decreases in DNAm and increases in gene transcription during pregnancy reported elsewhere [33][34][35]51].
Hypomethylation during pregnancy may also be related to the 'contamination' of maternal blood with placental or fetal nucleated red blood cells (fetal microchimerism), both of which are hypomethylated compared to the other cell types examined here [52]. Supporting this hypothesis is the observation that one gene in particular, MAP3K14, shown elsewhere to be differentially methylated between maternal and fetal nucleated red blood cells [52], differed between pregnant and nulliparous women at 4 out of 5 probes, although none of these differences passed our false discovery threshold. Fetal microchimerism is consistent with our previous work showing younger epigenetic age among pregnant women [24], which could have long-term effects on maternal health and provide a potential source of CoR in women [19]. These findings raise the intriguing possibility of using commonly available DNAm data to infer fetal microchimerism among pregnant women, and additional work aimed at deconvoluting the contribution of placental and fetal-derived nucleated red blood cells from the maternal blood epigenome is warranted.
DNAm among breastfeeding women
The causes and consequences of global hypermethylation among breastfeeding women are less clear. Contrary to a model in which changes during pregnancy return to baseline during breastfeeding, only a small subset of the hypermethylated sites during breastfeeding overlapped with those that were hypomethylated during pregnancy. Differences in DNAm among breastfeeding women were more numerous, but also more heterogeneous than those among pregnant women, as indicated by sparser, less cohesive enrichment networks. These patterns may reflect individual heterogeneity in the frequency, duration and intensity in breastfeeding practices between women, or more developmentally contingent early life programing on lactation.
A number of the highest-ranking genes associated with breastfeeding are involved in energy storage and metabolism, consistent with the energetically taxing nature of lactation [53]. Although extrapolating our findings beyond immune cells is speculative, concordance between tissues at certain loci can be high [54]. Differences in DNAm in the GNRH2 gene during breastfeeding are consistent with the adaptive suppression of ovarian function during breastfeeding [55] and may be a target for research on the link between obesity, metabolism and infertility. Similarly, DNAH10, FAM193B and FAM13A all have robust relationships with BMI, triglyceride levels, waist-to-hip ratio, body-mass independent waist-to-hip ratio and insulin resistance [56]. FAM193B has also been linked to pronounced sex differences in adiposity, which could be especially relevant when studying the relationship between reproduction and obesity among women. Among post-reproductive women, BMI is lower among women who breastfeed compared to those who do not and decreases with time spent breastfeeding [57]. To the extent that our findings in DNAm in immune cells reflect biological processes in other tissues, the differences in DNAm described here point to the regulation of pathways of energy mobilization during breastfeeding that could affect adiposity and body mass later in life.
DNAm among parous women
In contrast to what appear to be persistent changes in cell composition that accompany reproduction, we did not see significant differences in cell-type-corrected DNAm between nulliparous and parous women at individual CpG loci after accounting for false positive rates. These findings are most consistent with transient alterations to DNAm in the immune cells themselves during pregnancy and breastfeeding, with a return to the original methylation state among parous women. Similar findings have been reported for changes in cytokine production and gene transcription associated with pregnancy, which are largely resolved 1 year after parturition [58]. Nevertheless, persistent changes in self-recognition and immunosurveillance triggered by pregnancy and/or breastfeeding could be small, cumulative with parity and difficult to quantify given the diversity of cell types examined and individual variability in immunoregulation. Our finding that DNAm differed significantly between parous and nulliparous women for certain biological processes, despite the absence of differences in individual CpGs themselves, may be attributable to the power of the non-parametric method we used for quantifying enrichment (which does not employ strict cut-offs based on P-value), and/or to individual heterogeneity in these women's underlying biology and reproductive history. This model would be more consistent with historical and epidemiological evidence for CoR in women, in which the gradual erosion of regulatory processes during reproduction would manifest as small but cumulative effects on women's health over time [2,4].
Limitations and future directions
We leveraged genome-wide DNAm to study the link between reproduction and women's health from an evolutionary perspective. However, many of the differences between groups were small, with delta-betas below 0.1. One reason for these small effects may be that this subset of women is heterogeneous for numerous social and environmental exposures thought to affect DNAm, including nutrition and exposure to infectious disease and environmental pollutants [59]. Furthermore, our modest sample sizes for each group did not allow us to include parity, the duration of pregnancy, or the duration, intensity or frequency of breastfeeding at the time of sampling. This variation is likely to contribute to unmodeled variation in maternal DNAm within reproductive groups, weakening our statistical power for comparisons between groups [33,34]. Nevertheless, all but one of our top-ranking genes replicate those found in other studies [34], supporting the robustness of our analytic approach and the findings that were present.
Another limitation is that our analysis was restricted to a cross-sectional study of young women in different reproductive states, not individual women through time. This approach limits our ability to make definitive claims about 'changes' in the methylome throughout reproduction, and may fail to detect small, incremental effects that accrue as women age. The use of a prospective cohort should attenuate confounding for some factors; all women were enrolled prenatally and are the same age, reducing the influence of cohort bias, participation bias or secular changes in fertility. Many of these women still differ in health and access to resources, however, which could confound our findings by affecting reproductive decisions and the methylome. For example, nulliparous women had higher SES than women in the other reproductive states, which could generate false positives in our comparisons. We addressed this statistically by including a composite measure of SES for both the year the blood sample was taken and the year the woman was born. We also included comparisons between pregnant or breastfeeding women and parous women, who did not differ in SES from pregnant or breastfeeding women. These two measures, combined with the fact that there were no significant individual DMPs when comparing nulliparous to parous women, support the interpretation that many of our findings are a result of differences in reproductive status and not SES. Nevertheless, a longitudinal approach, following individual women over time and through reproductive transitions, will be vital to fully address these limitations. This may be particularly important given the relatively young age of participants and the narrow range of parity in our study (0-5 pregnancies), and research in higher-parity women at older ages may be necessary to detect small but cumulative changes in the methylome.
This study points to changes in cell composition and DNAm that may be important for understanding the relationship between reproduction and women's health. However, we still do not know if these differences ultimately lead to differences in health later in life. Confirming this requires longer-term studies in which cell composition and DNAm during reproduction can be linked to health and aging later in life. A complementary approach would use the differentially methylated genes identified here as candidate loci for studying disease phenotypes that are linked to reproduction, such as multiple sclerosis, cancer or metabolic disorders. For example, our finding that breastfeeding was strongly associated with a differentially methylated region that includes HOXA5, a gene tied to embryonic development and cellular differentiation, but also to risk for breast cancer, provides a candidate gene linking breastfeeding and breast cancer risk that may merit further research [12]. While breast cancer and other diseases no doubt arise through a multitude of genetic and environmental factors, gaining insight into potentially modifiable genes and pathways that undergird them could shed light on the etiology of the diseases themselves or point to new diagnostic or treatment opportunities.
CONCLUSIONS
We document widespread transient differences in immune cell composition and DNAm during pregnancy and lactation, with evidence for more modest, persistent differences in immune cell composition between nulliparous and parous women. These differences may relate to evolutionarily theorized CoR in women, and broader epidemiological patterns of disease risk that accompany women's reproductive histories. While cross-sectional and observational in nature, this study highlights the potential utility of using DNAm from the widely available Illumina microarray platform for studying the role of immunity as a potential pathway for the CoR in humans.
ACKNOWLEDGEMENTS
We are grateful to the Anthropological Epigenetics Journal Club and three anonymous reviewers, whose insightful comments greatly improved earlier drafts of the manuscript. As always, we are indebted to our study participants, whose ongoing involvement makes our research possible.
SUPPLEMENTARY DATA
Supplementary data is available at EMPH online.
The Influence of Drug–Polymer Solubility on Laser-Induced In Situ Drug Amorphization Using Photothermal Plasmonic Nanoparticles
In this study, laser-induced in situ amorphization (i.e., amorphization inside the final dosage form) of the model drug celecoxib (CCX) with six different polymers was investigated. The drug–polymer combinations were studied with regard to the influence of (i) the physicochemical properties of the polymer, e.g., the glass transition temperature (Tg) and (ii) the drug–polymer solubility on the rate and degree of in situ drug amorphization. Compacts were prepared containing 30 wt% CCX, 69.25 wt% polymer, 0.5 wt% lubricant, and 0.25 wt% plasmonic nanoparticles (PNs) and exposed to near-infrared laser radiation. Upon exposure to laser radiation, the PNs generated heat, which allowed drug dissolution into the polymer at temperatures above its Tg, yielding an amorphous solid dispersion. It was found that in situ drug amorphization was possible for drug–polymer combinations, where the temperature reached during exposure to laser radiation was above the onset temperature for a dissolution process of the drug into the polymer, i.e., TDStart. The findings of this study showed that the concept of laser-induced in situ drug amorphization is applicable to a range of polymers if the drug is soluble in the polymer and temperatures during the process are above TDStart.
Introduction
In situ drug amorphization is a drug delivery approach, where a crystalline drug is converted into its amorphous form, e.g., in the form of an amorphous solid dispersion, in the final dosage form. This conversion, i.e., the in situ drug amorphization, may take place immediately after the manufacturing of the final dosage form or directly before administration. Utilizing in situ drug amorphization, downstream manufacturing challenges of amorphous powder, e.g., poor flowability and/or stability issues during storage, such as amorphous-amorphous phase separation, can be circumvented [1][2][3][4][5][6][7].
Successful in situ drug amorphization has previously been described by various methods, such as water immersion [8] and the use of microwave radiation [1][2][3][4][5][9] and laser radiation [10]. The latter two methods utilize electromagnetic radiation sources and were reported to lead to complete amorphization of a compact containing 30 wt% celecoxib (CCX) and the polymer polyvinylpyrrolidone (PVP12) within relatively short time periods, i.e., 10 min of exposure to microwave radiation [2] and 3 min of exposure to laser radiation [10].
It has been suggested that microwave-induced in situ drug amorphization follows a dissolution process of the drug into the polymer at temperatures above the glass transition temperature (Tg) of the polymer. Thus, in accordance with the Noyes-Whitney equation, describing the dissolution rate of a solute into a solvent [11], a smaller drug particle size [2], a higher temperature reached during exposure to microwave radiation, and a lower viscosity of the polymer [12] have been demonstrated to be advantageous for in situ drug amorphization. Microwave-induced in situ drug amorphization is dependent on the presence of an enabling (dielectric) excipient inside the compact that absorbs the microwave radiation and consequently causes a temperature increase inside the compact [13]. So far, sorbed water, inorganic crystal hydrates, glycerol, and polyethylene glycol have been used as enabling excipients [2,3,9,12]. However, previous studies have shown that large amounts of these dielectric excipients are necessary inside the compact to enable complete microwave-induced in situ drug amorphization [2,3,9,12]. For example, approx. 20 wt% sorbed water was necessary to obtain complete amorphization of CCX in PVP12 [2]. In fact, the enabling excipient also functions as a plasticizer of the polymer, i.e., it lowers the polymer Tg to temperatures that can be achieved upon exposure to microwave radiation (~100 °C). In connection with the Tg, a relatively low molecular weight (Mw) of the polymer has also been shown to be necessary to achieve a high degree of in situ drug amorphization, e.g., the use of PVP12 (Mw = 2500 g/mol) yielded a higher degree of amorphization compared to PVP17 (Mw = 9000 g/mol) [5]. The limitations of the temperature reached upon exposure to microwave radiation in relation to the Tg and Mw of the polymer, combined with the need for a high amount of dielectric excipient, have so far led to only four reported cases of complete in situ drug amorphization upon exposure to microwave radiation, namely CCX in PVP12 using sorbed water or sodium dihydrogen phosphate mono- or dihydrate as an enabling excipient, CCX in polyethylene glycol 3000 and 4000 using polyethylene glycol as the enabling excipient, and indomethacin in Soluplus® using glycerol as the enabling excipient [2,3,9,12].
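For reference, the Noyes-Whitney relation invoked above (standard textbook form, not reproduced in the source) can be written as dm/dt = (D·A/h)·(Cs − C), where D is the diffusion coefficient of the drug in the dissolution medium (here, the mobile polymer), A the surface area of the drug particles, h the diffusion layer thickness, Cs the saturation solubility, and C the bulk concentration. Smaller particles increase A and lower viscosity increases D, consistent with the factors listed above.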
With the concept of laser-induced in situ drug amorphization, it is possible to reduce the amount of enabling excipient needed inside the compact, as well as the total exposure time. Furthermore, higher temperatures (up to 150 °C) upon exposure have been reached compared to the use of microwave radiation [10], which can potentially enable the amorphization of more drug-polymer combinations. Using laser radiation, heating of the compacts is achieved by introducing silver plasmonic nanoparticles (PNs), which absorb laser radiation in the near-infrared (near-IR) spectrum. PNs exhibit photothermal properties, i.e., they convert light into heat [14]. The optical extinction of silver PNs was tuned to extend into the near-IR spectrum by adapting the interparticle distance of PNs using a dielectric spacer (SiO2) [14,15].
In this study, the silver PNs were obtained by flame spray pyrolysis (FSP) [16][17][18]. Using PNs at 0.1 wt% or 0.25 wt%, laser-induced in situ drug amorphization was successfully obtained for CCX in combination with PVP12 [10]. In the aforementioned study, it was shown that increasing laser intensity as well as increasing PN load led to a faster temperature increase and a higher maximum temperature, resulting in a faster rate and higher degree of amorphization [10]. This proof-of-concept study was, however, limited to a single polymer, namely PVP12, which has also been successfully amorphized using microwave radiation [2,9,10]. Compacts exposed to laser radiation became fully amorphous after only 3 min, compared to the 10 min of exposure to microwave radiation needed to achieve complete amorphization.
It is still unclear whether the concept of in situ drug amorphization is applicable to different types of polymers, e.g., polymers with different drug solubilities as well as different Mw and Tg. Polymers with a high Tg cannot be used for microwave-induced in situ drug amorphization, as the temperatures reached during exposure to microwave radiation are not (or not sufficiently) above the Tg of the polymer. This is because the polymer is only mobile enough to allow for drug dissolution, within a reasonable timeframe, at temperatures above the Tg of the polymer [5]. Here, the use of PNs can be beneficial to achieve sufficient heating: by using laser-induced in situ drug amorphization, higher temperatures can be reached [10]. It is important to show the applicability of laser-induced in situ drug amorphization for a range of polymers, as this would allow for widening the general approach of radiation-induced in situ amorphization as well as using specific polymers that are suitable for the drug candidate rather than choosing a suitable polymer for the in situ amorphization.
In this study, it was investigated whether the concept of laser-induced in situ drug amorphization is applicable to six different types of polymers commonly used as pharmaceutical excipients, namely Soluplus® (Soluplus), Kollidon® VA64 (VA64), Shin-Etsu AQOAT® (HPMCAS), Eudragit® EPO (EPO), Eudragit® EL 100 (EL100), and Parteck® MXP (PVA). These polymers cover a range of properties, e.g., they have different Tg, Mw, and solubilities of the drug CCX. CCX was chosen as a model drug, as it was previously successfully used for microwave- and laser-induced in situ amorphization with the polymer PVP. This allowed studying the effect of the polymer type and the polymer properties on the laser-induced in situ drug amorphization, as well as the influence of the drug solubility in the polymer on the successful amorphization.
Materials
Ethanol (>99.7%, HPLC grade) was purchased from VWR International (Leuven, Belgium). Purified water used for the mobile phase in the HPLC experiments was prepared using a MilliQ water system from LabWater (Los Angeles, CA, USA). Silica gel with indicator (orange gel) as a granulate was purchased from Merck KGaA (Darmstadt, Germany). All chemicals were used as received.
Plasmonic Nanoparticle Synthesis
The silver-silicon dioxide PNs were synthesized by FSP [19], as introduced by Sotiriou et al. 2011 [18], with a target composition of 98 wt% Ag and 2 wt% SiO2. The detailed procedure can be found in [10]. In short, the dissolved precursors were dispersed at a rate of 5 mL/min into a fine spray by oxygen gas flowing at 5 L/min. This spray was ignited by a methane/oxygen annular support flamelet. The PNs were then collected on a filter above the flame.
Compact Preparation
Firstly, physical drug-polymer mixtures were prepared by mortar and pestle, containing 30 wt% CCX, 69.25 wt% polymer, 0.25 wt% PNs, and 0.5 wt% magnesium stearate (lubricant). Using 50 ± 2 mg of the physical mixture, flat-faced compacts with a diameter of 6 mm were obtained using an instrumented single punch tablet press GTP-1 from Gamlen Instruments (Nottingham, UK). The compaction pressure was set to 160 MPa using a 500 kg load cell (CT-500-022). The compacts were stored over dried silica until further use.
Exposure to Laser Radiation
Laser-induced in situ amorphization was conducted using laser radiation at a wavelength of 808 nm. Table 1 shows an overview of the exposure times used for the different compact compositions. On the laser outlet, a tophat diffuser with a squared profile from Thorlabs Inc. (Mölndal, Sweden) was mounted to evenly distribute the radiation over the compact. The laser output power was adjusted and controlled using a laser diode controller Model ADR 1860 from Shanghai Laser & Optics Century Co., Ltd. (Shanghai, China). Each compact was located on a cover glass slide and elevated from the bottom. The laser intensity used was 1.71 W/cm², distributed over an area of 1.54 cm² as measured at the glass coverslip. Additionally, a cover glass slide was placed on top of the compact to control the formation of a water vapor bubble due to evaporation. The cover glass slide had no influence on the in situ drug amorphization (data not shown) (see also Hempel et al. (2021) for more information [10]). Using an IR thermal camera Testo 871 from Testo SE & Co. KGaA (Lenzkirch, Germany), surface temperature measurements of the compacts were performed during exposure to laser radiation. The IR thermal camera created thermal images, which were saved by the thermography app (version 2.7.0.1803, Testo SE & Co. KGaA, Lenzkirch, Germany) and analyzed using the Testo IRSoft software (version 4.5, Testo SE & Co. KGaA, Lenzkirch, Germany). Approximately every 6th second, a thermal image was taken of the compact during exposure to laser radiation. Each experiment was conducted in triplicate (n = 3).
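For orientation, the total radiant power delivered over the illuminated area at this setting is approximately 1.71 W/cm² × 1.54 cm² ≈ 2.6 W.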
Water Content Determination
The water content was determined for pure compounds, physical mixtures for the compacts, and the powdered compacts (using mortar and pestle), before and after exposure to laser radiation. For this, a Discovery thermogravimetric analyzer 1 (TGA) from TA Instruments Inc. (New Castle, DE, USA) was used. The TGA experiments were performed under a nitrogen gas atmosphere for which the gas flow was set to 25 mL/min. The weight loss equivalent to the water content was determined using the TA Instruments TRIOS software (version 5.1.1, TA Instruments Inc., New Castle, DE, USA).
The water content was determined using a heating rate of 10 °C/min from ambient temperature to 150 °C. All experiments were performed in duplicate (n = 2), apart from compacts exposed to laser radiation. For compacts exposed to laser radiation, the water content was determined in a single run (n = 1) for each exposure time and compact composition.
Thermal Analysis
Thermal analysis of samples was performed by differential scanning calorimetry (DSC) using a Discovery DSC from TA Instruments (New Castle, DE, USA). The experiments were performed under a nitrogen gas atmosphere achieved by a gas flow of 50 mL/min into the DSC cell. The data were analyzed using the TRIOS software (version 5.1.1, New Castle, DE, USA) from TA Instruments.
Determination of the Onset Temperature for the Dissolution Process
Using a mortar and pestle, 100 mg physical mixtures containing 30 wt% CCX in each polymer were prepared. Of each physical mixture, 2-4 mg was weighed into a Tzero aluminum pan and sealed with a perforated hermetic lid. The onset of dissolution was determined in the total heat flow using a modulated DSC (mDSC) run with a heating rate of 3 °C/min from 20 to 190 °C. The modulation had an amplitude of 1 °C/50 s (n = 2). The sample mass was corrected for the water content of the polymer (see Section 2.5.).
Determination of the Drug-Polymer Solubility
The solubility of CCX was determined in each polymer except for Soluplus, as raw data in that case were available in the literature from Knopp et al. (2016) [20]. For the solubility measurements, 100 mg physical drug-polymer mixtures were made for each drug-polymer combination with 70-90 wt% CCX in 5 wt% increments. Subsequently, 3-5 mg of each mixture and pure CCX were weighed into Tzero aluminum pans, which were sealed with a perforated hermetic lid. The samples were equilibrated at 20 °C for 2 min. Afterwards, a temperature ramp of 1 °C/min to 180 °C was applied (n = 2). Using the Flory-Huggins approach, the solubility of the drug in the polymer was calculated from the onset of the dissolution endotherm. The method is described in more detail in Knopp et al. (2015) [21]. The sample mass was corrected for the water content of the polymer (see Section 2.5.).
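To illustrate the Flory-Huggins calculation referenced here [21], a minimal sketch is given below; all parameter values (ΔHfus, Tm, λ, χ) are hypothetical placeholders rather than values fitted in this study:

```python
import numpy as np

# Flory-Huggins melting-point depression: at the dissolution temperature Tmix,
# 1/Tmix - 1/Tm = -(R/dH_fus) * [ln(v) + (1 - 1/lam)(1 - v) + chi*(1 - v)^2],
# where v is the drug volume fraction. All parameter values are assumed.
R = 8.314        # gas constant, J/(mol K)
dH_fus = 37.0e3  # enthalpy of fusion of the drug, J/mol (assumed)
Tm_pure = 435.0  # melting point of the pure drug, K (~162 °C, assumed)
lam = 2.0        # molar-volume ratio polymer/drug (assumed)
chi = -0.5       # drug-polymer interaction parameter (assumed)

def dissolution_temperature(v_drug: float) -> float:
    """Temperature (K) at which a drug volume fraction v_drug is soluble."""
    v_pol = 1.0 - v_drug
    ln_act = np.log(v_drug) + (1.0 - 1.0 / lam) * v_pol + chi * v_pol**2
    return 1.0 / (1.0 / Tm_pure - (R / dH_fus) * ln_act)

# Sweeping v_drug traces a solubility curve analogous to Figure 1:
for v in (0.3, 0.5, 0.7, 0.9):
    print(f"v_drug = {v:.1f}: T = {dissolution_temperature(v) - 273.15:.0f} °C")
```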
Glass Transition Temperature of the Polymers
The Tg of the polymers was also determined by DSC. For each polymer, two Tgs were determined: the Tg of the bulk polymer (Tg1) and the water-free Tg (Tg2). Of each polymer, 3-5 mg was weighed into Tzero aluminum pans with a hermetically sealing lid. A modulated DSC (mDSC) run was applied with an amplitude of 1 °C/50 s at a heating rate of 3 °C/min. For the determination of Tg2, the lid was perforated, and the sample was first heated to 120 °C to allow the water to evaporate, followed by an isothermal period of 10 min before equilibrating to 20 °C. Depending on the polymer, the sample was heated to 140-180 °C for the determination of Tg2. For the determination of Tg1, the sample was not heated higher than 120 °C as described above (no perforation of the lid). All Tgs were determined as the midpoint of the step change. Each experiment was conducted in duplicate (n = 2).
Solid-State Characteristics
Solid-state characteristics were determined by diffractometry. For this, X-ray powder diffraction (XRPD) was performed and used to determine the solid-state characteristics of the pure substances (data not shown), physical mixtures for the different compact compositions (data not shown), and compacts before and after exposure to laser radiation. XRPD was performed on a Rigaku MiniFlex from Rigaku Americas Holding Company Inc. (Austin, TX, USA), which was equipped with a Cu Kα radiation source. Approximately 5-10 mg of sample was used, which was then placed on a low background sample holder and scanned from 5-30° 2θ at a speed of 5°/min and no spin. The XRPD was set to a power output of 40 kV and 15 mA. The obtained diffractograms were visually analyzed using the MiniFlex guidance software (version 3.0.2.4, Rigaku Americas Holding Company Inc., Austin, TX, USA), and the raw data were exported to Origin for further analysis.
Quantification of Drug and Qualification of Degradation Using Liquid Chromatography
High-performance liquid chromatography (HPLC) was conducted to quantify the amount of CCX in the compacts before and after exposure to laser radiation. As representative compacts, only the compacts at the respective longest exposure time to laser radiation were measured by liquid chromatography. The HPLC experiments were conducted with a 1260 Infinity HPLC from Agilent Technologies, Inc. (Santa Clara, CA, USA) using a reverse-phase Luna 5U C18(2) 100 Å column (150 mm × 4.6 mm) from Phenomenex Ltd. (Aschaffenburg, Germany). The chromatography was performed at ambient temperature. The mobile phases were degassed before use.
The HPLC method used in this study was previously reported for the quantification of CCX by Hempel et al. [10]. The original published method was from Dhabu et al. [22] and modified by Hempel et al. [10]. The mobile phases were purified water and ethanol, which were eluted at a ratio of 3:7 (v/v) at a flow rate of 1 mL/min. From the HPLC vial containing the dissolved drug CCX, a sample volume of 10 µL was injected into the column. The UV detection of CCX was performed at an absorbance maximum at a wavelength of 251 nm. None of the polymers showed absorbance at the chosen wavelength, which was determined by UV spectroscopy prior to the HPLC experiments (data not shown). The retention time of CCX was experimentally found at 2.6 min. According to the literature by Dhabu et al., degradation products would elute at lower retention times than CCX [22].
The samples were prepared by dispersing an amount of the powdered compacts (before or after exposure to laser radiation) in the organic mobile phase ethanol to dissolve and extract CCX. After shaking, the dispersion was filtered using a nylon syringe filter Q-max® RR 25 mm with a pore size of 0.45 µm from Frisenette ApS (Knebel, Denmark), and the first 1 mL was discarded. The sample mass was corrected for the water content of the compact or mixture (see Section 2.5.). The standard curve used to quantify the amount of CCX in the experiments is published in [10] and is usable at a concentration range from 2 to 12 µg/mL, i.e., the samples were diluted accordingly to lie in the concentration range of the standard curve.
Results and Discussion
Laser-induced in situ drug amorphization has previously been shown to be feasible for the drug-polymer combination of CCX and PVP12 [10]. Using the same drug, laser-induced in situ amorphization in the current work was attempted using six different polymers with different Tg and Mw, as well as different drug-polymer solubilities. By discussing the results in light of the rate and degree of amorphization, with respect to the temperature measured as well as the drug-polymer solubility, conclusions regarding the suitability of certain types of polymers for laser-induced in situ drug amorphization were drawn.
Drug-Polymer Solubility
As described in the introduction, laser-induced in situ drug amorphization is a temperature-dependent process. The temperature reached during exposure to laser radiation limits the amount of drug that can dissolve into the mobile polymer. As the in situ drug amorphization will also be limited by the solubility of the drug in the polymer, it is important to determine the solubility of CCX in the six different polymers. To determine the solubility of CCX in the tested polymers, the "dissolution" method was used [23,24]. Figure 1 shows the solubility of CCX in the respective polymers from 20 °C to the melting point of CCX. Table S1 summarizes the predicted values, including confidence intervals, for the solubility of CCX in the respective polymers at 20 °C (room temperature).
As can be seen in Figure 1, CCX has the highest solubility at room temperature in VA64 and Soluplus (31.8 wt% and 22.5 wt%, respectively). The drug load in the compacts (30 wt%) was below the solubility in VA64 at room temperature and above the solubility in Soluplus at room temperature.
CCX has a low solubility at room temperature in HPMCAS and EPO, with 5.3 wt% and 3.6 wt%, respectively, and negligible solubility in EL100 and PVA. As the solubility of CCX in EPO and HPMCAS increases with increasing temperature, it should, in theory, be possible to dissolve 30 wt% CCX in the polymers upon exposure to laser radiation, depending on the temperatures reached during exposure.
Laser-Induced In Situ Drug Amorphization
Immediately after exposure to laser radiation, the compacts were analyzed by XRPD to follow the amorphization process qualitatively. Figure 2 shows the diffractograms for the compacts containing VA64 and EL100 (data for the remaining drug-polymer combinations are available in Figure S1). As can be seen, upon increasing exposure to laser radiation, the crystalline peaks gradually disappear for the compacts containing VA64 until a fully amorphous halo was obtained after 180 s (Figure 2a). In contrast, the peak intensity of CCX did not decrease for compacts containing EL100, indicating little to no amorphization upon exposure to laser radiation for 600 s (Figure 2b). The exposure times to reach complete amorphization for all CCX-polymer combinations are summarized in Table 1.
CCX could be amorphized with VA64 and Soluplus, probably due to the high drug solubility in these two polymers. Using XRPD, complete amorphization was achieved for the compact compositions CCX in VA64 or Soluplus after 180 s and 420 s, respectively. Compacts containing CCX in VA64 showed the overall fastest rate of amorphization (Table 1 and Figure 2a). CCX displayed a low solubility at room temperature in HPMCAS, EPO, EL100, and PVA. However, the solubility of CCX in these polymers increased with increasing temperature, which in theory should allow in situ drug amorphization due to the elevated compact temperature during laser exposure. Indeed, CCX became fully amorphous in compacts containing HPMCAS and EPO after 420 s and 600 s, respectively (Table 1). However, no complete (or any) amorphization could be obtained for compacts containing CCX in EL100 and PVA, even after 600 s of exposure to laser radiation (Table 1, Figure 2b and Figure S1). It should be noted that compacts containing HPMCAS, EPO, and Soluplus showed signs of recrystallization after 1.5-2 weeks, indicating the formation of a supersaturated ASD at room temperature (data not shown).
Temperature Measurements during Laser Exposure
It can be seen in Figures 1 and 3 that different maximum compact temperatures were reached depending on the type of polymer utilized (note that these maxima were also reached after different exposure times). The individual temperature plots are shown in Figure S2. The two polymers with the greatest difference in maximum compact temperature achieved during exposure to laser radiation were VA64 (Tmax = 155.7 ± 5.7 °C) and EL100 (Tmax = 85.4 ± 0.9 °C). The differences between the compact temperatures achieved upon exposure to laser radiation suggest that the compacts containing different polymers responded differently to the laser radiation.
(Table 2 footnote: Compacts containing the green polymers became fully amorphous upon exposure to laser radiation; compacts containing the red polymers did not become fully amorphous. Tg1 is the Tg of the polymer with bulk water; Tg2 is the Tg of the water-free polymer. Tmax is also shown in Figure 1. TDstart is determined from the drug-polymer solubility measurements. Mean ± SD; n = 2 for Tg1, Tg2, and TDstart; n = 3 for Tmax.)
From the maximum temperatures achieved during exposure to laser radiation and the solubility curves presented in Figure 1, it is theoretically possible to predict whether the maximum compact temperature obtained will allow a complete amorphization of the drug in the given polymer composition. The chosen drug load of 30 wt% CCX is clearly soluble in VA64 and Soluplus at the maximum compact temperatures, and complete amorphization can be expected. Due to the increased compact temperature upon exposure to laser radiation, the drug load of 30 wt% CCX can in theory also fully dissolve into HPMCAS and EPO at the maximum compact temperatures obtained, according to the solubility curves (Figure 1). According to the CCX-polymer solubility, the temperature necessary to dissolve 30 wt% CCX in compacts containing HPMCAS is between 61 and 112 °C (Figure 1). Similarly, the temperature necessary to dissolve 30 wt% CCX in EPO is between 64 and 117 °C. The maximum compact temperature reached for compacts containing HPMCAS was 134.1 ± 1.1 °C (mean ± SD, n = 3). For compacts containing EPO, the maximum compact temperature was 122.8 ± 2.7 °C (mean ± SD, n = 3). Hence, the maximum compact temperatures for compacts containing HPMCAS and EPO were above the temperature necessary to dissolve 30 wt% CCX, and hence complete amorphization was obtained.
For EL100 compacts, the chosen drug load of 30 wt% CCX can, in theory, only be dissolved at temperatures above 150-153 °C. Thus, at the maximum compact temperature achieved (Tmax = 85.4 ± 0.9 °C), only a drug load of 1.5-2.4 wt% can be dissolved according to Figure 1. In accordance with this, little to no amorphization was observed upon exposure of CCX in EL100 to laser radiation (confirmed in Figure 2b). For PVA, the temperature necessary to dissolve 30 wt% CCX is between 117 and 157 °C, which was only reached (Tmax = 135.9 ± 6.6 °C) at the longest exposure time (600 s). No complete amorphization was observed for CCX in PVA, possibly due to insufficient time at this temperature.
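The logic of this comparison can be condensed into a short sketch (illustrative only: the temperature ranges and Tmax values are those quoted above, and taking the upper bound of each range as the threshold is a simplification of the TDstart criterion discussed below):

```python
# Required temperature range (°C) to dissolve 30 wt% CCX vs. maximum compact
# temperature reached (°C). Soluplus is omitted because no range is quoted.
cases = {
    "VA64":   {"required": (20.0, 20.0),   "t_max": 155.7},  # soluble at RT
    "HPMCAS": {"required": (61.0, 112.0),  "t_max": 134.1},
    "EPO":    {"required": (64.0, 117.0),  "t_max": 122.8},
    "EL100":  {"required": (150.0, 153.0), "t_max": 85.4},
    "PVA":    {"required": (117.0, 157.0), "t_max": 135.9},
}

for polymer, d in cases.items():
    threshold = d["required"][1]  # conservative: upper bound of the range
    verdict = "expected" if d["t_max"] >= threshold else "not expected"
    print(f"{polymer:7s} Tmax = {d['t_max']:6.1f} °C, needs > {threshold:5.1f} °C "
          f"-> complete amorphization {verdict}")
```

Running this reproduces the observed outcomes: full amorphization for VA64, HPMCAS, and EPO, and none for EL100 and PVA.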
Not only did the different compact compositions reach different maximum compact temperatures, but the initial heating rates and the times to reach the maximum compact temperatures also differed significantly (Figure S2). Compacts containing VA64, Soluplus, HPMCAS, EL100, and PVA showed a fast initial heating rate within the first 60 s of exposure, followed by a slower heating rate or even a temperature plateau. For compacts containing VA64 and PVA, the compact temperature increased steadily after the fast initial heating rate in the first 60 s. Compacts containing EPO showed a fast initial heating rate in the first 180 s, followed by a temperature plateau. Comparing compacts containing EPO with compacts containing Soluplus, it was seen that the initial heating rate for compacts containing EPO was slower, i.e., the same compact temperature was reached after 180 s for EPO compared to 60 s for Soluplus (Figure S2).
With EL100, the maximum compact temperatures achieved were below 100 °C. It has previously been shown that an increase in PN load (from 0.1 wt% to 0.25 wt%) led to an increase in compact temperature [10]. In an attempt to reach a higher maximum compact temperature for compacts containing EL100 and PVA, the PN load was increased from 0.25 to 0.4 wt%. However, the maximum temperature reached upon exposure to laser radiation was not impacted by the increased PN load (data not shown). Thus, it seems that compacts with EL100 and PVA reached their maximum compact temperature at the laser intensity used, as all the light was already absorbed by 0.25 wt% PN. Figure 3 summarizes the effect of the maximum compact temperature obtained during exposure to laser radiation on the amorphization of CCX. The temperature for the onset of the dissolution process was determined by DSC analysis (TDstart). As the amorphization follows a dissolution process, the dissolution of the drug is enhanced at TDstart, i.e., the viscosity has decreased enough to allow for drug dissolution in a measurable time frame. At temperatures below TDstart, dissolution is possible if the drug is soluble in the polymer at that temperature; however, the dissolution rate will be so slow that it cannot be measured in the given time frame (kinetic hindrance due to the high viscosity of the polymer). However, the dissolution process is a kinetic event, i.e., TDstart will be heating-rate dependent and increase with increasing heating rate, and is therefore only an approximation. Nevertheless, the compact temperature must be above TDstart to obtain a measurable drug dissolution in the given time frame. Furthermore, TDstart is always above the Tg of the polymer. In fact, a significant decrease in the viscosity of the polymer is often observed at only approximately 15-25 °C above the Tg of the polymer (determined by DSC) [23,25], allowing a drug dissolution process. In other words, TDstart lies above approximately Tg + 20 °C. As most polymers used in this study contain sorbed water, the plasticized Tg (referred to as Tg1) is of particular interest (during in situ drug amorphization, small amounts of water will evaporate; hence, the practically relevant Tg will be somewhere between Tg1 and Tg2, the water-free Tg) (see Table 2). From Figure 3 it can be seen that for all compacts that became fully amorphous upon exposure to laser radiation, Tmax was above TDstart, i.e., the maximum compact temperature reached allowed for fast dissolution of the drug into the mobile polymer. Conversely, for the compacts that did not become fully amorphous upon exposure to laser radiation, TDstart was above Tmax. Therefore, no amorphization was possible in the given time frame. Even though CCX has a solubility of 34.0 wt% (t2.5 = 15.3 wt%, t97.5 = 53.8 wt%) at 140 °C in PVA (maximum compact temperature reached, see Figure 3), no complete amorphization was seen, as Tmax was below TDstart (Figures S2 and 3). It is suggested that the inconsistent decrease in peak intensity in the XRPD diffractograms (see Figure S1) originated from CCX degradation rather than amorphization (see also Section 3.4.).
HPLC Data
Firstly, the amount of CCX in the different compacts was determined prior to exposure to laser radiation. The amount incorporated inside the compacts was detected, i.e., the polymers did not interfere with the detection and quantification of CCX. It was possible to quantify the 30 wt% CCX in all compact compositions before exposure to laser radiation. Secondly, the CCX amount incorporated inside the compacts was also detected after exposure to laser radiation for compacts containing VA64, Soluplus, HPMCAS, EPO, and EL100 at the maximum exposure time to laser radiation (data not shown), i.e., 30 wt% CCX was detected inside the sample injected into the HPLC column. In contrast, for the compacts containing PVA exposed for 600 s to laser radiation, CCX could only partly be detected (one sample showed only 11 wt% CCX), whilst others contained 28-30 wt% CCX (as incorporated). Visual inspection of the HPLC elution profiles showed a slight increase in the peak height and AUC of the degradation products of CCX in some cases, though not all. It remains unclear whether degradation was the cause of the loss of CCX inside compacts containing PVA after exposure to laser radiation. PVA was, however, not a suitable polymer for laser-induced in situ amorphization of CCX, due to the maximum temperature reached being below TDstart.
Conclusions
This study showed that successful in situ drug amorphization upon exposure to laser radiation was possible with a range of different pharmaceutically relevant polymers. Using low amounts of PNs (0.25 wt%) in compacts containing CCX and polymer, complete amorphization was possible for a drug load of 30 wt% in VA64, Soluplus, HPMCAS, and EPO. Complete amorphization was not achieved for CCX in EL100 and PVA. Different rates of amorphization, due to different heating rates and maximum compact temperatures, were obtained during exposure to laser radiation for the different polymers. It was found that for successful laser-induced in situ drug amorphization, it is important to obtain temperatures above that of the onset of dissolution (TDstart) of the respective drug-polymer composition. Hence, laser-induced in situ drug amorphization is suitable for polymers in which the drug is soluble and for which compact temperatures above TDstart can be reached.
"year": 2021,
"sha1": "3b326a0b7afcbbc33557c9a6954a61b1aed7c5ab",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4923/13/6/917/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3b326a0b7afcbbc33557c9a6954a61b1aed7c5ab",
"s2fieldsofstudy": [
"Materials Science",
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Prevalence and factors associated with pentavalent vaccination: a cross-sectional study in Southern China
Background: Immunization is one of the most far-reaching and cost-effective strategies for promoting good health and saving lives. A complex immunization schedule, however, may be burdensome to parents and lead to reduced vaccine compliance and completion. Thus, it is critical to develop combination vaccines to reduce the number of injections and simplify the immunization schedule. This study aimed to investigate the current status of the pentavalent diphtheria-tetanus-acellular pertussis inactivated poliomyelitis and Haemophilus influenzae type B conjugate (DTaP-IPV/Hib) vaccination in Southern China, as well as explore the factors in the general population associated with uptake and the differences between urban and rural populations. Methods: A cross-sectional study was conducted with recently enrolled kindergarten students in Hainan Province between December 2022 and January 2023. The study employed a stratified multistage cluster random sampling method. Information regarding the demographic characteristics and factors that influence decisions was collected from the caregivers of children via an online questionnaire. Multivariate logistic regression was used to determine the factors associated with the status of DTaP-IPV/Hib vaccinations. Results: Of the 4818 valid responses, 95.3% of children were aged 3–4 years, and 2856 (59.3%) held rural hukou. Coverage rates of the DTaP-IPV/Hib vaccine, from 1 to 4 doses, were 24.4%, 20.7%, 18.5%, and 16.0%, respectively. Caregivers who are concerned about vaccine efficacy [adjusted odds ratio (aOR) = 1.53, 95% confidence interval (CI): 1.30–1.79], the manufacturer (aOR = 2.05, 95% CI: 1.69–2.49), and a simple immunization schedule (aOR = 1.26, 95% CI: 1.04–1.54) are more likely to vaccinate children against DTaP-IPV/Hib. In addition, caregivers in urban areas showed more concern about the vaccine price (P = 0.010) and immunization schedule (P = 0.022) in regard to vaccinating children. Conclusions: The DTaP-IPV/Hib vaccine coverage rate in Hainan Province remains low. Factors such as lower socioeconomic status, cultural beliefs, concerns about vaccine safety, and cost may hinder caregivers from vaccinating their children. Further measures, such as health education campaigns to raise knowledge and awareness, and encouragement of domestic vaccine innovation, which would reduce out-of-pocket costs, could be implemented to improve the coverage of DTaP-IPV/Hib vaccination. Supplementary Information: The online version contains supplementary material available at 10.1186/s40249-023-01134-8.
Background
Early child development (ECD) is an essential element of the Sustainable Development Goals and serves as the foundation of adult health and well-being [1]. Although substantive progress has been made in tackling under-5 mortality, with a reduction in the number of childhood deaths from 5.9 million in 2015 to 5 million in 2021, many children who survive are not able to thrive due to the threat of infectious diseases [2]. Among the five major areas of ECD, the area of health emphasizes childhood immunization. Vaccines annually prevent 2-3 million fatalities and safeguard millions more from disease and disability [3]. As one of the most far-reaching health interventions, immunization is an incredibly cost-effective strategy for promoting good health and saving lives. A study in 94 low- and middle-income countries estimated that every US dollar (USD 1) invested in immunization generates a return of USD 51.8 in broader societal benefits of people who live longer and healthier lives [4].
The World Health Organization (WHO) developed Immunization Agenda 2030 to reduce mortality and morbidity from vaccine-preventable diseases (VPDs) over the period 2021 to 2030 [5]. The current National Immunization Program (NIP) in China provides, at no cost, vaccines for eligible-aged children to prevent 12 VPDs and reduce by 99% the incidence of these diseases [6]. In addition, the number of recommended vaccines during childhood has increased significantly. Currently, children can receive ~20 injections by the age of 2 years to complete their immunization schedule, although the number of injections may increase in the coming years due to an increasing number of new diseases and vaccines. Evidence suggests, however, that a complex immunization schedule may be burdensome to parents and healthcare providers and can even lead to reduced vaccine compliance and completion [7,8].
To address these concerns, many international organizations recommend that countries develop combination vaccines, which can be produced by grouping multiple antigens into one injection [9,10]. In 2010, the Chinese National Medical Products Administration approved the pentavalent diphtheria-tetanus-acellular pertussis inactivated poliomyelitis and Haemophilus influenzae type B conjugate vaccine (DTaP-IPV/Hib) (Pentaxim®, Sanofi Pasteur Limited, Marcy l'Etoile, France), which can prevent five diseases associated with high morbidity and mortality [11]. To date, it remains the combination vaccine with the highest valency available in China. Numerous studies have demonstrated the good immunogenicity and safety profile of the DTaP-IPV/Hib combination vaccine, which is equal to that of the separately administered vaccine components [12,13]. The DTaP-IPV/Hib combination vaccine offers a safe and effective alternative for reducing the number of injections from 12 to 4, which can reduce pain and discomfort and prevent potential side effects for children, save time and money, and reduce the loss of productivity for caregivers [14,15]. Notably, China, as the sole WHO member country that has not incorporated the Hib vaccine into its NIP, exhibits a relatively low national coverage rate of only 33%, thereby experiencing a significant residual burden of Hib disease [16]. The use of the DTaP-IPV/Hib combination vaccine could contribute to enhancing Hib coverage, which allows for better and wider protection against infectious diseases and decreases the cost of disease management [17].
The DTaP-IPV/Hib vaccination rates in many developed countries are far higher, at 94.4% in England and over 93.7% in Canada [18,19]. In China, the vaccine is imported, optional, and self-paid (Category 2), and the DTaP-IPV/Hib vaccination rate varies significantly across regions but is low overall. According to previous studies, DTaP-IPV/Hib vaccine coverage exhibited variations, with rates ranging from 18.51% in Hangzhou during 2017 to 6.28% in Chongqing during 2015 [20,21]. Considering the financial responsibility of caregivers in China to fully cover the cost of the DTaP-IPV/Hib vaccine through out-of-pocket payments, it becomes imperative to assess the actual immunization status and the factors that influence DTaP-IPV/Hib vaccine uptake within the country. Nevertheless, it is unfortunate that there is a lack of comprehensive information available about the utilization of the DTaP-IPV/Hib vaccine and the factors that influence its uptake within the Chinese context.
In 2018, the Chinese government made the strategic decision to establish Hainan Province as the nation's inaugural free trade port, operating under the socialist system. To facilitate the importation of pharmaceuticals and sanitary equipment, Hainan Free Trade Port (HFTP) has implemented a range of convenient and preferential laws and policies. This development is expected to enhance the accessibility of imported vaccines for local residents [22]. Hence, this study aimed to investigate the current status of DTaP-IPV/Hib vaccination in Hainan Province as an example and explore the potential influencing factors in the general population as well as the differences between urban and rural populations. The study also sought to provide recommendations for increasing the vaccination rate, including tailored preparation to address hesitancy, and build vaccine literacy in China.
Study design and ethics
This study is part of a larger cross-sectional survey on the intervention strategies for ECD, which includes, among others, immunization, responsive caregiving, and early learning. The survey was conducted with a population of newly enrolled kindergarteners in Hainan Province from December 12, 2022, to January 8, 2023. Although it is mandatory for all 3-year-old children who reside in Hainan Province to enroll in kindergarten, there may be variations in the actual age at which they are enrolled (95% of the children were 3-4 years old, but a few children were 2 or 5 years old). Ethics approval was obtained from the Research Ethics Board of the Hainan Women and Children's Medical Center (2020-002). This paper includes data only from vaccination surveys and uses components of the cross-sectional questionnaire relevant to the aims of this paper.
Study participants and randomization
Newly enrolled kindergarteners in Hainan Province in 2022 were recruited, and those who had foreign nationality or studied in special education schools were excluded. A stratified multistage cluster random sampling approach was employed for the cross-sectional study. First, primary sampling units (PSUs) were set at the county-level administrative region. There are a total of 3 groups and 24 categories of PSUs, including 8 municipal districts, 6 county-level cities, and 10 counties/autonomous counties. Half of the units in each group were randomly selected.
We then defined secondary sampling units (SSUs) based on the kindergarten's ownership (public or private) and level (provincial/demonstration level; first, second, or third level in city/county; and unrated level). There are a total of 120 categories of SSUs. In each SSU, one or two kindergartens were randomly selected, and all the enrolled children in the junior grade were invited to participate in the survey (Fig. 1). Random sampling was conducted, using a list of random numbers, by an individual epidemiologist who was not involved in any other research activities of this survey. A total of 8478 children from 180 kindergartens were randomly selected as participants. All caregivers of children who participated in the study were informed about the intention of the study and gave their electronic informed consent at the beginning of the online survey.
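A toy illustration of this two-stage selection is sketched below; it is not the survey's actual procedure or code, and the unit names and per-stratum counts are schematic:

```python
import random

# Stage 1: within each PSU group, randomly keep half of the units.
psu_groups = {
    "municipal_district": [f"district_{i}" for i in range(1, 9)],  # 8 units
    "county_level_city":  [f"city_{i}" for i in range(1, 7)],      # 6 units
    "county":             [f"county_{i}" for i in range(1, 11)],   # 10 units
}
rng = random.Random(2022)  # fixed seed for reproducibility
selected_psus = {group: rng.sample(units, len(units) // 2)
                 for group, units in psu_groups.items()}

# Stage 2: within each selected PSU, stratify kindergartens by ownership and
# level (the SSUs), then randomly pick one or two per stratum.
def select_kindergartens(ssu_strata: dict, k: int = 2) -> list:
    chosen = []
    for stratum, kindergartens in ssu_strata.items():
        chosen += rng.sample(kindergartens, min(k, len(kindergartens)))
    return chosen

# Example: one PSU with two hypothetical strata.
print(select_kindergartens({
    "public_provincial": ["kg_A", "kg_B", "kg_C"],
    "private_unrated":   ["kg_D"],
}))
```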
Sample size and power analysis
The sample size was calculated with the formula n = [Z(1−α/2)]² × p(1 − p)/δ², based on an error α = 0.05, Z(1−α/2) = 1.96, and allowance error δ = 0.02. For the whole ECD study, we adopted the early Human Capability Index to more comprehensively assess the development of children. In accordance with our preliminary pilot study, the estimated risk of poor development was found to be 18%, slightly lower than the average rate of 20% observed within the Chinese population [23]. Assuming a conservative estimate of p = 20% for the risk of poor development in Hainan, we determined that the calculated sample size required was 1537. After allowing for an estimated 70% valid data, the total number of participants was expected to be 2196.
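As a sketch (the authors' exact rounding is not reported, so ceiling rounding is assumed), the reported figures can be reproduced as follows:

```python
import math
from scipy.stats import norm

# n = Z^2 * p * (1 - p) / delta^2
alpha, delta, p = 0.05, 0.02, 0.20
z = norm.ppf(1 - alpha / 2)                     # Z_(1-alpha/2) = 1.96
n = math.ceil(z**2 * p * (1 - p) / delta**2)    # 1537
n_invited = math.ceil(n / 0.70)                 # 2196, allowing for 70% valid data
print(n, n_invited)
```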
Data collection and quality control
Based on a review of the literature, we developed a structured questionnaire to collect data on demographic characteristics and factors that influence the choice of the DTaP-IPV/Hib vaccine. We implemented expert consultation to ensure the scientific validity and rationale of the questionnaire. We then conducted a pilot study with a random sample of 128 caregivers in two kindergartens to ensure the comprehension of the questionnaire. After the pilot study, a few modifications were made to ensure that the questions were comprehensible and interpreted as intended. The results of the pilot study were not included in the main study.
The data collection was carried out by the Maternal and Children Health Care System in Hainan Province, China. At the beginning of the survey, we provided standard training for the head of each PSU, who then provided training and guidance to the kindergartens within their jurisdiction to ensure that they carried out this survey following a uniform process. The kindergarten representatives were responsible for checking the children's personal information and guiding parents or caregivers to finish the online survey within two weeks.
In addition, a web-based questionnaire and research management platform were set up. The selected kindergartens were requested to upload properly formatted information about their children (including name, gender, kindergarten, class, and date of birth) to the questionnaire platform, and the platform generated a unique login code for each child [24]. Both the link to the research and the unique login code were distributed to parents through the Maternal and Children Health Care System and kindergarten teachers. The questionnaire collection process was strictly quality controlled by various levels of regulatory systems. Using the login code, all parents accessed the online questionnaire to double-check the child's personal information and gave informed consent to participate in the survey.
After collecting questionnaires, we excluded those with missing key information or obvious logical errors. Valid data with complete basic information and DTaP-IPV/Hib vaccination status were included in the analysis.
Measures
Using the researcher-designed questionnaire, we obtained the general demographic characteristics of the participants, including children's age, gender, hukou (the location of registered residency of the child), and ethnic group; administrative division, rank, and type of kindergartens; premature delivery, basic medical insurance, commercial medical insurance, number of children in the family, and previous vaccination status in NIP; caregivers' relationship with the children, education level, and employment status; and annual household income.
Acceptance of vaccination is an outcome behavior that results from a complex decision-making process that can be potentially influenced by a wide range of factors. As caregivers play a key role in vaccination, we also assessed their influence using the "3Cs" model, which was first proposed to the WHO EURO Vaccine Communications Working Group in 2011. The "3Cs" model is a professionally validated theoretical framework for vaccination determinants, comprising confidence, convenience, and complacency factors [25]. We designed eight questions that were incorporated into the 3Cs in our study. To make subsequent analysis clearer, we categorized responses into two categories: "Yes" or "No."
Statistical analysis
All variables were categorical and represented as frequencies with percentages. The characteristics of participants who had previously been vaccinated for DTaP-IPV/Hib and those who had not were compared using a chi-square test.
The relationship between the explanatory variables (demographic characteristics of caregivers and children) and the outcome variable (vaccinating their children against DTaP-IPV/Hib) was examined by multivariate logistic regression. The outcome variable was dichotomized into "Vaccinated" (at least 1 dose) and "Unvaccinated." An adjusted odds ratio (aOR) with a 95% confidence interval (CI) was calculated for each variable.
(Fig. 1 caption: Stratified sample units of kindergartens of different levels in Hainan. There are 4 prefecture-level cities, 5 county-level cities, and 10 counties/autonomous counties in the Hainan administrative division. Among the 4 prefecture-level cities, there are 4 municipal districts each in Haikou and Sanya Cities. Danzhou City is taken as a county-level city, as it governs only streets and towns. We deleted Sansha City due to underpopulation. Thus, there is a total of 3 groups and 24 categories of primary sampling units, including 8 municipal districts, 6 county-level cities, and 10 counties/autonomous counties.)
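A minimal sketch of such a model is shown below; it is not the authors' code, and all variable names (vaccinated, hukou, concern_efficacy, etc.) are hypothetical stand-ins for the questionnaire fields:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_vaccination_model(df: pd.DataFrame) -> pd.DataFrame:
    """Fit a multivariate logistic regression of vaccination status (1 = at
    least one dose, 0 = unvaccinated) and return aORs with 95% CIs."""
    fit = smf.logit(
        "vaccinated ~ C(hukou) + C(education) + C(income) + "
        "concern_efficacy + concern_manufacturer + simple_schedule",
        data=df,
    ).fit(disp=False)
    # Exponentiating the coefficients and their CI bounds yields aORs.
    out = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    out.columns = ["aOR", "2.5%", "97.5%"]
    return out
```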
A comprehensive non-responder analysis was conducted. The available data from the Hainan Women and Children's Medical Center system were used to conduct an analysis of non-response to evaluate whether the non-responders differed systematically from the responders of the survey. Then, a subgroup analysis was performed, which examined differences in the variables among the groups. All statistics were managed by Microsoft Excel version 2010 (Microsoft Corporation, Redmond, WA, USA) and analyzed using SPSS version 24.0 (SPSS Inc., Chicago, IL, USA). Two-sided P-values < 0.05 were considered significant.
Demographic characteristics of respondents
A total of 4818 valid questionnaires were analyzed in this study, for a valid response rate of 56.8% (Fig. 2). Of the 4818 responses, most were aged ≤ 3 years (75.2%), the majority were from the Han population (80.4%), and 2856 (59.3%) held rural hukou. Almost two-thirds of the families had more than one child (66.1%), and the vast majority (92.5%) of children completed the immunization program of Hainan Province at the target age. Among the respondents, mothers predominated (75.3%), 70.0% of the caregivers were employed, and 31.6% had a 4-year college or associate's degree. With regard to non-NIP vaccine determinants, the safety (51.0%) and efficacy (44.1%) of the vaccine are the two core issues with which caregivers have always been concerned, and more than one-third (39.3%) of caregivers depend heavily on doctors' vaccination advice. Details are provided in Table 1.
Assessing non-response bias
The analyses showed that responders (n = 4818) were comparable to non-responders (n = 3660) with regard to gender and age. Responders, however, were significantly more often of Han ethnicity and were county- or autonomous county-, provincial-, or demonstration-level kindergarteners, or public kindergarteners compared to non-responders (P < 0.001) (Table 3). The subgroup analysis results for variables with differences are displayed in Additional file 1: Tables S1-S4.
Factors associated with DTaP-IPV/Hib vaccination in urban and rural areas
In China, hukou represents the location of the registered residency of the children, which is approximately equal to the living residence. Our results showed that caregivers in both urban and rural groups are concerned about vaccine safety, efficacy, and the manufacturer (P < 0.001). Disparities were observed, however, in terms of the convenience dimension related to vaccine price and immunization schedule. Specifically, the urban group exhibited greater concerns regarding vaccine price (P = 0.010) and adherence to the immunization schedule (P = 0.022) in terms of vaccination against DTaP-IPV/Hib (Fig. 3).
Discussion
The vaccination of children, the main target population, can have far-reaching effects on general health and wellbeing, cognitive development, and economic productivity. More than 70 vaccines are available for use, and many more are expected to protect against multiple diseases, which will further increase the number of injections and office visits [8,26]. Complex immunization schedules can result in missed or delayed dosing, especially for children under 2 years old. Thus, it is essential to develop combination vaccines to simplify the vaccine schedule. Although the DTaP-IPV/Hib vaccine is the most highly combined vaccine available in China, its coverage rates are still low. It is, thus, critical to explore the factors that affect the vaccination rate of DTaP-IPV/Hib. To our knowledge, this is the first large-sample investigation of the immunization status and the influencing factors of the DTaP-IPV/Hib vaccine in Hainan Province, which includes more than ten million permanent residents. Our findings show that the cumulative coverage rates of the DTaP-IPV/Hib vaccine from 1 to 4 doses were 24.4%, 20.7%, 18.5%, and 16.0%, respectively, in Hainan Province, which was higher than in other areas of China [20,21].
Consistent with other studies, Voo et al. found that caregivers with higher economic and cultural levels are more likely to vaccinate their children against DTaP-IPV/Hib [27]. This could be explained by research that shows that caregivers with higher economic and cultural levels are inclined to accurately process the evidence regarding vaccination and to have access to more healthcare resources, such as choosing self-paid vaccines [28]. The cost of an imported DTaP-IPV/Hib vaccine per fully immunized child is estimated to be 2488 Chinese Yuan (CNY), and it is paid out-of-pocket, without any subsidy or insurance coverage. This higher cost may impose a financial burden on families with a lower income, potentially limiting their access to the vaccine and reducing the likelihood of full compliance and completion (Additional file 1: Table S5). Previous research in China and Japan has found that a subsidy would reduce the out-of-pocket price and increase the coverage of vaccination [29,30].
Currently, the advancement of combined vaccines in China is impeded by numerous technical challenges. These include the absence of the crucial component IPV in the market and the presence of thiomersal in the co-purification process utilized for manufacturing DTaP vaccines, which can adversely affect the immunogenicity of the IPV antigen [31]. To address these concerns, the government should not only develop innovative vaccine pricing mechanisms and increase financing options but also provide an incentive for domestic manufacturers to research and develop DTaP-IPV/Hib vaccines. Moreover, region-specific strategies should be developed based on their disease burden and fiscal capacity [32].
Our findings reveal that the efficacy and safety of the DTaP-IPV/Hib vaccine have played a significant role in influencing its uptake within the general population. Studies in five countries in South America have revealed that safety and efficacy were the two most important factors for caregivers to decide whether to vaccinate their children [33]. Accurate information about vaccines is vital for caregivers, as they often lack a complete understanding of how vaccines function and struggle to make well-informed decisions about vaccination. Research conducted by Boerner et al. has shown that insufficient information about vaccination or conflicting information from various sources can decrease an individual's willingness to vaccinate [34]. Therefore, it is of the utmost importance to communicate information about vaccines in a clear and easily comprehensible manner to overcome barriers to vaccination [35]. Healthcare providers play a crucial role as trusted sources of information for caregivers, enabling them to enhance their understanding and awareness. France's implementation experience highlights the effectiveness of health education campaigns led by reputable medical institutions. These campaigns serve as valuable strategies to provide credible and reliable information about the safety and efficacy of vaccines. The ultimate goal is to empower individuals to make informed decisions regarding vaccination and ensure accessible and comprehensive vaccine information and knowledge [36].
Fig. 3 Factors associated with DTaP-IPV/Hib vaccination in urban and rural areas. We adjusted for the socioeconomic and demographic characteristics of respondents. aOR, adjusted odds ratio; CI, confidence interval
The findings of this study demonstrated a notable disparity in vaccination rates between urban (35.7%) and rural populations (16.6%). In addition, the study revealed that caregivers who expressed concerns about the immunization schedule were more inclined to vaccinate their children against DTaP-IPV/Hib, particularly among those who resided in urban areas. As a combination vaccine, the DTaP-IPV/Hib vaccine could simplify the immunization schedule and reduce the total number of required office visits [37]. Our preliminary research found that the Hib vaccination coverage rate in Hainan Province is 39.7%. Among children who received the Hib vaccine, 61.5% opted for direct vaccination with the DTaP-IPV/Hib vaccine (data have not been published). Due to conflicts between routine vaccination times and parents' working hours, parents in urban areas prefer to pay higher fees to buy time. Time loss related to the number of office visits may prevent parents from completing the immunization schedule on time and result in missed or delayed dosing. Pellissier et al. provided evidence that reducing the number of office visits can lead to time savings and potentially lower indirect costs associated with parental work loss [15]. Overall, although combination vaccines may cost slightly more than the total cost of their component vaccines, the benefits of vaccination timeliness and compliance and a simplified schedule may outweigh the cost.
This study has several limitations. First, there is a non-response bias in the study results due to the lower response rate. Responders and non-responders may differ in their vaccination status. Thus, we collected the available data to conduct an analysis of non-responders and then conducted a subgroup analysis. Second, the confirmation of vaccination status was based on the caregivers' self-reports, which rely on memory rather than medical records. Hence, the information may not accurately reflect the DTaP-IPV/Hib vaccine coverage rate and may be subject to recall bias. Because newly enrolled children are required to provide vaccination records upon admission to kindergarten in September, however, it is less probable that their parents do not remember or are uncertain about the DTaP-IPV/Hib vaccination status. Third, the sample was selected from one geographic area. The specific context of Hainan Province, which might not represent the whole population of China, could limit the generalizability of the findings. Further research should be undertaken to extend the scope to widely evaluate the vaccination rate and influencing factors in China. Despite the above limitations, this study provides important evidence by which to evaluate the vaccination status and popularization proposals of the DTaP-IPV/Hib vaccine in China.
Conclusions
Our study provides important evidence of the prevalence and determining factors of the DTaP-IPV/Hib vaccination in Hainan Province, China. The coverage rate of the DTaP-IPV/Hib vaccine in Hainan Province remains at a low level but is slightly higher than that found in previous studies conducted in China. Caregivers may be hesitant to vaccinate their children against DTaP-IPV/Hib due to concerns about the vaccine's safety and price. Thus, more effective health education campaigns should be conducted to publicize and promote access to DTaP-IPV/Hib vaccine knowledge and awareness. Further, the government should provide an incentive for domestic manufacturers to research and develop DTaP-IPV/Hib vaccines as well as provide innovative vaccine pricing mechanisms and increase financing options to address the cost concern.
Table 1
DTaP-IPV/Hib vaccination status among respondents by characteristics
Table 1
(continued). Values are shown as n (%). P-values were derived from chi-square tests. CNY, Chinese Yuan. a Hukou represents the location of registered residency of the child (urban or rural)
Table 2
Multivariate analysis of the factors that influence DTaP-IPV/Hib vaccination (N = 4818)
Table 2
(continued). Adjusted odds ratios and 95% confidence intervals are presented. aOR, adjusted odds ratio; CI, confidence interval; CNY, Chinese Yuan. a Model 2 was adjusted for socioeconomic and demographic characteristics. b Hukou represents the location of registered residency of the child (urban or rural)
Table 3
Demographics of responders and non-responders. Values are shown as n (%). P-values were derived from chi-square tests | 2023-09-15T13:48:25.750Z | 2023-09-15T00:00:00.000 | {
"year": 2023,
"sha1": "f4935eabdfff9e694a3fa6d7a41fceb123841728",
"oa_license": "CCBY",
"oa_url": "https://idpjournal.biomedcentral.com/counter/pdf/10.1186/s40249-023-01134-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f4935eabdfff9e694a3fa6d7a41fceb123841728",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7094952 | pes2o/s2orc | v3-fos-license | Focal Retrograde Amnesia: Voxel-Based Morphometry Findings in a Case without MRI Lesions
Focal retrograde amnesia (FRA) is a rare neurocognitive disorder presenting with an isolated loss of retrograde memory. In the absence of detectable brain lesions, a differentiation of FRA from psychogenic causes is difficult. Here we report a case study of persisting FRA after an epileptic seizure. A thorough neuropsychological assessment confirmed severe retrograde memory deficits while anterograde memory abilities were completely normal. Neurological and psychiatric examination were unremarkable and high-resolution MRI showed no neuroradiologically apparent lesion. However, voxel-based morphometry (VBM), comparing the MRI to an education-, age-, and sex-matched control group (n = 20), disclosed distinct gray matter decreases in left temporopolar cortex and a region between right posterior parahippocampal and lingual cortex. Although the results of VBM-based comparisons between a single case and a healthy control group are generally susceptible to differences unrelated to the specific symptoms of the case, we believe that our data suggest a causal role of the cortical areas detected since the retrograde memory deficit is the preeminent neuropsychological difference between patient and controls. This was paralleled by grey matter differences in central nodes of the retrograde memory network. We therefore suggest that these subtle alterations represent structural correlates of the focal retrograde amnesia in our patient. Beyond the implications for the diagnosis and etiology of FRA, our results advocate the use of VBM in conditions that do not show abnormalities in clinical radiological assessment, but show distinct neuropsychological deficits.
Introduction
The loss of retrograde memory contrasted with normal learning of new information constitutes a rare memory disorder termed "focal retrograde amnesia" (FRA). Case studies demonstrated this syndrome to be associated with various neurological disorders such as traumatic brain injury, encephalitis, hypoxia or epilepsy [1][2][3]. Size and localization of related lesions vary and include neocortical, limbic and brain stem structures [1][2][3]. However, based on scientific evidence, medial temporal, temporopolar and frontal cortices play a key role in remote memory functions and should thus be involved in the pathophysiology of FRA [4].
In patients without detectable brain lesions, the differentiation of a psychogenic cause is difficult. Here, the term "functional retrograde amnesia" has been coined [5,6], leaving it controversial whether the syndrome is of purely psychogenic nature or whether subtle structural/metabolic changes may account for the memory impairment [5,7]. Merging both concepts, Kopelman proposed that both organic and functional/psychogenic factors interactively contribute to the presentation of the deficit [8].
Here we report a case of persistent FRA after an epileptic seizure. Neurological examination was unremarkable and high-resolution MRI showed no neuroradiologically apparent lesion. However, voxel-based morphometry (VBM), comparing the patient's MRI to a control group of male subjects of the same age and educational background, disclosed distinct gray matter decreases in left temporopolar and right posterior parahippocampal/lingual cortices.
Case report
The study adhered to the declaration of Helsinki and was approved by the ethics committee of the University of Leipzig. Both patient and healthy control subjects gave informed written consent to the participation in the study and the publication of this report.
In February 2009 a 24-year-old male engineering student suffered a sudden unwitnessed loss of consciousness. A prolonged postictal state and bitemporal sharp-wave complexes in the initial EEG supported the diagnosis of an epileptic seizure. All other paraclinical measures (including CSF) were within normal limits. Under Lamotrigine therapy, follow-up standard EEGs and EEG video monitoring did not reveal any epileptic activity. Clinically the patient reported frequent déjà-vues (4 times per week) prior to the start of the effective anticonvulsant therapy.
Immediately after the seizure, the patient experienced a profound retrograde amnesia covering his entire prior life, while anterograde memory functions were unaffected. Initially this persisting FRA was considered dissociative in origin. Six months after the event the patient was admitted to our clinic. The patient and his parents reported that at first the amnestic symptoms were severe. The patient only recognized his parents but not his fiancée. He had been disoriented with regard to time, space and personal identity. Since then, symptoms had improved, but he still had extensive memory gaps regarding personal memories of e.g. places and persons extending back to his childhood. On neurological and psychiatric examination the right-handed patient was completely unremarkable.
Control subjects
20 male subjects (age 24.2 ± 1.9 SD) served as controls for the VBM analysis. All subjects were healthy and had no history of any neurological or psychiatric disease, which was assessed by a certified neurologist. To attenuate the risk of accidental differences of memory-unrelated cognitive abilities between the single patient and the control group, we carefully selected the controls with regard to educational status. Therefore only age- and sex-matched volunteers, who had passed their final high-school exam ('Abitur') with comparable success and were currently university students of comparable status, were included.
Pre-processing of T1-weighted images was performed using SPM5 (Wellcome Trust Centre for Neuroimaging, UCL, London, UK; http://www.fil.ion.ucl.ac.uk/spm) implemented in the VBM Toolbox 5.1 (Christian Gaser, Department of Psychiatry, University of Jena, Germany; http://dbm.neuro.uni-jena.de/vbm.html) under MatLab 7.7 (The MathWorks Inc., Sherborn, MA, USA). Standard routines and default parameters of the VBM 5.1 toolbox were applied. Images were bias corrected, pre-registered to standardized Montreal Neurological Institute (MNI) space using rigid-body transformation (with translation and rotation only) and segmented using the "unified segmentation" approach [23]. Segmentation in SPM5 was based on a modified Gaussian mixture model to avoid misclassification. Information was combined from the intensity distribution of the image and prior information for all tissue classes by using prior probability maps that were derived from a large number of subjects. The prior probability maps were warped to the data to minimize the impact of template and priors. To remove isolated voxels of one tissue class within a cluster of voxels belonging to a different tissue class, a hidden Markov random field model with adaptive weighting was used. The warping to MNI space was performed using both linear and non-linear transformations. Spatial normalization expands and contracts some brain regions. Grey matter segments were therefore modulated (i.e., scaled) by the Jacobian determinants of the deformations to account for local expansion and compression introduced by non-linear transformation. Finally, the grey matter images were smoothed with an 8-mm full-width at half-maximum (FWHM) isotropic Gaussian kernel. This was done to reduce errors related to intersubject variability in local anatomy and to render the imaging data more normally distributed. For statistical analysis, voxel-wise gray values of the patient were compared to those of the control group. We assumed a t-distribution of the control group data and used a 2-sample t-test (group 1: patient, group 2: controls). A covariate accounted for subjects' age. Data were analyzed on the whole-brain level and were corrected for multiple comparisons using the family-wise error (FWE) correction with p < .01. Results were corrected for non-isotropic smoothness [24]. Stereotactic coordinates are reported in MNI space.
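As a rough illustration of this voxel-wise comparison, the sketch below contrasts one patient's smoothed, modulated grey-matter map against 20 controls with a Crawford-Howell-style single-case t-test and a Bonferroni threshold; it only approximates the SPM5 pipeline described above (no age covariate, no non-isotropic smoothness correction), and the file names are hypothetical.

```python
import numpy as np
import nibabel as nib
from scipy import stats

# Load smoothed, modulated grey-matter segments (hypothetical file names)
controls = np.stack(
    [nib.load(f"control_{i:02d}_smwc1.nii").get_fdata() for i in range(1, 21)]
)
patient = nib.load("patient_smwc1.nii").get_fdata()

mean = controls.mean(axis=0)
sd = controls.std(axis=0, ddof=1)
mask = (mean > 0.1) & (sd > 0)       # crude grey-matter mask (assumption)

# Crawford-Howell single-case t: (x - mean) / (s * sqrt(1 + 1/n))
n = controls.shape[0]
t = np.zeros_like(patient)
t[mask] = (patient[mask] - mean[mask]) / (sd[mask] * np.sqrt(1 + 1 / n))

p = stats.t.sf(np.abs(t), df=n - 1)  # one-sided p per voxel
alpha = 0.01 / mask.sum()            # Bonferroni FWE over in-mask voxels
significant = (p < alpha) & mask
print(f"{significant.sum()} voxels survive FWE-corrected p < .01")
```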
Neuropsychological test results
The neuropsychological assessment revealed average to above-average anterograde, but defective retrograde memory. A pronounced degradation of episodic memory was found throughout his premorbid life (Table 1 details the memory test results). Due to the patient's young age, a number of items of retrograde memory tests did not apply (e.g. wedding, children, former hospital visits). Thus the time period of 5 years before onset of the amnesia was not fully covered by the test. Therefore we additionally conducted a detailed interview, which confirmed severe episodic memory deficits. Personal semantic memory revealed borderline results, since relevant personal information had been reacquired. The patient was able to describe episodes of his life according to what parents and friends had told him, but failed to provide details as are characteristic of personal memories. General semantic memory showed no abnormality. No clear temporal gradient of the amnestic syndrome was found.
Apart from the mnestic deficit, tests for verbal fluency and processing speed were below average. Otherwise cognitive testing was unremarkable (Table 2).
With regard to a potential psychodynamic cause of the memory deficit, repeated anamnestic exploration and clinical observation throughout several weeks of therapy did not reveal any plausible psychogenic cause. This was supported by interviews with his relatives. The patient showed symptoms of mood disturbance and was insecure in social interaction, both clearly reactions to his memory impairment. In order to avoid potentially embarrassing situations, when meeting people he had formerly known, he avoided social contact. He also was anxious about his university career. These symptoms corresponded with the results of a formal evaluation of psychopathological symptoms [21] (see Table 2). No symptoms of anxiety or depression had been apparent prior to the amnesia. Neuropsychological therapy focused on helping him overcome social avoidance. Strategies were developed how to react when meeting people he could not remember. Regarding his work, mostly consisting of computer programming, he had largely preserved semantic and procedural knowledge. Information that he did not remember was easily relearnt with the help of his colleagues.
Imaging results
High-resolution MRI showed no lesion or obvious morphological abnormality as confirmed by an experienced neuroradiologist. VBM was used to compare the patient's MRI to a group of 20 age-matched male control subjects (Figure 1). Here, the patient showed highly significant gray matter volume decreases within portions of the left temporopolar cortex and a region between right posterior parahippocampal and lingual cortex.
Discussion
To our knowledge this is the first report on VBM-based demonstration of subtle changes in the memory network in a patient suffering from FRA without neuroradiologically detectable MRI abnormalities. The aetiology of these changes remains intricate, but left temporopolar and right posterior parahippocampal/lingual gray matter decreases indicate a subtle abnormality in key areas of episodic memory and may allow the differentiation of FRA from purely psychogenic amnesia in this patient, whose neurological and neuropsychological assessment was otherwise unremarkable. We are aware of the potential risk that variability between individuals (irrespective of specific pathological symptoms) may confound single-case versus group comparisons. However, 'perfect matching' with regard to all cognitive factors is impossible due to the large number of neuro-cognitive domains. We believe that using a control group that is carefully matched not only for age and gender but also for educational background and current status attenuates the potential of contingent findings. Therefore we propose that VBM-based approaches have considerable potential in patients with neuropsychological disorders lacking a clear lesion in clinical MRI and who lack a previous history of neurological, psychiatric or psychogenic disorders. Due to the potential confound induced by the comparison between a single case and a group of controls, a correspondence between the affected areas and areas which have been attributed to the impaired neuro-cognitive function is necessary to corroborate the assumed causal relationship.
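One way to quantify such a correspondence, used further below in this Discussion, is to convert the patient's MNI peaks to Talairach space and measure their distance to meta-analytic peaks. A hedged sketch with Brett's approximate mni2tal transform and placeholder coordinates (not the study's actual peaks):

```python
import numpy as np

def mni2tal(xyz):
    """Matthew Brett's approximate MNI -> Talairach transform."""
    x, y, z = xyz
    if z >= 0:
        return np.array([0.9900 * x,
                         0.9688 * y + 0.0460 * z,
                         -0.0485 * y + 0.9189 * z])
    return np.array([0.9900 * x,
                     0.9688 * y + 0.0420 * z,
                     -0.0485 * y + 0.8390 * z])

patient_peak_mni = np.array([-40.0, 10.0, -32.0])  # hypothetical peak
meta_peak_tal = np.array([-38.0, 8.0, -20.0])      # hypothetical peak

diff = mni2tal(patient_peak_mni) - meta_peak_tal
print("per-axis distance (mm):", np.abs(diff).round(1))
print("Euclidean distance (mm):", np.linalg.norm(diff).round(1))
```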
It may be argued that (sub)clinical seizure activity led to a functional disruption of neocortical networks involved in storage or retrieval of remote memories [3]. However, antiepileptic treatment clearly improved déjà-vues and yielded normal EEG findings while the FRA persisted. Since seizures were not witnessed, unrecognized head trauma might have caused FRA in our patient, as previously reported [7]. Yet, MR imaging including T2* did not show any signs of a structural lesion. Thus, the case fulfils all criteria of a "functional retrograde amnesia" [5]. It is still controversial whether this concept is of organic or/and psychogenic etiology [6]. In our patient, neither a detailed psychiatric interview nor psychometric testing revealed any predisposing factors for a psychogenic amnesia [8]. On the contrary, VBM analysis corroborates that FRA was not purely a psychogenic memory loss, as it disclosed grey matter abnormalities in left temporopolar and right posterior parahippocampal/lingual cortex.
It is controversial which brain structures underlie the retrieval of past memories. In a meta-analysis on neuroimaging studies of autobiographic memory (AM) [25], a network of the regions associated with AM processing was identified and classified into "core regions", "secondary regions" and "regions that were infrequently activated across studies". Here, core regions included the medial and ventrolateral prefrontal cortices, medial and lateral temporal cortices, temporoparietal junction, retrosplenial/posterior cingulate cortex and the cerebellum [25]. With respect to our findings, the left temporal pole is considered to be a "secondary region" of autobiographical memory processing according to this classification. The second region found in our study, the posterior parahippocampal/lingual cortex, most closely corresponds to a broader region in the meta-analysis, the right medial temporal lobe. This region is classified as a core region according to this meta-analysis and not further subdivided. In a second meta-analysis on neuroimaging studies of AM both left temporal pole and right parahippocampal cortex are found to be regions that show significant concordance across neuroimaging studies in healthy subjects [26]. In this study, peak coordinates of all clusters are reported, which allowed us to compare the distance between the peak coordinate in our study (after conversion from MNI into Talairach coordinates [27]) with the peak coordinate reported in this meta-analysis. The largest distance between both peak coordinates of the temporal pole cluster amounts to 13 mm (along the z-axis), while the largest distance between both peak coordinates in the right parahippocampal cluster amounts to 22 mm. Thus the abnormalities found in our patient project to regions relevant for AM also according to meta-analyses in healthy volunteers. The rather large distance between the peak within the clusters and the findings in our patient may stem from: (i) the analytical difference between peak localization and cluster distribution, (ii) the variance between the different operationalizations of AM, and (iii) a potential interindividual variance. Furthermore it seems relevant to compare our findings in a single patient to previous reports in other patients with similar amnestic syndromes [2,5,6,7,8,28,29,30]. In this vein, previous studies reported that lesions of the temporopolar cortex might result in FRA [2,28]. A meta-analysis of published cases of FRA suggests that damage to the anterior temporal lobe results in pronounced impairment of episodic but preserved semantic memory [29]. This corresponds to the pattern we found in our patient.
The majority of reported cases with retrograde memory deficits show multiple as opposed to isolated lesions [30,4]. Similarly our patient also showed VBM-based grey matter abnormalities at the border of posterior parahippocampal (PHC) and lingual cortex. PHC is part of the medial temporal lobe and contributes to fundamental functions sustaining retrograde memory [30,4]. It has been suggested that lesions confined to the hippocampus proper result in a temporally graded retrograde amnesia, while lesions involving adjacent areas, like PHC, cause severe, temporally extensive and ungraded amnesia, which converges with the findings in our patient [30]. Moreover, PHC is involved in familiarity judgements of memories. Interestingly, temporal lobe epilepsy patients have been reported to show interictal hypometabolism in this region associated with déjà-vues, a frequent symptom prior to anticonvulsant therapy in our patient [31]. It may be argued that the morphological alterations in our patient are unrelated to the memory deficit and represent the mere epiphenomenon of a cryptogenic temporal lobe epilepsy (CTLE). Indeed a recent VBM study did report abnormalities in CTLE patients [32]; however, therapy-refractory patients with a long history and high seizure frequency were enrolled (~35 seizures/year; epilepsy duration: ~24 years). On the contrary our patient had suffered only one seizure, rendering tissue damage due to repetitive and sustained seizure activity unlikely. Hence we consider the subtle parahippocampal/lingual and temporopolar alterations disclosed by VBM analysis to represent structural correlates of the deficit in the here reported FRA case. It cannot be entirely excluded that additional covert damage to other temporal lobe regions that was not detected by our analysis might contribute to the patient's deficit. Nevertheless, our findings point out that very subtle structural abnormalities in critical structures of the memory network might result in pronounced deficits of the autobiographical memory. The fact that we found abnormalities in two different structures on both hemispheres does not allow us to infer specific contributions of each structure or its laterality to the memory deficit in our patient. Nevertheless our results agree with the notion that combined lesions of MTL and neocortical structures can lead to FRA [30,4]. Beyond the implications for the diagnosis of FRA our results advocate the use of VBM in conditions that do not show abnormalities in clinical neuroradiological assessment. | 2014-10-01T00:00:00.000Z | 2011-10-19T00:00:00.000 | {
The majority of reported cases with retrograde memory deficits show multiple as opposed to isolated lesions [30,4]. Similarly our patient also showed VBM-based grey matter abnormalities in the border of posterior parahippocampal (PHC) and lingual cortex. PHC is part of the medial temporal lobe and contributes to fundamental functions sustaining retrograde memory [30,4]. It has been suggested, that lesions confined to the hippocampus proper result in a temporally graded retrograde amnesia, while lesions involving adjacent areas, like PHC, cause severe, temporally extensive and ungraded amnesia which converges with the findings in our patient [30]. Moreover, PHC is involved in familiarity judgements of memories. Interestingly, temporal lobe epilepsy patients have been reported to show interictal hypometabolism in this region associated with déjà-vues, a frequent symptom prior to anticonvulsant therapy in our patient [31]. It may be argued, that the morphological alterations in our patient are unrelated to the memory deficit and represent the mere epiphenomenon of a cryptogenic temporal lobe epilepsy (CTLE). Indeed a recent VBM-study did report abnormalities in CTLE patients [32], however, therapy refractory patients with a long history and high seizure frequency were enrolled (,35 seizures/ year; epilepsy duration: ,24 years). On the contrary our patient had suffered only one seizure rendering tissue damage due to repetitive and sustained seizure activity unlikely. Hence we consider the subtle parahippocampal/lingual and temporopolar alterations disclosed by VBM-analysis to represent structural correlates of the deficit in the here reported FRA-case. It cannot be entirely excluded that additional covert damage to other temporal lobe regions that was not detected by our analysis might contribute to the patient's deficit. Nevertheless, our findings point out that very subtle structural abnormalities in critical structures of the memory network might result in pronounced deficits of the autobiographical memory. The fact that we found abnormalities in two different structures on both hemispheres, does not allow us to infer specific contributions of each structure or its laterality to the memory deficit in our patient. Nevertheless our results agree with the notion, that combined lesions of MTL and neocortical structures can lead to FRA [30,4]. Beyond the implications for the diagnosis of FRA our results advocate the use of VBM in conditions that do not show abnormalities in clinical neuroradiological assessment. | 2014-10-01T00:00:00.000Z | 2011-10-19T00:00:00.000 | {
"year": 2011,
"sha1": "bc7d51aa31ec172f547e8657073cd8b6d02110bf",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0026538&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bc7d51aa31ec172f547e8657073cd8b6d02110bf",
"s2fieldsofstudy": [
"Psychology",
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118862106 | pes2o/s2orc | v3-fos-license | Pionic Contribution to Neutrinoless Double Beta Decay
It is well known that neutrinoless double beta decay is going to play a crucial role in settling the neutrino properties, which cannot be extracted from the neutrino oscillation data. It is, in particular, expected to settle the absolute scale of neutrino mass and determine whether the neutrinos are Majorana particles, i.e. they coincide with their own antiparticles. In order to extract the average neutrino mass from the data one must be able to estimate the contribution of all possible high-mass intermediate particles. The latter, which occur in practically all extensions of the standard model, can, in principle, be differentiated from the usual mass term, if data from various targets are available. One, however, must first be able to reliably calculate the corresponding nuclear matrix elements. Such calculations are extremely difficult since the effective transition operators are very short ranged. For such operators, processes like pionic contributions, which are usually negligible, turn out to be dominant. We study such an effect in a non-relativistic quark model for the pion and the nucleon.
INTRODUCTION
The discovery of neutrino oscillations can be considered as one of the greatest triumphs of modern physics. It began with atmospheric neutrino oscillations [1], interpreted as ν_μ → ν_τ oscillations, as well as ν_e disappearance in solar neutrinos [2]. These results have been recently confirmed by the KamLAND experiment [3], which exhibits evidence for reactor antineutrino disappearance. As a result of these experiments we have a pretty good idea of the neutrino mixing matrix and of the two independent quantities Δm², e.g. m₂² − m₁² and m₃² − m₂². Fortunately these two Δm² values are vastly different. This means that the relevant L/E parameters are very different. Thus for a given energy the experimental results can approximately be described as two-generation oscillations. For an accurate description of the data, however, a three-generation analysis [4]-[5] is necessary. We thus know that the neutrinos are massive, with two non-zero Δm², and they are admixed. We do not know, however, whether they are Majorana, i.e. the mass eigenstates coincide with their antiparticles, or of Dirac type, i.e. the mass eigenstates do not coincide with their antiparticles. Furthermore we do not know the absolute mass scale as well as the sign of Δm²₃₂. The first question can be settled by neutrinoless double beta decay (0νββ decay). The second will also, most likely, be settled by this process.
We should stress, of course, the fact that the light neutrino mediated process is not the only mechanism available for 0νββ [6]. Among those are some which involve heavy intermediate particles.
These lead to very short-ranged two-body effective transition operators, which must be handled with care due to the presence of the nuclear hard core. To this end three treatments have been proposed:
• Treat the nucleons as composite particles (two nucleon mode).
This can be done in the context of a non-relativistic quark model or simply by assigning to the nucleon a suitable form factor [7].
• Consider the possibility of a six-quark cluster in the nucleus [8].
• Consider other particles in the nuclear soup. The most prominent are pions in flight between the two interacting nucleons [6].
In the present study we will examine the last possibility. This was examined a long time ago [9] and it was revived in the context of R-parity violating supersymmetry a decade later [10,11,12] as well as recently [13]. It was shown that in the context of R-parity violating supersymmetry the pion mode is more important than the two nucleon mechanism. The same conclusion was reached recently in the context of effective field theory [14].
In the above treatments the pions were treated as elementary particles. This approach is reasonable in particle physics, but one knows, of course, that the hadrons involved are not elementary. Furthermore a crucial factorization approximation has to be made, by inserting only the vacuum as intermediate state (see Eqs (82) and (85) below). Finally, even if the hadrons are treated as elementary, in the interesting case of the pseudoscalar coupling an assumption had to be made about the quark mass, taken to be the current quark mass.
In this work we are going to adopt a different procedure. The hadrons will be assumed to have a quark substructure in the context of the harmonic oscillator. In the harmonic oscillator approximation the internal degrees of freedom can be separated from the center of mass motion. In this approach one derives the effective operator at the quark level by a suitable non-relativistic expansion of the elementary amplitude. In some processes in our formalism one extra qq̄ pair must be produced. This can be achieved either through the weak interaction itself or via the strong interaction. The net result is that, in this new approach, one obtains new types of operators, including some that are non-local at the nucleon level. One must weigh these advantages, however, against possible shortcomings of the need for a non-relativistic reduction of the transition operator at the quark level.
THE CONTRIBUTION OF PIONS IN FLIGHT BETWEEN NUCLEONS
As we have mentioned in the introduction, when the intermediate fermion, e.g. the Majorana neutrino, is very heavy, the transition operator becomes very short ranged. In this case the usual two nucleon mechanism may be suppressed due to the nuclear hard core and the contribution of other particles in the nuclear soup, such as pions, may dominate. These mechanisms at the nucleon level are illustrated in Fig. 1.
The two-body double beta decay operator, associated with heavy intermediate particle exchange, will be normalized in a way which is consistent with the light intermediate neutrino. We begin with the intermediate heavy neutrino. Then: L, R stand for left-handed and right-handed currents respectively, with The corresponding expression in momentum space becomes: The function A(p₁, p₂, p′₁, p′₂) depends on the assumed mechanism for the neutrinoless double beta decay.
FIG. 1: The double beta decay of two neutrons into two protons at the two nucleon level (a) arising when all the intermediate particles at the quark level are very heavy. The double beta decay of a neutron with the simultaneous production of a π + , which is then absorbed by another neutron converting it into a proton (b) (one pion mode). A neutron can also be converted into a proton and a π − . The π − then double beta decays into a π + , which subsequently is absorbed by another neutron converting into a proton (c) (two pion mode).
The factor η_N^{L,R} is not usually included in the nuclear matrix element. The factor R₀m_p/m_e will be absorbed into the effective nuclear operator, while the factor 4π/m_p² will eventually be included in the effective coupling, as will be discussed in this work.
With the above expressions the formula for the life time due to heavy intermediate neutrinos in left-handed V-A theories can be cast in the form: The two nucleon contribution (f_V/f_A)² Ω_F − Ω_GT was inserted in the above equation merely for comparison.
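The explicit life-time formula is lost in this extracted text; for orientation, the generic factorized structure used throughout the 0νββ literature is shown below, where G^{0ν} is a phase-space factor, M^{0ν} the nuclear matrix element, and η_N the heavy-particle lepton-number-violating parameter. This is a standard schematic form, not the paper's exact equation.

```latex
% Schematic half-life factorization for 0vbb mediated by a heavy particle
\[
  \left[ T^{0\nu}_{1/2} \right]^{-1}
  = G^{0\nu}\,\left| M^{0\nu} \right|^{2} \left| \eta_N \right|^{2}
\]
```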
The case of other heavy intermediate particles, such as those encountered in R-parity violating supersymmetry, can be handled in a similar fashion: In both cases: where R₀ is the nuclear radius, x_π = m_π r_ij and F^{(1)} The function A(p₁, p₂, p′₁, p′₂) depends on the pion mode under consideration.
THE 2-PION MODE
The spin dependence of the transition operator is in this case trivial. So we will focus on the orbital structure of the operator. The function A(p₁, p₂, p′₁, p′₂) is independent of the momenta in the standard V-A theory as well as in the case of the scalar (S-S) theory. It is, however, a model-dependent function in the case of pseudoscalar (P-P) interaction encountered, e.g., in R-parity violating SUSY mediated double beta decay. In the last case we find where A_i is the amplitude resulting from the non-relativistic reduction of the pseudoscalar involved in the d → u coupling, i.e. ū where σ_i is the spin of the quark i and We find it convenient to rewrite them as follows: where q = P_π is the momentum of the pion in flight between the two nucleons and ρ and ρ′ are the relative internal momenta (see next subsection). One normally ignores at this level the momentum carried away by the two leptons. The 2π 0νββ decay contribution in the case of heavy Majorana neutrino or any other Majorana fermion is explicitly shown in Fig. 2.
Orbital integrals in the two pion exchange.
The pion wave function is given by: where P_π is the pion momentum and with p₁ and p₂ being the momenta of the quark and antiquark participating in the pion. This wave function is normalized in the usual way: φ_π(ρ) is described by a 1s harmonic oscillator state. In momentum space it takes the form: Thus the orbital matrix element in this case takes the form: where x = b_π/b_N. b_π and b_N are the harmonic oscillator (HO) size parameters for the pion and the nucleon respectively. We have decided to introduce the ratio x as a variable to be adjusted. In V-A theories after incorporating the spin we find: where ⟨σ₁·σ₂⟩ = −3 is the spin ME. One now can construct the effective transition operator in coordinate space at the nuclear level. The effective coupling in V-A theory is given [6] by: Using f²_{πNN} = 0.08 and b_N = 1.0 fm we find α_{2π} = 0.013 and 0.11 for x = 1.0 and 0.5 respectively. For the scalar interaction one gets the value f²_S/4 with the value of f_S depending on the specific particle model.
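Since the explicit form of the orbital integral is lost in this extracted text, the sketch below only illustrates the generic type of integral involved: a 1s harmonic-oscillator Gaussian weight over the exchanged momentum folded with two pion propagators. The integrand is an assumption for illustration, with arbitrary normalization.

```python
import numpy as np
from scipy.integrate import quad

HBARC = 197.327  # MeV fm
M_PI = 139.57    # MeV

def f2pi_like(x, b_N=1.0):
    """Toy 2pi-mode profile vs x = b_pi / b_N; assumed integrand."""
    b_pi = x * b_N                      # pion HO size parameter in fm
    def integrand(q):                   # q in MeV
        gauss = np.exp(-(q * b_pi / HBARC) ** 2 / 2.0)  # 1s HO weight
        prop = 1.0 / (q ** 2 + M_PI ** 2) ** 2          # two propagators
        return q ** 2 * gauss * prop
    val, _ = quad(integrand, 0.0, 5000.0)
    return val

for x in (0.5, 1.0, 1.5):
    print(f"x = {x}: integral = {f2pi_like(x):.3e}  (arbitrary units)")
```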
The dependence of the results on the pion size parameter is exhibited in Fig. 3. In the case of the pseudoscalar coupling, since the pion has spin zero, we encounter the combination: In this case one can show that the orbital amplitude is where q is the momentum of the propagating pion and The above equation can be rewritten in a way that the pion propagator is manifest: In other words there appear two terms, c⁰_{2π} and c^q_{2π}. The first gives rise to an effective operator similar to that of the V-A theory with a coupling The second term, contributing when the u and d quarks are not degenerate, yields a coupling α_{2π}(Ω_{1π}) where: which is associated with the operator with one pion propagator less, i.e. that encountered in the 1π mode (see below). Such an operator is absent in the elementary particle treatment, even though the quarks are assumed to be non-degenerate.
FIG. 3: f^{(1)}_{2π}(x) on the left and f^{(2)}_{2π}(x) on the right as a function of x = b_π/b_N.
THE 1-PION MODE
In this case a positively charged pion, produced in virtual double beta decay of a neutron into a proton, is absorbed by another neutron converting it into a proton. At the quark level the first of these steps is exhibited in Figs 4-6. In these figures a qq̄ pair is created out of the vacuum. In the first two figures this is achieved as, e.g., in a gluon exchange [15] or a multigluon exchange simulated in the ³P₀ model [16], [17], [18], [19]. The latter is a fairly old model, which still continues to be successfully applied in the description of meson decays [20]. In Fig. 6 this pair is created by the weak interaction itself.
FIG. 4: The pion-mediated 0νββ decay in the so-called 1π mode. At the top we show the diagram in which the quarks of the pion are spectators, i.e. the intermediate heavy fermion f is exchanged between the other two quarks. × indicates that a qq̄ pair is created out of the vacuum in the context of a multigluon exchange. We will call it the direct diagram.
The orbital part at the quark level
Orbital wave functions in momentum space are expressed in terms of Jacobi coordinates:
FIG. 5: The same as in Fig. 4 involving the exchange diagram. In this case the quark involved in the pion participates in the exchange of the heavy fermion f, co-operating this way with another quark belonging in the nucleon.
FIG. 6: The same as in Fig. 5 but in a novel mechanism, i.e. one in which the qq pair is produced by the weak interaction itself.
where P_π, P and P′ are the momenta of the pion and the two nucleons respectively and where p_i, i = 1, 3 are the momenta of the three quarks of one nucleon, p′₁, p′₂, p′₄ the momenta of the three quarks of the other nucleon and p₄, p′₃ are those of the quarks involved in the pion. This notation was chosen since the interaction preserves the fermion lines p_i ↔ p′_i. The above wave functions were normalized in the usual way: The internal wave functions are given by: The pion wave function has already been defined (see Eq. (19)), except that sometimes we will write: The integrals over the momentum variables Q, Q′ and Q_π can be trivially performed due to the δ functions. Thus the orbital integral becomes: where Ω_ββ depends on the mechanism involved as we now discuss.
1. The qq̄ pair is created by the 0νββ operator (0ν qq̄ case). The case in which the qq̄ pair is created by the 0νββ operator (see fig. 6). Then up to terms linear in the momentum the effective operator takes the form: 2m_u (scalar and vector) (44) It is, of course, understood that the scalar and pseudoscalar must be multiplied by suitable coupling constants. The full operator takes the form: The product of the three δ functions can be cast in the form By setting ξ′ = ξ and η′ = η + (2/3)q we get After the integration (see next section) we get: The last expressions result in the case of the constituent mass for the quarks, m_u = m_d = m_p/3. In the above equations:
2. Double beta decay and strong qq̄ production (³P₀ qq̄ case). In this case one needs the collaborative effect of the 0νββ interaction acting between quarks together with the strong interaction, which creates a pion out of the vacuum (à la ³P₀ model or multigluon exchange): where g′_r is a dimensionless constant proportional to the parameter g_r = 13.4 ± 0.1, which is known from experiment. One finds where φ_π(0) is the pion wave function at the origin.
• The direct term in the one pion contribution. In this case (see fig. 4) neither of the two interacting quarks participates in the pion as defined above. Thus we get: The product of the above three δ functions can be cast in the form The first of these δ-functions expresses momentum conservation. Going into the Jacobi variables we find: We find it convenient to use the above δ functions to obtain: One finds: Furthermore the A-terms, appearing in the case of the pseudoscalar contribution, take the form: Thus using the corresponding δ-functions the η and η′ integrations can be done trivially.
• The exchange term in the one pion contribution. By this we mean that one of the interacting particles participates in the pion (see fig. 5). Proceeding as above we have: Going into the Jacobi variables we find: We find it convenient to use the above δ functions to obtain: Thus the ξ′ and η′ integrations can be done trivially. Furthermore the A-terms, appearing in the case of the pseudoscalar contribution, take the form: i σ₃ × σ₄ (axial)
The 0νββ decay amplitude at the nucleon level.
Having performed the orbital integrals we encountered in the previous section, we must evaluate the spin-flavor ME for the various operators encountered above, classified according to their spin rank. The obtained matrix elements, in units of the nucleon spin ME, are included in Table I. Using these results one can obtain the needed amplitude at the nucleon level. As expected from the above discussion we will consider three possibilities:
The 0ν qq̄ case
In this case we can write the amplitude as where σ_N is the nucleon spin, ME(s−f) is the spin-flavor matrix element (see Table I) and J_orb is the radial integral. One finds The coefficients C_i can be read off from Eqs (51)-(53), namely The term p_N of the amplitude will lead to a non-local effective operator in coordinate space.
2. The ³P₀ qq̄ case
Double beta decay proceeds via two quarks in a state with isospin one, which is color antisymmetric. So the two quarks must be in a spin one state. So there is no contribution in V-A theories, since the vector and the axial vector contributions are identical. For the scalar and pseudoscalar cases the needed couplings depend on the particle model assumed. In R-parity violating SUSY the coupling is, e.g., (3/8)(η_T + (5/3)η_{PS}), found in [11]. In our discussion we will not include such a model-dependent coupling. We will distinguish the two possibilities:
a) The direct term. In this case we can write the amplitude as In the case of the scalar contribution we find from Table I that In the case of the pseudoscalar contribution (see Appendix) and in the local approximation p_N = 0 we find: We expect this to be a good approximation. In any event it makes the operator tractable.
The corresponding orbital integral is: We note with satisfaction that any uncertainties in the pion w.f. have dropped out, at least if the non-local terms in the exponential are ignored.
b) The exchange term.
The amplitude takes the form: Again there is no contribution in V-A theories, since the vector and the axial vector contributions are identical. In the case of the scalar contribution we find from Table I that In the case of the pseudoscalar contribution for the constituent quark masses we get: where x = b_π/b_N. Note the presence of the q² in the first term. This will lead to an operator with a different radial dependence, i.e. F^{(k)}_i(x) (see Eq. (10)). The corresponding orbital integral for the exchange term is: or In this instance the obtained results depend on the pion w.f. at the origin (via x).
RESULTS
Our main results are the coefficients α_{2π} and α_{1π}, which multiply the standard nuclear matrix elements. We will not elaborate further on the new non-local terms (at the nucleon level).
The coupling coefficients α_{2π}
Before presenting our results we should mention that in the elementary particle treatment [11] one can write This was obtained under the factorization approximation: The parameter h_π is given by Returning to our approach, we note that the non-relativistic reduction is applicable in the constituent quark mass framework, m_u = m_d ≈ m_N/3. In this case the pseudoscalar term contribution becomes: i.e. it is quite a bit smaller. It is also much smaller than the value 0.20 obtained in the elementary particle treatment [11] using current quark masses. This disagreement cannot be healed by the fact that in the present case we encounter a very strong dependence of the results on the pion size parameter (see Fig. 3), unless we use very unrealistic values of the pion size parameter. One expects, of course, an enhancement of the pseudoscalar contribution if one uses the current quark masses, since they are assumed to be very small. Indeed this way for typical values x = 1, b_N = 1 fm, m_d = 5 MeV and m_u = 10 MeV we obtain α_{2π} = −1.3 and α_{2π}(Ω_{1π}) = 0.08, which are very large. We should mention, however, that the validity of the non-relativistic reduction at the quark level may be questionable in this case.
The coupling coefficients α_{1π}
Before proceeding further we will briefly present how the coefficient α_{1π} was obtained in the context of the elementary particle treatment [11]: The needed parameters were obtained using the factorization approximation. One writes in the case of the 1π mode ⟨p|J_P J_P|nπ⟩ = (5/3)⟨p|J_P|n⟩⟨0|J_P|π⁻⟩, ⟨p|J_P|n⟩ = F_P ≈ 4.41 (85) The matrix element ⟨0|J_P|π⁻⟩ was given above (see Eq. (82)). Thus these authors [11] find: Returning to our approach, these coefficients are obtained by the following procedure: First we write Then, ignoring the momentum dependence in the exponential, we get: 1. Double beta decay only. From Eqs (69)-(71) we see that the only local contribution comes from the axial current.
Using m_d = 5 MeV, m_u = 10 MeV and g_r = 13.4 we get On the other hand for the constituent masses we find: The corresponding coefficient that must multiply the nuclear matrix element is α_{1π}
The direct term
There is no contribution of the direct diagram if the non-local terms are ignored.
The exchange term
• In the case of the current quark masses we get the standard term: In addition we have an operator which results from the term in the amplitude which was cubic in q. Thus we factor out the q²/m²_π and absorb it in the effective transition operator. In the remaining coefficient we merely replace q² by m²_π. Thus Proceeding as above we get respectively: The coefficient f^{cur}_{1π}(x) is associated with the standard operator Ω_{1π}(x_π), while g^{cur}_{1π}(x) must be linked with a new type of operator Ω̃_{1π}(x_π) with modified radial dependence, i.e. F^{(k)} (see Eq. (10)). Both coefficients are so normalized that f^{cur}_{1π}(1) = g^{cur}_{1π}(1) = 1. In any case the use of current quark masses leads to very large values.
• The constituent quark masses.
In this case we get: Again the coefficient f^{con}_{1π}(x) is associated with the standard operator, while g^{con}_{1π}(x) must be linked with the operator Ω̃_{1π}(x_π), with f^{con}_{1π}(1) = g^{con}_{1π}(1) = 1. The functions f^A_{1π}, f^{cur}_{1π}(x), g^{cur}_{1π}(x), f^{con}_{1π}(x) and g^{con}_{1π}(x) are shown in Fig. 7. For x = 1, for the standard local 1π operator, considering all contributions mentioned above with constituent quark masses we find α_{1π} = 7.3 × 10⁻², which is in size almost a factor of 2 larger than that obtained in the elementary particle treatment [11] (see Eq. (86)). Note, however, that our results depend on the pion size parameter.
FIG. 7: ... (see Eq. (94)); the long dash is associated with the exchange q-independent coefficient (f^{cur}_{1π}) and the short dash with that of g^{cur}_{1π} (see Eq. (97)). On the right we show the same quantities obtained with constituent quark masses.
DISCUSSION
In the present paper we have considered the effective 0νββ decay operator associated with the exchange of heavy particles mediated by pions in flight between nucleons. A harmonic oscillator non-relativistic quark model in momentum space was employed for the pion and the nucleon. This allowed one to separate out the relative from the center of mass motion. The ratio of the pion to the nucleon harmonic oscillator parameter, x = b_π/b_N, was treated as a parameter. When needed, a constituent quark mass equal to 1/3 of the nucleon mass was employed. The obtained results were compared to the elementary particle treatment, with current quark masses, previously employed. In the case of the two pion mode we find a new term with different momentum dependence, which is not present in the elementary particle treatment. This gives rise to a new operator, which has the same structure as the one previously associated with the one pion mechanism.
In connection with the one pion mechanism we found that there exist three diagrams, which cannot be distinguished in the elementary particle treatment, namely:
1. Diagrams in which the qq̄ pair is created out of the vacuum via the strong interaction.
In this case we employed the ³P₀ model. The strength of this interaction was fitted to the pion-nucleon coupling g_r. We distinguished two possibilities:
• The two interacting quarks participate only in the structure of the nucleon.
• One of the interacting quarks participates in the structure of the pion.
2. Diagrams in which the qq̄ pair is created by the weak interaction itself.
Depending on the mechanism we encountered new non-local terms, i.e. terms which depend on the nucleon momentum. These will lead to new types of effective nuclear operators, which have not been examined up to now. The results obtained in the present calculation depend among other things on the ratio of the pion to nucleon size parameters. Using reasonable values for this ratio we obtain values of α_{1π} which are in good agreement with those obtained in the elementary particle treatment. Regarding the couplings α_{2π}, however, we find that they are slightly smaller than those obtained in the elementary particle treatment in the case of the V-A theory. They are, however, quite a bit smaller than those obtained in the case of the pseudoscalar term, when the constituent quark masses are used. We can, of course, obtain much larger values for the pseudoscalar term if the current quark masses are used. Admittedly, however, it may not be very consistent to do so in our approach, since it is essentially a non-relativistic treatment. We thus suspect that the small current quark masses are behind the large values found in the elementary particle treatment.
In summary, taking into account the fact that a number of approximations are behind both approaches, we may say that there exists a reasonable agreement between them, which gives a degree of confidence in both. A more complete comparison can, of course, be made only after the inclusion in the calculation of the nuclear matrix elements of the new operators found in the present approach, namely: i) the local operator Ω̃_{1π}(x_π) resulting from terms cubic in q and ii) the non-local operators, which depend on the nucleon momentum. | 2009-11-11T11:30:56.000Z | 2009-11-11T00:00:00.000 | {
"year": 2009,
"sha1": "37818210449cfc6596e8310e7f63f34f9566c5dc",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0911.2117",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "37818210449cfc6596e8310e7f63f34f9566c5dc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
46933011 | pes2o/s2orc | v3-fos-license | Expression of HSP90AA1/HSPA8 in hepatocellular carcinoma patients with depression
Background Depression may influence susceptibility to cancer, and the genes and signaling pathways that may mediate this association are unclear. Methods Here, we used isobaric tagging for relative and absolute quantitation, 2-dimensional liquid chromatography, and mass spectrometry to compare proteins expressed in hepatocellular carcinoma in patients with or without depression. Results A total of 89 proteins were up-regulated and 44 were down-regulated in patients with depression. HSP90AA1 and HSPA8 were up-regulated, which correlated with elevated levels of VEGF, VEGFR2, PI3K, and AKT1 and reduced levels of caspase 9 and BAD. Disease-free survival rate was significantly lower and risk of tumor recurrence was significantly higher in patients with depression, which may reflect high HSP90AA1/HSPA8 expression. Conclusion These results suggest that the VEGF/VEGFR2 pathway may be associated with HCC recurrence in patients expressing high levels of HSP90AA1/HSPA8.
Introduction
Hepatocellular carcinoma (HCC) is one of the most common malignancies in the world. 1 An average of 700,000 new cases are diagnosed each year, 50% of which occur in the People's Republic of China. 2 The Guangxi area has a high incidence of liver cancer, where it is associated with a crude mortality rate of 27.31 per 10 million (41.78 males and 11.71 females per 10 million), placing it first among all types of malignant tumors. 3 Some HCC patients may chronically experience negative emotions related to their health condition, especially because of the lack of appropriate long-term psychological counseling and social support.
This negative emotional state, like other psychosocial factors, may reduce the number of immune cells and cytokine secretion 4 by influencing the axis comprising the hypothalamic-pituitary-adrenal gland, gonads, and thyroid, which regulates neurotransmitter secretion. 5 In this way, negative emotional state may increase susceptibility to cancer and other diseases. 6 In fact, at least one study has suggested that cancer susceptibility may be higher among individuals who constantly struggle to meet the needs of others, compromise their own desires for the sake of others, and suppress negative emotions. 7 Incidence of cancer may be more than threefold greater among individuals with depression than among others, 8 and one study found depression to be present among 74.1% of cancer patients. 9 Depression may increase cancer risk by weakening the body's ability to conduct immune surveillance and repair damaged DNA. This facilitates the transformation of proto-oncogenes, inducing tumor growth, recurrence, and metastasis.
In order to identify such biomarkers and help elucidate HCC pathways that may occur in a background of depression, we performed isobaric tagging of proteins in tumor tissues from HCC patients with or without depression. We identified 89 proteins that were upregulated and 44 that were downregulated in the presence of depression. Gene ontology and pathway analyses were performed, and protein-protein interaction (PPI) networks and modular analyses were carried out to identify central (hub) proteins in depression-related HCC. Potential correlations of these hub proteins and key pathways with clinicopathological features and prognosis were explored.
Methods
The study protocol for this trial was approved by the Guangxi Medical University Affiliated Tumor Hospital Ethics Committee and was designed in accordance with the Helsinki Declaration (2013 version). Written informed consent was obtained from all patients.

Diagnostic criteria

HCC tissues were taken from patients who had been diagnosed with HCC strictly according to the guidelines of the American Association for the Study of Liver Diseases. 12 Tumors were staged according to the Barcelona Clinic Liver Cancer system. 13

Inclusion criteria

HCC tissues were included if the patient 1) had been diagnosed with HCC based on liver tissue pathology; 2) could understand and communicate well enough to complete the depression assessments in this study; 3) had no personal or family history of mental illness or unconsciousness; and 4) provided written informed consent to participate in the study.
Exclusion criteria
Patients and their tumor tissue were excluded from the study if they 1) were diagnosed with a condition other than HCC; 2) were diagnosed with a combination of HCC and another malignancy; 3) could not understand or communicate sufficiently to complete the depression assessments in this study; 4) had personal or family history of mental illness; or 5) did not want to participate in this study.
Follow-up
All patients were followed up at 1 month after hepatectomy and every 3 months thereafter. During each follow-up visit, the following tests were performed: chest X-ray, abdominal computed tomography or magnetic resonance imaging, serum alpha-fetoprotein assay, and abdominal ultrasonography. The last follow-up was in August 2017.
Depression assessment
All patients in the study were assessed for the presence and severity of depression. The Self-Rating Depression Scale (SDS) 14 features 20 items that are scored to yield a depression severity index (total score/80). An index below 0.50 is considered to indicate no depression; 0.50-0.59, mild depression; 0.60-0.69, moderate depression; and ≥0.70, severe depression. The Hospital Anxiety and Depression Scale-Depression (HADS-D) 15 is the depression subscale of the 14-item HADS and comprises 7 items, each of which is assigned 0-3 points. Overall scores of 0-7 are considered to indicate no depression; 8-10, suspected depression; and 11-21, likely depression. For the purposes of the present study, we defined no depression as SDS<0.5 and HADS-D<7; mild depression, 0.5≤SDS≤0.59 and 8≤HADS-D≤10; and moderate/severe depression, SDS≥0.6 and HADS-D≥11.
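To make the combined rule concrete, here is a minimal sketch in Python of the grouping logic described above. The function name is ours, and the handling of scale-discordant patients (who fit none of the three definitions) is an assumption, since the text does not say how such cases were resolved:

```python
def classify_depression(sds_index: float, hads_d: int) -> str:
    """Combine the SDS severity index and HADS-D score into the study groups.

    Hypothetical helper illustrating the stated cutoffs; the authors do not
    publish code, and the fallback branch for discordant scores is assumed.
    """
    if sds_index < 0.50 and hads_d < 7:
        return "no depression"
    if 0.50 <= sds_index <= 0.59 and 8 <= hads_d <= 10:
        return "mild depression"
    if sds_index >= 0.60 and hads_d >= 11:
        return "moderate/severe depression"
    return "unclassified"  # scales disagree; not covered by the definitions


print(classify_depression(0.65, 14))  # -> moderate/severe depression
```

Note that the stated cutoffs leave some combinations (for example, SDS 0.55 with HADS-D 12) outside all three groups; the explicit fallback branch makes that visible.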
Proteomic analysis
Total protein was extracted from tumor tissues, and protein concentration was determined using the Bradford method and confirmed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis. Proteins were reduced, cysteines were blocked, and proteins were digested with trypsin (trypsin:protein, 1:40); after the trypsin was added, the mixture was vortexed and the digestion was allowed to proceed overnight at 37°C. The peptides differentially labeled by isobaric tagging for relative and absolute quantitation (iTRAQ) were separated by reverse-phase liquid chromatography on a reverse-phase separation column (ZORBAX 300SB-C18 column, 5 μm, 300 Å). Eluted polypeptides were analyzed using tandem mass spectrometry on an online QSTAR Pulsar™ XL MS/MS system (Applied Biosystems-MDS Sciex, Foster City, CA, USA) and RPLC column (ZORBAX 300SB-C18 column, 5 μm, 300 Å, 0.1-15 mm [Microm]). The iTRAQ-tagged peptides were fragmented by collision-induced dissociation (CID), generating reporter ions at m/z 113.1, 116.1, 117.1, 118.1, and 119.1 while also producing peptide fragment ions. This allowed sequencing of the tagged peptides and identification of the corresponding proteins. The ratio of peak areas of the iTRAQ reporter ions reflects the relative abundance of peptides and proteins in the sample.
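As a toy illustration of the quantitation principle in the last sentence, the sketch below (Python/NumPy) summarizes peptide-level reporter-ion peak areas into a single protein-level ratio. The median-of-ratios summary and the 1.5-fold cutoff are our assumptions, not a description of the authors' actual pipeline:

```python
import numpy as np

def protein_ratio(areas_depressed, areas_control):
    """Median of peptide-level reporter-ion area ratios for one protein.

    Illustrative only: the paper states that reporter-ion peak-area ratios
    reflect relative abundance, but does not specify the summary statistic.
    """
    ratios = np.asarray(areas_depressed, float) / np.asarray(areas_control, float)
    return float(np.median(ratios))


# One protein quantified by three peptides (made-up peak areas).
r = protein_ratio([1200.0, 950.0, 1100.0], [600.0, 500.0, 505.0])
print(f"ratio = {r:.2f}; called up-regulated at a 1.5-fold cutoff: {r >= 1.5}")
```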
Gene ontology and pathway enrichment analysis
Candidate functions and pathways enriched in the presence of depression were analyzed using several online databases, including DAVID, KEGG PATHWAY (www.genome.jp/kegg), Reactome (www.reactome.org), BioCyc (biocyc.org), and Funrich V3 software. Ontology analysis of differentially expressed proteins was performed using DAVID and Funrich version 3. In most analyses, the definition of statistical significance was P<0.05.
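The enrichment calls reported by such tools typically rest on an over-representation test. A minimal sketch of one common choice, the one-sided hypergeometric test, is given below; note that DAVID itself uses a modified Fisher exact test (the EASE score), so this illustrates the idea rather than re-implementing any of the listed databases:

```python
from scipy.stats import hypergeom

def enrichment_p(hits: int, draws: int, category_size: int, background: int) -> float:
    """P(X >= hits) when `draws` proteins are sampled without replacement
    from a `background` proteome containing `category_size` category members."""
    return float(hypergeom.sf(hits - 1, background, category_size, draws))


# Illustrative numbers: 12 of the 133 differential proteins fall into a
# 400-member pathway drawn from a 20,000-protein background.
print(f"P = {enrichment_p(12, 133, 400, 20000):.3g}")
```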
PPI network and modular analysis
The PPI network was analyzed using the STRING website (http://string-db.org). Potential relationships between candidate differentially expressed proteins, and the degree of each node, were identified using Cytoscape software. Proteins in central nodes may be hub proteins with important regulatory functions. The most significant modules of the PPI network were analyzed further using the Cytoscape plugin MCODE.
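Hub selection by node degree is simple enough to show in a few lines. The sketch below uses NetworkX on a toy edge list standing in for a STRING export; the edges are invented for illustration and do not reproduce the paper's network:

```python
import networkx as nx

# Toy protein-protein interaction edges (illustrative, not from STRING).
edges = [("HSP90AA1", "HSPA8"), ("HSP90AA1", "AKT1"), ("HSP90AA1", "CASP9"),
         ("HSPA8", "BAD"), ("AKT1", "BAD"), ("VEGFA", "KDR"), ("KDR", "AKT1")]
g = nx.Graph(edges)

# Candidate hubs are the highest-degree nodes of the network.
hubs = sorted(g.degree, key=lambda node_deg: node_deg[1], reverse=True)
print(hubs[:3])  # e.g. [('HSP90AA1', 3), ('AKT1', 3), ('HSPA8', 2)]
```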
Quantitative RT-PCR
Total RNA was extracted from tumor tissue using Trizol reagent (Thermo Fisher Scientific, Waltham, MA, USA). Complementary DNA was generated using the PrimeScript reverse transcription kit (Takara, Tokyo, Japan) according to the manufacturer's instructions. Primers were designed according to the GenBank database (National Center for Biotechnology Information, Bethesda, MD, USA) and Primer 6.0 software (Primer-E, Auckland, New Zealand). Results were analyzed using the ABI StepOne Plus System (Applied Biosystems). Levels of target mRNAs were quantitated relative to levels of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA using the ΔCt method.
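The ΔCt step lends itself to a one-line formula. Here is a hedged sketch: the 2^-ΔCt form is the standard reading of "the ΔCt method", and the Ct values in the example are invented:

```python
def relative_expression(ct_target: float, ct_gapdh: float) -> float:
    """Relative mRNA level by the delta-Ct method: 2^-(Ct_target - Ct_GAPDH)."""
    return 2.0 ** (-(ct_target - ct_gapdh))


# Example: target gene Ct 24.1 vs GAPDH Ct 18.7 in the same sample.
print(f"{relative_expression(24.1, 18.7):.4f}")  # ~0.0237
```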
Western blotting
HCC tissues were lysed in RIPA buffer (Solarbio, Beijing, People's Republic of China) containing 1 mmol/L phenylmethanesulfonyl fluoride (Solarbio), and proteins were collected by centrifugation at 12,000 ×g for 10 minutes at 4°C. Protein concentration was determined using a bicinchoninic acid kit (Beyotime, Shanghai, People's Republic of China). Proteins were then separated by electrophoresis and transferred onto membranes. Membranes were blocked for 2 hours with 5% skim milk in PBS containing Tween 20, and then incubated with rabbit anti-HSPA8/HSP90AA1 mAb (1:1,000, Abcam) overnight at 4°C. After washing 3 times each for 5 minutes in PBS containing Tween 20, membranes were incubated with goat anti-rabbit horseradish peroxidase-conjugated antibody (1:5,000) for 2 hours at room temperature. Bands were observed using enhanced chemiluminescence (Beyotime). All experiments were performed 3 times.
Statistical analysis
Data were reported as mean±SD or median and range as appropriate. Intergroup differences in categorical data were assessed for significance using the χ2 test or Fisher's exact test (2-tailed), whereas differences in continuous data were assessed using the Mann-Whitney U test. Overall survival was analyzed using the Kaplan-Meier method and compared between groups using the log-rank test. Multivariate Cox proportional hazard modeling was performed to identify independent prognostic factors based on adjusted HRs and associated 95% CIs. All statistical analyses were performed using SPSS 20.0 (IBM Corporation, Armonk, NY, USA). For all tests, P<0.05 was considered statistically significant.
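For readers who want to reproduce this kind of survival comparison, the sketch below uses the lifelines package (a common Python alternative to SPSS) on invented disease-free-survival data; it is a schematic of the Kaplan-Meier/log-rank workflow described above, not the study's analysis:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented follow-up data: months to recurrence/censoring and event flags.
df = pd.DataFrame({
    "months":    [6, 14, 22, 30, 36, 9, 12, 18, 20, 28],
    "recurred":  [1,  0,  1,  0,  0, 1,  1,  1,  0,  1],
    "depressed": [0,  0,  0,  0,  0, 1,  1,  1,  1,  1],
})
a, b = df[df.depressed == 0], df[df.depressed == 1]

kmf = KaplanMeierFitter()
kmf.fit(a["months"], a["recurred"], label="no/mild depression")
print("median DFS (no/mild):", kmf.median_survival_time_)

res = logrank_test(a["months"], b["months"], a["recurred"], b["recurred"])
print(f"log-rank P = {res.p_value:.3f}")
```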
Identification of proteins differentially expressed in depression-related HCC
Twenty patients were assessed for the presence and severity of depression based on the HADS-D and SDS scales. In the end, 10 patients were assigned to a group with moderate/severe depression, whereas the other 10 were assigned to a control group with no or mild depression. The combination of iTRAQ, 2-dimensional liquid chromatography and tandem mass spectrometry identified 133 proteins differentially expressed between the 2 patient groups: 89 proteins were upregulated and 44 were downregulated in the presence of moderate/severe depression (Figure 1A, Table S1).
Ontology analysis of differentially expressed proteins
Proteins differentially expressed in the presence of moderate/severe depression could be divided into the three Gene Ontology categories: molecular function, biological process, and cellular component (Figure 1C and D, Tables S2 and S3).
Analysis of signaling pathway enrichment
Proteins upregulated in the presence of moderate/severe depression comprised proteins involved in signaling events mediated by the proteoglycan syndecan-1, VEGF, VEGFR1, VEGFR2, and α9β1 integrin. Proteins downregulated in the presence of moderate/severe depression are involved mainly in biological oxidations, D-glucuronate degradation I, platelet degranulation, fat metabolism, and glucose metabolism (Figure 1E, Tables S2 and S3).
These results suggest that HSP90AA1 and HSPA8 may be important hub proteins in depression-related HCC and may affect prognosis via the VEGF/VEGFR2-PI3K-AKT signaling pathway (Figures 1 and S1).
Study population
A total of 131 patients (112 males) with a median age of 49 years were enrolled in this prospective study, of whom 45 showed no depression, 38 showed mild depression, and 48 showed moderate/severe depression based on the HADS-D and SDS scales. Clinicopathological characteristics are presented in Table 1. The 3 patient groups did not differ significantly (all P>0.05) except that patients with moderate/severe depression had significantly larger tumors and higher aspartate transaminase levels than the other 2 groups (all P<0.05).
Expression of HSP90AA1 and HSPA8 in HCC tissues
Expression was assessed at the mRNA level by quantitative reverse transcription polymerase chain reaction (qRT-PCR) and at the protein level by Western blotting and immunohistochemistry. Mean HSPA8 mRNA level was significantly higher in patients with moderate/severe depression (n=48) than in patients with no depression (n=45, P<0.001) or mild depression (n=38, P<0.001; Figure 2A). Similarly, mean HSP90AA1 mRNA level was significantly higher in patients with moderate/severe depression (n=48) than in patients with no depression (n=45, P<0.01) or mild depression (n=38, P<0.001; Figure 2A). There was no significant difference in HSP90AA1 mRNA level between patients with no or mild depression (P=0.891).
Consistent with these results at the mRNA level, Western blotting showed significantly higher levels of HSP90AA1 and HSPA8 proteins in patients with moderate/severe depression than in the other 2 groups (Figures 2B and S2). Similarly, immunohistochemistry indicated higher levels of HSP90AA1 and HSPA8 in the presence of moderate/severe depression (Figure 2C). These findings suggest that HSPA8 and HSP90AA1 may be biomarkers of HCC related to moderate/severe depression.

Correlation of HSP90AA1/HSPA8 upregulation with upregulation of proteins in the VEGF/VEGFR2-PI3K-AKT pathway

Western blotting and qRT-PCR showed higher levels of HSP90AA1, HSPA8, VEGF, VEGFR2, PI3K, and AKT1 expression in patients with moderate/severe depression than in the other patient groups, as well as lower caspase 9 and BAD expression (Figure 3).
Discussion
Our results with a population from Guangxi, which has a high incidence of liver cancer, [16][17][18] suggest that a substantial proportion of patients suffer depression and that this may affect prognosis. Posthepatectomy disease-free survival (DFS) in our cohort was significantly longer, and DFS rates higher, among patients with no or mild depression than among those with moderate/severe depression, and poor DFS correlated with higher HSP90AA1/HSPA8 expression. Our comprehensive analysis of proteins differentially expressed between HCC patients with moderate/severe or mild/no depression indicates a correlation between high HSP90AA1/HSPA8 expression and activation of the VEGF/VEGFR2-PI3K-AKT pathway. It is tempting to speculate that this activation contributes to poor DFS by inducing endothelial cell proliferation and migration, which promotes angiogenesis and tumor growth, 19 as well as by inhibiting expression of BAD and caspase 9, which reduces tumor cell apoptosis. 20 In this way, the present study has generated testable hypotheses for future research that may improve our understanding of depression-related HCC and identify biomarkers that can detect it early. HSP90AA1 is a chaperone that is highly conserved among eukaryotes. It plays an important role in tumor cell proliferation, differentiation, survival, and movement as well as angiogenesis. It is an emerging target for tumor therapy. 21 HSP90AA1 is highly expressed in a variety of malignancies including breast, endometrial, ovarian, colon, lung, and prostate cancers. [22][23][24] During malignant tumor growth, it helps regulate mitochondrial apoptosis and signaling transduction triggered by the death receptor, stress signals, and growth factors. 25 HSP90AA1 can inhibit the initiation of apoptosis by preventing the binding of caspase 9 to apoptotic protease-activating factor 1. 20 In human leukemia cells, when HSP90AA1 is inhibited, tyrosine kinase, Akt protein kinase B, and serine-threonine kinase are degraded, so that cells avoid apoptosis and undergo differentiation. 26 HSP90AA1 also promotes tumor formation by stabilizing mutant p53 complexes and thereby inhibiting apoptosis. 27 The ATPase inhibitor kaloxin inhibited proteolytic degradation of caspase 9 by activity of HSP90AA1. 28 In NIH3T3 cells, HSP90AA1 overexpression can inhibit apoptosis induced by tumor necrosis factor-α. 29 HSPA8 is a stress protein, and its overexpression in depression-related HCC may reflect the chronic stress to which depression can expose patients. HSPA8 is involved in the regulation of tumor cell proliferation and apoptosis, which is closely related to the development, biological behavior, and prognosis of HCC and other tumors. 32 In early liver cancer, HSPA8 is expressed abnormally in the cytoplasm and nucleus, and its expression increases with disease progression, whereas its expression remains negligible or low in precancerous lesions and proliferative nodules. Thus, HSPA8 may serve as a sensitive indicator for the early detection of liver cancer; in fact, HSPA8 has been suggested as an early biomarker of liver cancer. 33 The upregulation of HSPA8 in depression-related HCC may be an anticancer response. Udono and Srivastava 30 have shown that HSPA8 in tumor cells binds to tumor-specific antigenic polypeptides to facilitate their recognition by the host immune system. HSPA8 can also induce maturation of antigen-presenting cells, promoting the transformation of Th cells into Th1 cells, and directly activating TCRγδ T cells and natural killer cells.
31 Activation of the signal transduction pathway of VEGF and its receptor VEGFR can stimulate the proliferation and migration of vascular endothelial cells and angiogenesis in HCC, and promote tumor growth and metastasis. 34 HCC cells must form new blood vessels to obtain sufficient oxygen and nutrition to support their rapid growth. 35 This rapid growth can lead to hypoxia within tumors, triggering expression of hypoxia-inducible factor and VEGF. VEGF production is also induced by matrix metalloproteinases, IFN-α and -γ, and NF-κB during HCC progression. 36 VEGF binds to its receptor VEGFR to activate a series of signal transduction pathways; the VEGF-AKT pathway can also promote endothelial cell proliferation and angiogenesis through mTORC2 and FOXO1. 19,[37][38][39] Limitations of our study include the fact that the data come from a small patient population at a single center, which increases the risk of systematic errors and bias. Follow-up was relatively short; as a result, few patients underwent surgical resection after tumor recurrence during the study. It would be important to assay recurrent tumors for HSP90AA1/HSPA8 expression, which we plan to do in future work.
Conclusion
Our findings suggest that a substantial proportion of HCC patients in this HCC-endemic region of Guangxi suffer depression, and that moderate/severe depression can significantly affect post hepatectomy prognosis. Moderate/severe depression in HCC may be associated with upregulation of HSP90AA1 and HSPA8, which in turn correlates with activation of the VEGF/VEGFR2 pathway. This activation may contribute to HCC recurrence. Future studies should explore the potential usefulness of HSP90AA1 and HSPA8 as biomarkers of depression-related HCC, which may facilitate early diagnosis and individualized prevention and treatment. HCC patients who have moderate/severe depression and high expression of HSP90AA1 and HSPA8 may benefit from more intensive psychological intervention and postsurgical care as well as more frequent follow-up. | 2018-06-12T01:45:13.096Z | 2018-05-22T00:00:00.000 | {
"year": 2018,
"sha1": "46b8e67b0aac8cbc3879b34e71a2d99abc9ebce5",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=42227",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c952dc71e4c72b0a21752e9bbba1cbc0f53941b4",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250357676 | pes2o/s2orc | v3-fos-license | Pediatric lung transplantation for COVID‐19: Unique clinical and psychosocial barriers
Abstract Background SARS‐CoV‐2 infection in the age group of 0–17 years contributes to approximately 22% of all laboratory‐confirmed SARS‐CoV‐2 infections. Fortunately, this age group has a lower death rate (0.5 per 100 000) that accounts for only 4% of the total deaths due to COVID‐19. Despite the low mortality rate in the pediatric population, children of minority groups represented 78% of the deaths highlighting the existing disparities in access to health care. Methods With the emergence of the more contagious COVID‐19 variants and the relatively slow pace of vaccination among the pediatric population, it is possible to see more cases of significant lung injury and potential for transplantation for the younger age group. Results To our knowledge, our patient is the youngest to have undergone lung transplantation for SARS‐CoV‐2. Conclusion The case presented unique challenges, particularly in relation to timing for listing and psychosocial support for parents who were his decision makers.
His clinical course was notable for hemoptysis and recurrent bilateral pneumothoraces, requiring chest tube placement. During the next 2 weeks, he was extubated and later re-intubated, but never required a tracheostomy as he was awake throughout, requiring minimal sedation. He was transferred to our facility 1 month later for further management and evaluation for lung transplantation. His ECMO cannula was changed to a left subclavian vein cannula (Medtronic 27 Fr) to enable ambulation, and he was extubated to high-flow nasal cannula.
CT thorax was notable for areas of ground-glass opacification, cystic dilatation, and pulmonary fibrosis (Figure 1). A second course of high-dose methylprednisolone failed to reverse the lung injury, and there was no indication of clinical improvement (refractory oxygen needs and inability to wean off ECMO support). Daily targeted physical therapy improved his conditioning, initially from bed to chair, and finally to being able to ambulate 50 ft consistently. Throughout this phase, the medical team maintained constant communication with his parents (who made decisions for him), who vacillated between the decision to assess for transplantation and waiting in anticipation of natural recovery. Even as the parents opted to wait for his recovery during the initial period, they were equally skeptical of a protracted stay on ECMO due to the risk for complications. Of particular concern for the parents were the modest median 5-year survival for lung transplant recipients and the significant lifestyle limitations he might experience at his young age. The medical team had to ensure that the parents had a comprehensive understanding of the process before consenting to transplantation. In particular, psychosocial factors including medication adherence were assessed meticulously given the young age of the patient. Following multiple rounds of discussions with the family, an expedited pretransplant evaluation was performed and no obvious contraindications were identified.
He was ultimately listed for bilateral lung transplantation 66 days after being placed on ECMO. During the period he was listed, his parents required constant reassurance. After 49 days on the list, he successfully underwent bilateral lung transplantation. The donor was an 18-year-old male, DBD (donation after brain death). SARS-CoV-2 was ruled out for the donor from both nasopharyngeal swab and bronchoalveolar lavage specimens. A conventional clamshell exposure was used for the surgery. Intraoperatively, we encountered several adhesions mostly limited to the hilum, but hemostasis was achieved without difficulty. The pneumonectomy and implantation proceeded in the standard fashion. The explants demonstrated evidence of lung injury with congestion and areas of cystic dilatation (Figure 2). The ECMO support was decannulated on the first postoperative day and he was extubated subsequently. The patient was discharged home 3 weeks after lung transplantation in stable condition.
| DISCUSSION
In children above the age of 11, pediatric lung transplantation is most commonly performed for cystic fibrosis, followed by idiopathic pulmonary hypertension. 3 In this age group, encountering patients with COVID-19 who require lung transplantation remains a rare possibility, due to the relatively indolent course of the illness in children. Pediatric lung transplantation presents distinct challenges compared with adults, owing to considerations of size matching and the evolving immune system in children, as well as the psychosocial factors associated with decision-making by parents. Despite the unique considerations of pediatric lung transplantation, the median survival is 5.7 years, which is comparable with adults. 3 In reality, the medical management prior to transplantation in our patient did not differ significantly from that of adults who require an expedited transplantation. As in other patients transplanted for COVID-19, we monitored the patient for signs of irreversibility such as lack of change in clinical variables (degree of oxygen support, arterial blood gases, ability to wean from extracorporeal support, and imaging signs of pulmonary fibrosis). In our patient, who was anthropometrically similar to an adult, finding a donor did not prove to be a significant challenge.
Intraoperative findings of dense hilar and pleural adhesions were similar to previously reported cases. 4 The histopathology findings of the explanted lungs were notable for diffuse interstitial expansion with proliferation of fibroblasts, myofibroblasts, and occasional lymphocytes with reactive type 2 pneumocyte hyperplasia, a focal non-specific pattern of lung injury. There was evidence of mature collagenous fibrosis. The pathology findings in the lung are identical to previously reported cases where diffuse interstitial fibrosis dominates the pattern with other non-specific findings. 5 A unique aspect to be considered in performing an expedited lung transplantation in the pediatric age group is the psychosocial impact of such a definitive therapy. There were perceptible challenges to decision making, specifically in relation to the decision and timing of listing. The medical team was diligent about repeatedly ensuring that the patient and his parents had a comprehensive understanding of his clinical condition and the intricacies of lung transplantation. Furthermore, the psychosocial impact of transplantation in adolescence, including risk of non-adherence and ensuing complications, was carefully evaluated at every phase. 6 The appropriate timing of listing was equally arduous given that the concern of performing a definitive procedure like lung transplantation in a young patient had to be counterbalanced against the risk of losing the window for lung transplantation (in anticipation of recovery) due to complications of being on prolonged extracorporeal support. The complications of prolonged extracorporeal support are not trivial and include hemorrhage, thrombosis, sepsis, multi-organ failure, limb ischemia, and limb loss. 7,8 Even though it was compelling to monitor and hope for natural recovery, the irreversible nature of his lung injury, in addition to young age, long-term ECMO dependence and deconditioning, necessitated lung transplantation as the curative treatment.

FIGURE 1: CT thorax obtained 90 days after the initial diagnosis, demonstrating bilateral diffuse ground-glass opacification with areas of cystic dilatation, atelectasis, and early fibrosis.
The parents required considerable reassurance over a period of time regarding the appropriateness and timing of the decision to list for transplantation.
In performing expedited lung transplantation for the pediat- | 2022-07-09T06:17:25.396Z | 2022-07-07T00:00:00.000 | {
"year": 2022,
"sha1": "8d0227716a03de9ab365391d1702202972e2361d",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "f45ac5c40da258aa23f89605fc480a1fda823172",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
37025845 | pes2o/s2orc | v3-fos-license | An overview of the regulatory planning system in New South Wales : identifying points of intervention for health impact assessment and consideration of health impacts
The experience of health impact assessment (HIA) in NSW has shown that it is possible to incorporate considerations of health impacts into decision-making concerning urban planning. In NSW, the Environmental Planning and Assessment Act 1979 is the regulatory framework governing urban planning. This legislative system provides opportunities for HIA and the consideration of health impacts as part of developing plans, policies and development proposals within NSW.

Patrick J. Harris (A,B), Ben F. Harris-Roxas (A) and Elizabeth Harris (A). (A) Centre for Health Equity Training, Research & Evaluation (CHETRE), University of New South Wales. (B) Corresponding author. Email: patrick.harris@unsw.edu.au
The benefits and promises of health impact assessment (HIA) for urban planning have been clearly articulated throughout this issue of the Bulletin and in the broader international literature. [5][6] Land-use planning and development in NSW is governed by the Environmental Planning and Assessment Act 1979. 7 The Act provides opportunities for the use and development of HIA as an urban planning tool and the consideration of health impacts within the planning and development system. Table 1 outlines the objectives of the Act which have direct links to the wider determinants of health. 3,8 However, this important statutory influence on health is largely unknown to health professionals.
This article provides an overview of the planning system, focussing on statutory plan-making and the development assessment process. The purpose is to provide health professionals with a summary of the complex regulatory framework which governs urban planning in NSW, suggesting points of intervention for HIA within this system. The discussion will assist health professionals to better communicate with their planning colleagues in proposing the effective use of HIA in all planning decisions – from broad plans to the determination of site-specific development applications. It encourages them to consider adverse health impacts as well as positively providing for the wellbeing of the community.
The two levels of the planning system: plan-making and development assessment
The Act covers two principal areas of interest for HIA, plan-making and development applications. Both provide valuable opportunities to encourage consideration of health impacts and the use of HIA.
Plan-making
Plan-making is covered by Part 3 of the Act through statutory and non-statutory environmental planning instruments (Figure 1), which link directly to health and well-being through their provisions. These instruments include: protecting the environment; controlling development; reserving land for public use; the provision, maintenance and retention of affordable housing; controlling advertising; and protecting and conserving ecological communities. An additional opportunity presents itself with respect to 'such other matters as are authorised or required to be included in the environmental planning instrument by this or any other Act'. 7 Accordingly, considerations under the NSW Public Health Act 1991 could be taken into account at this point.
Statutory environmental planning instruments
There are three statutory environmental planning instruments: (1) State Environmental Planning Policies, (2) Regional Environmental Plans and (3) Local Environmental Plans.
(1) The first of these, State Environmental Planning Policies, deals with issues of significance to the state and people of NSW and is overseen by the NSW State Government. There are over 70 of these, many of which have direct and indirect links to health. 9

(2) At the next level, Regional Environmental Plans are also overseen by the NSW State Government. These plans may also incorporate health-related issues, providing detailed regional land-use planning across issues such as urban growth, commercial centres, extractive industries, recreational needs, rural lands, and heritage and conservation.

Development Control Plans may also be related to other plans such as a 'place plan' to establish sites for community centres in a residential area which can build social capital. At the same time, Development Control Plans can link to Section 94 of the Act (Contribution towards provision or improvement of amenities or services), which requires developers to contribute additional facilities and services as a result of their development (eg the provision of public parkland).
Another important set of non-statutory planning documents created by the State Government, and replacing Regional Environmental Plans, are Regional Strategies. While not statutory instruments, they are policy documents providing ministerial direction which Local Environmental Plans are required to follow. 10 Therefore improved consideration of health impacts within Regional Strategies could have a wide-reaching influence on health and well-being (see Wells et al. in this issue).
Development assessments
Development assessments, the consideration of specific proposals for development, are covered by Part 3A and Part 4 of the Act. Part 3A is concerned with developments defined as 'Major Projects' by the Minister for Planning, and their assessment is overseen by State Government. 13,14 Part 4, which relates to other developments, is managed by local government (guidance is available from each local council). For all developments in both Part 3A and Part 4, there are three stages in the assessment process at which consideration of health, or use of HIA, can be inserted: (1) consultation before lodgement of an application; (2) the lodgement of an application; and (3) the assessment of the application.
(1) Consultation before lodgement of an application

A proponent will consult either the Department of Planning (Part 3A applications) or local government (Part 4). At this stage, there are opportunities for health to engage with both the Department of Planning and individual local councils to encourage the consideration of health at this early stage of the process. For Part 3A applications, the Department of Planning provides information that must be included in the submission of an environmental assessment. For Part 4, individual local councils provide guideline documents for lodgement requirements.
(2) Lodgement of an application

This stage, when the assessment is lodged (for both Part 3A and Part 4), provides further opportunities for health and wellbeing to inform the initial acceptance or rejection of the assessment by the Department of Planning or local council. For Part 3A, following consultation with relevant agencies (including the Department of Health), the Director General of the Department of Planning may request additional information or refuse to exhibit the environmental assessment. For Part 4, local councils may reject applications that are unclear in their intentions or provide insufficient information; or councils may request additional information.
(3) Assessment of the application

For Part 3A, the Director General will consult with relevant agencies before finalising an assessment report. This report is then submitted to the Minister for a determination; the project can be rejected or approved, with conditions considered appropriate.
For Part 4, local governments assess applications using criteria laid out in Section 79c of the Act. Section 79c contains many avenues of influence for health, through five considerations. The first consideration comprises environmental planning instruments (State Environmental Planning Policies, Regional Environmental Plans, Local Environmental Plans, Regional Strategies) and Development Control Plans. The second covers any potential impacts of the development, including environmental, social and economic impacts. The third involves the suitability of the site for the development (eg any natural characteristics, ease of access and availability of services). The fourth entails submissions made in accordance with the Act (eg from neighbours, or other bodies such as advocacy agencies). The fifth encompasses the public interest, including health and wellbeing.
Conclusion
This overview of the regulatory planning system in NSW provides an insight into the consideration of health and health impacts in planning. However, a word of warning is required. Despite the importance of regulation governing the work of those involved in planning, research in Australia and overseas has indicated that regulations alone are insufficient to fully address health impacts. 15,16 A more strategic and creative approach is required that combines regulation with proactive strategies by the health sector to foster collaboration and trust.
Table 1. Objectives of the NSW Environmental Planning and Assessment Act 1979 No. 203. The objectives of this Act are:
(a) to encourage:
(i) the proper management, development and conservation of natural and artificial resources, including agricultural land, natural areas, forests, minerals, water, cities, towns and villages for the purpose of promoting the social and economic welfare of the community and a better environment
(ii) the promotion and co-ordination of the orderly and economic use and development of land
(iii) the protection, provision and co-ordination of communication and utility services
(iv) the provision of land for public purposes
(v) the provision and co-ordination of community services and facilities
(vi) the protection of the environment, including the protection and conservation of native animals and plants, including threatened species, populations and ecological communities, and their habitats
(vii) ecologically sustainable development and
(viii) the provision and maintenance of affordable housing
(b) to promote the sharing of the responsibility for environmental planning between the different levels of government in the State and
Of specific importance to health are Development Control Plans. These plans support and supplement controls established in the Local Environmental Plans by way of more detailed planning and design guidelines that must be taken into account by a development. For example, a Local Environmental Plan will specify which uses are permitted through zoning (eg town houses in a residential zone). In turn the Local Environmental Plan can link to a Development Control Plan which guides the way this development is carried out and what should be in place when the development occurs (eg a cycleway to encourage physical activity).
Source: NSW Government. Environmental Planning and Assessment Act 1979 No. 203.
* Regional Environmental Plans are increasingly being superseded by Regional Strategies. Source: Dr Danny Wiggins, personal communication.
Figure 1. Part 3 of the NSW Environmental Planning and Assessment Act 1979. | 2017-06-16T13:08:23.457Z | 2007-10-18T00:00:00.000 | {
"year": 2007,
"sha1": "3c0b1cb7383c6ebd866f03afa91b7eeeec3b0429",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.phrp.com.au/wp-content/uploads/2014/10/NB07073A.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "3c0b1cb7383c6ebd866f03afa91b7eeeec3b0429",
"s2fieldsofstudy": [
"Political Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
215745128 | pes2o/s2orc | v3-fos-license | Self-similar blow-up profiles for a reaction-diffusion equation with strong weighted reaction
We study the self-similar blow-up profiles associated to the following second order reaction-diffusion equation with strong weighted reaction and unbounded weight: $$ \partial_tu=\partial_{xx}(u^m) + |x|^{\sigma}u^p, $$ posed for $x\in\real$, $t\geq0$, where $m>1$, $0<p<1$ and $\sigma>2(1-p)/(m-1)$. As a first outcome, we show that finite time blow-up solutions in self-similar form exist for $m+p>2$ and $\sigma$ in the considered range, a fact that is completely new: in the already studied reaction-diffusion equation without weights there is no finite time blow-up when $p<1$. We moreover prove that, if the condition $m+p>2$ is fulfilled, all the self-similar blow-up profiles are compactly supported and there exist \emph{two different interface behaviors} for solutions of the equation, corresponding to two different interface equations. We classify the self-similar blow-up profiles having both types of interfaces and show that in some cases \emph{global blow-up} occurs, and in some other cases finite time blow-up occurs \emph{only at space infinity}. We also show that there is no self-similar solution if $m+p<2$, while the critical range $m+p=2$ with $\sigma>2$ is postponed to a different work due to significant technical differences.
Introduction
The goal of this paper is to study and classify the self-similar blow-up profiles for the following reaction-diffusion equation with weighted reaction

u_t = (u^m)_{xx} + |x|^σ u^p, u = u(x, t), (x, t) ∈ R × (0, T), (1.1)

in the following range of exponents

m > 1, 0 < p < 1, σ > 2(1 − p)/(m − 1), (1.2)

where, as usual, the subscript notation in (1.1) indicates partial derivative with respect to the time or space variable. By finite time blow-up we understand the situation when a solution which was bounded before becomes unbounded at time T ∈ (0, ∞). More precisely, we say that a solution u to (1.1) blows up in finite time if there exists T ∈ (0, ∞) such that u(T) ∉ L^∞(R), but u(t) ∈ L^∞(R) for any t ∈ (0, T). The smallest time T < ∞ satisfying this property is known as the blow-up time of u. Here and in the sequel, we denote by u(t) the map x → u(x, t) for a fixed time t ∈ [0, T]. The present work is a part of a larger project developed by the authors having the aim to understand the blow-up behavior of solutions to reaction-diffusion equations with weighted reaction and unbounded weights. The reaction-diffusion equation without weight,

u_t = (u^m)_{xx} + u^p, (1.3)

has been considered since long, and its blow-up behavior in the range p > 1 is nowadays well understood, at least in one space dimension. Good surveys of the classical results on finite time blow-up for (1.3) with either m = 1, or m > 1 but p > 1, can be found in the books [27] and [28]. However, in the present work we consider exponents p ∈ (0, 1), a case in which it is known that finite time blow-up does not occur for bounded and compactly supported initial conditions. Eq. (1.3) for exponents p ∈ (0, 1) has been considered in a series of papers by de Pablo and Vázquez [21,22,23], where the rather complex but very interesting qualitative theory is developed. In this sequence of works it is shown that the Cauchy problem associated to Eq. (1.3) is generally ill-posed, as uniqueness of solutions is lacking. More precisely, local existence of solutions is established in suitable functional spaces, and it is moreover shown that all the solutions (if more than one) having the same initial condition can be ordered between a minimal solution and a maximal solution obtained as a limit process [22]. Concerning deeper qualitative properties of solutions such as uniqueness, finite or infinite speed of propagation, and the interface equation, it is shown in [21] that many of these properties depend strongly on the sign of m + p − 2; for example,

• if m + p − 2 ≥ 0, finite speed of propagation of compactly supported solutions occurs and, given a bounded initial condition u_0, it is shown that uniqueness of solutions holds true if and only if u_0(x) > 0 for any x ∈ R. Indeed, the authors of [21] prove that the maximal solution is always positive, while the minimal solution always has compact support if u_0 is itself compactly supported. Thus, at least two solutions are obtained, while for positive data u_0 uniqueness is established.
• if m + p − 2 < 0, infinite speed of propagation is established in [21]: for any data u_0 such that u_0 ≢ 0, local solutions become strictly positive, u(x, t) > 0 for any t > 0, and uniqueness then holds true along the lines of the previous case.
The non-uniqueness of solutions to Eq. (1.3) has been further investigated in [23], and a classification of all the possible solutions starting from a fixed initial condition is given. Moreover, the large time behavior of solutions is addressed in [18], and in all these works the self-similar solutions of the equation, of the form u(x, t) = t^{−α} f(xt^{−β}) with suitable exponents α, β and profiles f, play a significant role both as subjects for the comparison principle and as patterns that the solutions approach for large times [18]. This proves the importance of having a good knowledge of the self-similar solutions to Eq. (1.1): such solutions are expected to give the patterns of the whole dynamics of the equation. Moreover, they are often used also for comparison with other solutions, whose bounds are established in this way.
Concerning the reaction-diffusion equations with weighted reaction terms, a number of works are devoted to their qualitative theory and focus on the existence of the Fujita exponents (that is, exponents p* such that, for p < p*, any solution to Eq. (1.1) blows up in finite time) and, above this exponent, on giving further conditions on the initial data u_0 for finite time blow-up to take place, or on the contrary, smallness conditions insuring that the solutions to Eq. (1.1) are global. We recall here, in the semilinear case, the works by Pinsky [25,26] and, for the slow diffusion m > 1, p > m, the very general paper by Suzuki [29] establishing conditions on the tail of u_0(x) as |x| → ∞ for the blow-up to take place. Andreucci and Tedeev [1] establish the blow-up rates for m > 1, p > m and a suitable range of σ > 0, even in the more general case of the doubly nonlinear equation. More recent papers deal with more general cases of unbounded weights, either pure positive powers or pure negative powers (that are unbounded at the origin), or even studying finite time blow-up for equations with two weights, one on the reaction term and another one on ∂_t u, such as for example [30,16,17]. When the reaction is weighted with a pure power term such as |x|^σ, which vanishes at x = 0, another natural question is whether x = 0 (and more generally the zeros of the weight in the case of a general weight V(x)) can be a blow-up point. This has been studied in [5,6,7,8], focusing on the case of the homogeneous Dirichlet problem in a bounded domain.
Recently the authors started a long term project of understanding the dynamics of Eq. (1.1) in different cases of m, p and σ, with the aim of answering some finer questions concerning the finite time blow-up: classifying the blow-up sets, obtaining blow-up rates and, if possible, establishing the patterns of general solutions near the blow-up time. Taking into account the relevance of the self-similar solutions for these questions, we focused on classifying the possible blow-up patterns for Eq. (1.1), obtaining some interesting and completely new types of profiles (whose existence depends on the magnitude of σ) that do not exist in the non-weighted case. We also show in [12] that for p = 1 but σ > 0 finite time blow-up occurs, a fact that is not true with σ = 0 (that is, without a weight). In another recent work [14] we show that for the critical case p = m > 1 there exist multiple blow-up profiles if σ > 0 is sufficiently small, but all these profiles cease to exist when σ increases, a fact that has to be further understood (as in that case, the blow-up phenomenon can no longer follow a global in space self-similar pattern). Finally, in [12,13] a study of self-similar profiles is performed for 1 ≤ p < m, showing that the profiles and their blow-up sets strongly differ with respect to σ: finite time blow-up occurs globally for σ > 0 small, while the blow-up set of the profiles is shown to be only the space infinity when σ > 0 increases, due to the strength of the power |x|^σ when |x| is very large. The present work is aimed to continue this study, for the very interesting case when 0 < p < 1 but σ > 0 is large enough in order to force solutions to blow up in finite time. The general qualitative theory of a very similar reaction-diffusion equation to our Eq. (1.1) with p < 1 will be developed in a companion paper [11], where the results of the present work are strongly used.
Main results. As we have explained above, this paper deals with the self-similar blow-up profiles for Eq. (1.1), in the range of exponents (1.2). It is a well established fact that the self-similar solutions to (1.1) contain significant information on the qualitative properties of general solutions: indeed, on the one hand they are expected to give the "optimal" behavior in a priori estimates for general solutions and on the other hand they are the patterns that generic solutions approach asymptotically (either as t → ∞ in the case of global solutions, or as t → T if finite time blow-up occurs). Thus, knowing how the self-similar profiles behave is an information of utmost importance in the study of nonlinear diffusion and reaction-diffusion equations. In the case of Eq. (1.1) and exponents as in (1.2), our classification of self-similar solutions shows in particular that we are in a range where solutions are expected to blow up in finite time, as self-similar solutions do. To be more precise, we look for backward self-similar solutions in the form

u(x, t) = (T − t)^{−α} f(ξ), ξ = |x|(T − t)^{β}, (1.4)

where T ∈ (0, ∞) is the finite blow-up time and α > 0, β ∈ R exponents to be determined. Replacing the form (1.4) in (1.1), we readily find that

α = (σ + 2)/(σ(m − 1) − 2(1 − p)), β = (m − p)/(σ(m − 1) − 2(1 − p)), (1.5)

and the self-similar profile f is a solution to the non-autonomous differential equation

(f^m)''(ξ) − αf(ξ) + βξf'(ξ) + ξ^σ f(ξ)^p = 0, ξ ≥ 0. (1.6)

Let us notice that the condition σ > 2(1 − p)/(m − 1) insures that the self-similarity exponents α and β as in (1.5) are well-defined and positive, thus it is the lower bound for σ that will lead to finite time blow-up of the solutions. This is in strong contrast with, for example, the autonomous case σ = 0 where solutions exist and remain bounded globally in time, as shown for example in [21,22]. We perform in the sequel a deep study of the previous ODE. We thus define what we understand by a good profile below (similar to [12,13]).
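For the reader's convenience, the computation behind (1.5) can be sketched as follows (a routine verification of the ansatz (1.4), with $\xi=|x|(T-t)^{\beta}$):

$$ u_t=(T-t)^{-\alpha-1}\left[\alpha f(\xi)-\beta\xi f'(\xi)\right], \qquad (u^m)_{xx}=(T-t)^{-\alpha m+2\beta}(f^m)''(\xi), \qquad |x|^{\sigma}u^p=(T-t)^{-\alpha p-\sigma\beta}\xi^{\sigma}f(\xi)^p. $$

Equating the three time exponents, $-\alpha-1=-\alpha m+2\beta=-\alpha p-\sigma\beta$, produces a linear system whose unique solution is (1.5), while the remaining equality between the $\xi$-dependent factors is precisely the profile equation (1.6).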
Definition 1.1. We say that a solution f to the differential equation (1.6) is a good profile if it fulfills one of the following two properties related to its behavior at ξ = 0:

(P1) f(0) = a > 0, f'(0) = 0;

(P2) f(0) = 0, (f^m)'(0) = 0.

We furthermore say that a good profile f has an interface at a point ξ_0 ∈ (0, ∞) if f(ξ_0) = 0, (f^m)'(ξ_0) = 0 and f > 0 on a left-neighborhood of ξ_0.

This definition agrees with the well-known notion of an interface for a solution. Indeed, a solution u to Eq. (1.1) in the form (1.4) with a profile f having an interface at ξ_0 ∈ (0, ∞), has a time-moving interface at |x| = s(t) = (T − t)^{−β} ξ_0 for any t ∈ (0, T). This is why, also as in our previous works [12,13,14], we will be interested in the good profiles with interface according to Definition 1.1. The range 0 < p < 1 will introduce two big novelties with respect to the previously studied cases. First of all, the analysis will differ according to the sign of the expression m + p − 2. This is a feature of the range p < 1 which has been noticed also in the non-weighted case [21,22], and it is strongly related to the existence of the interfaces: indeed, it is shown in [21] that when m + p − 2 ≥ 0, finite speed of propagation holds true, thus good profiles with interface are expected, while for m + p − 2 < 0 the speed of propagation of the supports becomes infinite, thus the interfaces disappear and a solution (even if the initial condition u_0 is compactly supported) becomes positive immediately. The second important novelty in the range (1.2) with respect to the results in our previous works is the existence of two different interface behaviors. Indeed, even a formal calculation on Eq. (1.6) gives that a solution may develop an interface at a point ξ_0 > 0 in the following two forms:

• f(ξ) ∼ [C(ξ_0 − ξ)]^{1/(m−1)}, with C > 0, as ξ → ξ_0. This is the standard interface behavior inherited from the diffusion (the Barenblatt solutions to the standard porous medium equation have this type of contact at the interface) and will be called interface of Type I in the text.

• f(ξ) ∼ [C(ξ_0 − ξ)]^{1/(1−p)}, with C > 0, as ξ → ξ_0. This is a new interface behavior that is interesting and will be analyzed in the paper. We will call it for convenience interface of Type II.
We are now in a position to state, one by one, the main results of this paper. We begin with a general existence result of good profiles with both kinds of interfaces. Theorem 1.2 (Existence of good profiles with interface). For any m > 1, p ∈ (0, 1) such that m + p > 2 and σ > 2(1 − p)/(m − 1), there exists at least one good profile with interface of Type I and one good profile with interface of Type II to Eq. (1.6), in the sense of the previous definitions.
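Before turning to the rigorous statements, it may help to see the profile equation in action. The following Python sketch integrates (1.6) as a first-order system by a naive shooting from ξ ≈ 0; the parameter values, the initial datum a, and the regularization near f = 0 are all our own illustrative choices (none are taken from the paper), and the scheme is only a heuristic exploration, not a substitute for the phase-space analysis developed below:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative exponents in the range (1.2): m + p > 2, sigma > 2(1-p)/(m-1).
m, p, sigma = 2.0, 0.5, 3.0
denom = sigma * (m - 1) - 2 * (1 - p)
alpha, beta = (sigma + 2) / denom, (m - p) / denom   # formulas (1.5)

def rhs(xi, y):
    """Profile equation (1.6) written as a first-order system for (f, f')."""
    f, g = y
    f = max(f, 1e-12)  # crude regularization: (1.6) degenerates as f -> 0
    fpp = (alpha * f - beta * xi * g - xi**sigma * f**p
           - m * (m - 1) * f**(m - 2) * g**2) / (m * f**(m - 1))
    return [g, fpp]

def hit_zero(xi, y):
    return y[0] - 1e-10  # stop if the profile (numerically) vanishes
hit_zero.terminal = True

# Shooting with f(0) = a > 0, f'(0) = 0, i.e. property (P1); a is arbitrary.
sol = solve_ivp(rhs, (1e-6, 20.0), [0.5, 0.0], events=hit_zero,
                max_step=1e-3, rtol=1e-8)
print("stopped at xi =", sol.t[-1], "with f =", sol.y[0, -1])
```

Varying the shooting parameter a and observing whether the trajectory vanishes (an interface candidate) or grows is a numerical counterpart of the classification pursued analytically in the sequel.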
This shows that the patterns for blow-up to Eq. (1.1) may be different. We will discuss this further when we introduce the interface equation and show that the two types of interface are strongly different with respect to the interface equation. For now, we continue with our main results, particularizing them with respect to their behavior both at the starting point ξ = 0 and at their interface point (Type I or Type II). We first have a general result concerning profiles with interface of Type II. Theorem 1.3 (Good profiles with interface of Type II). For any m > 1, p ∈ (0, 1) such that m + p > 2 and σ > 2(1 − p)/(m − 1), there exist good profiles with interface of Type II and satisfying property (P2) in Definition 1.1. More precisely, these good profiles behave near the origin as described in (1.7), and the corresponding self-similar solutions blow up in finite time t = T only at space infinity.
This is an interesting result completing Theorem 1.2 for profiles with interface of Type II. Let us notice that in our previous papers [12,13] we obtained a similar result for equations presenting either algebraic (power-like) or exponential spatial decay as |x| → ∞, which were good self-similar solutions too but without interfaces. It seems that the spatial decay as |x| → ∞ present in the ranges p ≥ 1 converts, for p < 1, into the behavior of interface of Type II, due to the change of sign of 1 − p. However, the co-existence of different interfaces is a very noticeable phenomenon in our case.
In the same line as in our previous works, a strong difference in the graph and behavior of profiles holds true with respect to the magnitude of σ. Indeed, when σ is sufficiently close to its lower limit 2(1 − p)/(m − 1), we find that all the profiles f(ξ) starting with f(0) = 0 form interfaces of Type II. More precisely: Theorem 1.4 (Good profiles with interface for σ small). For any m > 1, p ∈ (0, 1) such that m + p > 2, we have the following results: (a) There exists σ_0 ∈ (2(1 − p)/(m − 1), ∞) such that for any σ ∈ (2(1 − p)/(m − 1), σ_0), all the good profiles satisfying property (P2) in Definition 1.1 present an interface of Type II. Moreover, for any σ ∈ (2(1 − p)/(m − 1), σ_0) there exist good profiles with interface of Type I and satisfying property (P1) in Definition 1.1. The corresponding self-similar solutions to the latter profiles blow up globally (that is, at any point x ∈ R) in finite time t = T.
For σ sufficiently large things are different. There will no longer be good profiles with interface presenting the behavior (1.8) at ξ = 0. But we can characterize more precisely the profiles with interface of Type I. Theorem 1.5 (Good profiles with interface for σ large). For any m > 1, p ∈ (0, 1) such that m + p > 2, there exists σ_1 > 2(1 − p)/(m − 1) sufficiently large such that for any σ ∈ (σ_1, ∞), there exist blow-up profiles satisfying property (P2) in Definition 1.1, presenting the behavior (1.7) as ξ → 0 and having an interface of Type I at some point ξ = ξ_0 ∈ (0, ∞). The corresponding self-similar solutions to these profiles blow up in finite time t = T only at space infinity.
Remark.
A first interesting point to be emphasized is that finite time blow-up occurs also for m > 1 and p < 1. This is due to the strong influence of the weight |x|^σ, since it is not true in the non-weighted case (that is, when σ = 0). Moreover, the blow-up set strongly differs with σ: for σ relatively small (close to the lower limit 2(1 − p)/(m − 1)), both self-similar solutions presenting global blow-up and self-similar solutions presenting blow-up only at space infinity (while they remain bounded at any fixed |x|) exist, while for σ very large it is likely that all self-similar solutions blow up at space infinity. Such a difference with respect to the blow-up behavior was deduced also in our previous papers [12,13], dealing with the exponents p = 1 and 1 < p < m, respectively, where we rigorously define the blow-up set and describe in greater detail the blow-up at space infinity.
Finally, let us notice that all the previous theorems hold true under the hypothesis that m + p > 2. Due to significant differences in the techniques of the proofs and in some of the results, we postpone the very critical case m + p = 2 to a companion paper [15]. However, the subcritical case m + p < 2 is very simple and striking: Theorem 1.6 (Non-existence for m + p < 2). Let m > 1, p ∈ (0, 1) such that m + p < 2 and let σ > 2(1 − p)/(m − 1). Then there exist no good blow-up profiles with or without interface.
The fact that blow-up profiles with interface no longer exist in the case m + p < 2 was expected, since it is known [21,22] that solutions propagate with infinite speed in this case. But the general non-existence result is very striking, as there is no different behavior at all (such as a tail as |x| → ∞, for example) to replace the interface behavior in this case. The result in Theorem 1.6 is strongly related to a more general non-existence result for a similar equation that will be given in [11], but we nevertheless give a full proof of the non-existence of self-similar profiles using the techniques of this paper.
The interface equation. Differences between interfaces of Type I and Type II. We discuss here, at a formal level, the interface equation satisfied by the interfaces of Type I and of Type II when m > 1, p ∈ (0, 1) and m + p > 2, in order to exhibit the different behavior of the two interfaces. Let us recall that for a compactly supported and radially symmetric solution u to Eq. (1.1), the interface (or free boundary) of u at time t > 0 is defined as the supremum of its support, s(t) := sup{x > 0 : u(x, t) > 0}. For solutions presenting an interface of Type I, we pass as usual to the equation for the pressure variable and obtain the equation (1.9) satisfied by v (similar to [21,22]). Starting from the obvious equality v(s(t), t) = 0 and formally differentiating with respect to t, we readily get that s′(t) = −v t (s(t), t)/v x (s(t), t). Replacing now v t by the right-hand side of Eq. (1.9) and working on self-similar profiles, it is easy to check that the terms involving v xx and the last one vanish at s(t) (since m + p − 2 > 0) and we are left with the standard interface equation (1.10), similar to the one fulfilled, for example, by the solutions to the standard porous medium equation; thus the reaction term involving σ plays no role here. On the other hand, for an interface of Type II, we follow an idea used for traveling wave solutions stemming from Herrero and Vázquez [9] (see also [19,20]) and introduce a change of function w specific to the range p < 1, leading to the equation (1.11) solved by w. By finding again that s′(t) = −w t (s(t), t)/w x (s(t), t) and replacing w t with the right-hand side of (1.11), we readily find that on the self-similar solutions the first two terms cancel at the interface point (s(t), t) and we are left with the (free) last term. We thus obtain the interface equation (1.12) for interfaces of Type II, which recalls the one obtained for the interface of the self-similar solutions to the reaction-convection-diffusion equation in [20], but in our case it also depends strongly on σ.
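To fix ideas, here is a minimal sketch of the formal computation behind both interface equations. It assumes that Eq. (1.1) has the form ∂ t u = (u m ) xx + |x| σ u p in one space dimension, that v denotes the standard porous medium pressure and w the natural variable for p < 1; these identifications, as well as the exact constants in (1.9)–(1.12), are assumptions, since the corresponding displays are not reproduced in the extracted text.

$$ v = \frac{m}{m-1}\,u^{m-1}, \qquad w = \frac{u^{1-p}}{1-p}, \qquad v(s(t),t) = w(s(t),t) = 0. $$

Differentiating the identity at the free boundary,

$$ \frac{d}{dt}\,v(s(t),t) = v_x(s(t),t)\,s'(t) + v_t(s(t),t) = 0 \quad\Longrightarrow\quad s'(t) = -\,\frac{v_t(s(t),t)}{v_x(s(t),t)}, $$

and analogously for w. For Type I, the reaction enters v_t through a term of order |x|^σ u^{m+p−2}, which vanishes at the free boundary when m + p > 2, leaving the porous-medium law s′(t) = −v_x(s(t), t). For Type II, instead,

$$ w_t = u^{-p}\,u_t = u^{-p}\,(u^m)_{xx} + |x|^{\sigma}, $$

so the reaction contributes the free term |x|^σ, which does not vanish at the boundary; the same chain rule then produces an interface speed depending explicitly on s(t)^σ, in agreement with the σ-dependence of (1.12).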
We notice that equations (1.10) and (1.12) are very different, which shows that the Type II interface behavior is novel and qualitatively interesting, while the Type I behavior inherits the properties of the one with σ = 0.
2 The phase space when m + p > 2. Proof of Theorem 1.3 We will consider from now on, unless the contrary is specified, that m + p > 2. The main tool in the proofs of the main results of the present work is a thorough analysis of a phase space associated to an autonomous dynamical system which is equivalent to the nonautonomous equation of profiles (1.6). Thus we transform Eq. (1.6) into an autonomous quadratic dynamical system by a change of variables (2.1), where we recall that α (and also β) is defined in (1.5) and the new independent variable η = η(ξ) is defined through a differential equation. The differential equation (1.6) then transforms into a system that we label (2.2). Let us remark at this point that the system (2.2) differs from the phase space system analyzed in both [12,13]. Indeed, the variable Z(η) = ξ σ f p−1 (ξ) used in the above quoted papers is no longer useful for p < 1, as it sends the interface behaviors to infinity. We have to work instead with the new system (2.2), which is very well adapted to the case p < 1 and m + p > 2. However, it has a further technical difficulty stemming from the fact that the coefficient σ − 2 in the third equation might sometimes be negative. Notice for now that the planes {X = 0} and {Z = 0} are invariant for the system and that X ≥ 0, Z ≥ 0, only Y being allowed to change sign. We easily find that for m + p > 2 there are three critical points in the finite plane, P 0 , P 1 and P 2 . We devote the present section to the local analysis near the strongly non-hyperbolic point P 0 , which is rather technical and quite complex. The local analysis near the points P 1 , P 2 and of the critical points at infinity is left for the next section.
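Since the displays (1.5) and (2.1)–(2.2) are referenced but not reproduced above, it may help to recall how such self-similar exponents typically arise. Assuming, as in [12,13], the standard blow-up ansatz (this is an assumption about the omitted display (1.5)):

$$ u(x,t) = (T-t)^{-\alpha} f(\xi), \qquad \xi = |x|\,(T-t)^{\beta}, $$

matching the powers of $(T-t)$ in $\partial_t u = \Delta u^m + |x|^{\sigma}u^p$ forces

$$ \alpha(m-1) - 2\beta = 1, \qquad \alpha(p-1) + \sigma\beta = 1, $$

whence

$$ \alpha = \frac{\sigma+2}{\sigma(m-1)+2(p-1)}, \qquad \beta = \frac{m-p}{\sigma(m-1)+2(p-1)}. $$

In particular, the standing assumption $\sigma > 2(1-p)/(m-1)$ is precisely the positivity of the common denominator $\sigma(m-1)+2(p-1)$ (which also appears explicitly later in the paper), so that $\alpha > 0$ and, since $m > p$, also $\beta > 0$.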
Local analysis of the point P 0 . The linearization of the system (2.2) near P 0 has a matrix with a one-dimensional stable manifold and a two-dimensional center manifold. In order to study the center manifold and the flow on it, we perform a change of variable and, after straightforward (but rather tedious) calculations, obtain the system in the new variables (X, T, Z), labeled (2.3). We can now apply the local center manifold theorem [24, Theorem 1, Section 2.12] to find (rather easily, by discarding the third-order terms in the equation for T in the system (2.3)) the equation of the center manifold; the flow on the center manifold is given by an almost homogeneous quadratic system, labeled (2.4), which is easily obtained by keeping only the quadratic terms in the equations fulfilled by X, Z in the system (2.3). In order to study the flow given by this system in a neighborhood of the point (X, Z) = (0, 0), we need to use the rather complicated but complete classification of the (2,2)-homogeneous dynamical systems established in the renowned paper by Date [3]. This study will lead us directly to the proof of Theorem 1.3.
Proof of Theorem 1.3. As explained above, the proof consists essentially in the study of the homogeneous quadratic part of the system (2.4) above, using the theory from [3]. Putting aside the common factor 1/β, which can be absorbed by a change in the independent variable, we have to study this quadratic system. We will use from now on the same notation as in [3], where the reader can find more details on the classification that follows below. The idea is to compute several important invariants associated to the system (2.4) and study their sign in order to get the phase portrait near the origin. Writing the system (2.4) in homogeneous form, its tensor of coefficients P k λ,µ is decomposed, according to [3], into its vector part p λ and its tensor part Q k λ,µ using formulas involving the Kronecker symbol δ i,j (equal to 1 if i = j and zero otherwise). In our case, it is easy to compute the vector part and the tensor part explicitly. With the help of these values, we further compute the Hessian h k,l of the fundamental cubic form associated to the system, with general formulas involving the antisymmetric symbol ε 11 = ε 22 = 0, ε 12 = −ε 21 = −1. With all these numbers, we are ready to compute the fundamental scalar invariants of degree 2, called D, H and F in [3], which are the basis of the classification of all the phase portraits. With these scalar invariants we can finally introduce the general set of invariants introduced by Date and Iri in [4]; in particular, it is easy to calculate K 2 and K 3 . According to the general classification in [3, p. 327], we find that we are in the case D < 0, K 2 < 0 and K 3 < 0, which corresponds to the phase portrait no. 8 in [3, Figure 8, p. 329]. We thus infer that the local behavior near the origin in the system (2.4) presents an elliptic sector in a sufficiently small neighborhood of the origin, as shown in Figure 1; hence there exist orbits in the phase space which go out of and then enter the critical point P 0 along the center manifold. Coming back to variables (X, T, Z), the profiles contained in these connections have T ∼ 0 or equivalently, undoing the change of variable, the limit behavior (2.7). We discard at this point the possibility that the limit behavior in (2.7) is taken as ξ → ∞. Indeed, assume for contradiction that (2.7) holds true as ξ → ∞. Since X(ξ) → 0, it follows that f (ξ) < Kξ 2/(m−1) for any K > 0 (for ξ large enough), whence ξ σ f (ξ) p−1 → ∞ as ξ → ∞ (recall that p < 1 and σ > 2(1 − p)/(m − 1)). On the other hand, we can just integrate the quadratic part of the system (2.4). More precisely, we obtain a homogeneous differential equation that can be explicitly integrated to find the general solution or, equivalently, a relation in terms of profiles, which can be written also as (2.8). But we just noticed above that ξ σ f (ξ) p−1 → ∞ as ξ → ∞, and since (m − p)/(1 − p) > 1, by keeping the dominating orders in ξ in (2.8) we reach a contradiction, since (σ(m − 1) + 2(p − 1))/(1 − p) > 0. Thus, we cannot enter P 0 as ξ → ∞ in terms of profiles. The remaining possibilities are that (2.7) holds true either as ξ → 0 or as ξ → ξ 0 ∈ (0, ∞). The general solution f 0 to the equation (2.7) with exact equality to zero can be computed explicitly; on the one hand, the differential equation with solution f 0 is an approximation of the true equation (2.7).
On the other hand, the orbits cannot go out of a critical point as ξ → ξ 0 = (C/(1 − p)) 1/σ . Assume for contradiction that this were the case. It would then follow that f ′(ξ 0 ) > 0 and f > 0 in a right-neighborhood of ξ 0 ; since p − 1 < 0 and f (ξ 0 ) = 0, we thus get a contradiction with (2.7). It thus follows that the profiles contained in the orbits going out of P 0 behave as in (1.7) as ξ → 0. By similar arguments, and according to the form of f 0 , the profiles contained in orbits entering P 0 have an interface of Type II at some point ξ 0 ∈ (0, ∞). Since all these hold true for any σ > 2(1 − p)/(m − 1), any profile contained in such an elliptic orbit satisfies Theorem 1.3.
3 Local analysis of the critical points
In this section we complete the local analysis of the critical points of the system (2.2) both in the finite space and at infinity. This part is rather similar to [13, Section 2]. Let us recall that the critical point P 0 was studied in Section 2. We start with the remaining finite critical points P 1 and P 2 .
Lemma 3.1 (Local analysis of the point P 1 ). The system (2.2) in a neighborhood of the critical point P 1 has a one-dimensional unstable manifold and a two-dimensional stable manifold. The orbits entering P 1 on the stable manifold contain profiles with the local behavior (3.1). Thus, this point gathers the Type I interface behavior.
Proof. The linearization of the system (2.2) near this critical point has an explicit matrix, with three eigenvalues and respective (non-normalized) eigenvectors. We then have a two-dimensional stable manifold with orbits entering the point P 1 in the phase space and (as is easy to check) a unique orbit going out of P 1 along the Y -axis.
We look for the profiles contained in the orbits entering P 1 on the two-dimensional stable manifold. We infer from the change of variables (2.1) that the limit behavior (3.2) holds on the orbits entering P 1 . We show first that (3.2) holds true as ξ → ξ 0 ∈ (0, ∞). Indeed, assume first for contradiction that (3.2) holds true as ξ → ∞. Then we reach a contradiction with the fact that X(ξ) → 0 on an orbit entering P 1 . Assume now for contradiction that (3.2) holds true as ξ → 0. A similar argument based on the L'Hospital rule leads to a similar contradiction as before, after noticing that X(ξ) → 0 implies f m−1 (ξ) → 0 as ξ → 0, and thus the L'Hospital rule can be applied to the function f m−1 (ξ)/ξ 2 , which gives X(ξ) modulo a constant. We thus conclude that (3.2) holds true as ξ → ξ 0 for some ξ 0 ∈ (0, ∞) and readily get the behavior described in (3.1) by direct integration.
We complete the analysis of the critical points in the plane by performing the local analysis near P 2 .
Lemma 3.2 (Local analysis of the point P 2 ). The system (2.2) in a neighborhood of the critical point P 2 has a two-dimensional stable manifold and a one-dimensional unstable manifold. The stable manifold is contained in the invariant plane {Z = 0}. There exists a unique orbit going out of P 2 , containing profiles with the local behavior (3.3). Proof. The linearization of the system (2.2) near the critical point P 2 has an explicit matrix with eigenvalues λ 1 , λ 2 and λ 3 . It is easy to check (by computing the eigenvectors corresponding to λ 1 and λ 2 and noticing that both have the Z-component zero) that the two-dimensional stable manifold is contained in the invariant plane {Z = 0}. Similarly as in [12, Lemma 2.3], we conclude that there exists a unique orbit going out of P 2 towards the interior of the phase space, tangent to the eigenvector e 3 corresponding to λ 3 . The local behavior (3.3) of the profiles contained in the orbit going out of P 2 is obtained from the limit behavior (3.4), which obviously cannot hold true as ξ → ξ 0 ∈ (0, ∞) (in such a case f (ξ) would start from a positive constant), while a contradiction based on the L'Hospital rule, similar to the ones in the proof of Lemma 3.1, discards the possibility that ξ → ∞. Thus (3.4) holds true necessarily as ξ → 0, and this is equivalent to the claimed local behavior (3.3).
Local analysis of the critical points at infinity. Together with the finite critical points already analyzed, in order to understand the global picture of the phase space associated to the system (2.2), we need to analyze its critical points at space infinity. To this end, we pass to the Poincaré hypersphere according to the theory in [24, Section 3.10]. We thus introduce new variables (X, Y , Z, W ) and derive from [24, Theorem 4, Section 3.10] that the critical points at space infinity lie on the equator of the Poincaré hypersphere, hence at points (X, Y , Z, 0) where X 2 + Y 2 + Z 2 = 1 and the following system is fulfilled: X Q 2 (X, Y , Z) − Y P 2 (X, Y , Z) = 0, X R 2 (X, Y , Z) − Z P 2 (X, Y , Z) = 0, Y R 2 (X, Y , Z) − Z Q 2 (X, Y , Z) = 0, where P 2 , Q 2 and R 2 are the homogeneous second degree parts of the terms in the right hand side of the system (2.2). Taking into account that we are considering only points with coordinates X ≥ 0 and Z ≥ 0, we find five critical points at infinity (on the Poincaré hypersphere), denoted by Q 1 , . . . , Q 5 , which are the same ones as for the phase space systems in [12,13]. We perform next the local analysis near each one of them. This analysis follows closely the one in [12], thus we will sometimes skip some details. Lemma 3.3 (Local analysis of the point Q 1 ). The critical point Q 1 is an unstable node, and the orbits going out of it contain profiles with f (0) = a > 0. Proof. We apply part (a) of [24, Theorem 5, Section 3.10] to infer that the flow in a neighborhood of Q 1 is topologically equivalent to the flow in a neighborhood of the origin (y, z, w) = (0, 0, 0) for a system labeled (3.7), where the minus sign has been chosen in the system (3.7) in order to match the direction of the flow. We deduce it from the first equation of the original system (2.2), which gives Ẋ < 0 in a neighborhood of Q 1 , taking into account that |X/Y | → +∞ near this point. Thus Q 1 is an unstable node, since the linearization of the system (3.7) near the origin has eigenvalues 1, 2 and σ. The local behavior of the profiles contained in the orbits going out of Q 1 is given by the relation dz/dw ∼ (σ/2)(z/w), whence by integration z ∼ Cw σ/2 . Coming back to the original variables and recalling that the projection onto the Poincaré hypersphere has been done by dividing by the X variable, we infer that Z/X ∼ CX −σ/2 , C > 0, which leads easily to f (ξ) ∼ a for some a > 0. Moreover, the latter holds true as ξ → 0, since at Q 1 we have X → ∞. We thus get f (0) = a > 0, with no further condition on the derivative f ′(0). Lemma 3.4 (Local analysis of the points Q 2 and Q 3 ). The critical points Q 2,3 = (0, ±1, 0, 0) on the Poincaré hypersphere are an unstable node, respectively a stable node. The orbits going out of Q 2 to the finite part of the phase space, and the orbits entering the point Q 3 coming from the finite part of the phase space, contain profiles f (ξ) presenting a change of sign at some finite point ξ 0 ∈ (0, ∞). Proof. Part (b) of [24, Theorem 5, Section 3.10] gives that the flow of the system (2.2) near the points Q 2 and Q 3 is topologically equivalent to the flow near the origin (x, z, w) = (0, 0, 0) of a system labeled (3.8), where the minus sign works for one of the points and the plus sign for the other point.
From the second equation of the original system (2.2) we infer that Ẏ < 0 in a neighborhood of both points Q 2 and Q 3 (since Y → ±∞ and the quadratic term in Y dominates over the other variables in a neighborhood of these points), which gives the direction of the flow from right to left and proves that the minus sign in the system (3.8) corresponds to Q 2 and the plus sign to Q 3 . Thus Q 2 is an unstable node and Q 3 is a stable node. In order to establish the local behavior, we notice that in a neighborhood of the origin of the system (3.8) we have dx/dw ∼ mx/w, whence by integration x ∼ Cw m or, in terms of the initial variables, X ∼ CY 1−m . Using the formulas for X, Y in (2.1) we obtain (f m )′(ξ) ∼ Cξ (m+1)/(m−1) , and the desired sign-changing behavior at some finite point ξ = ξ 0 ∈ (0, ∞) follows from a very similar discussion as in [12, Lemma 2.7] or [14, Lemma 2.4]. We omit the details.
We next analyze the critical point Q 5 first and leave Q 4 for the end. This is motivated by the fact that the local analysis near Q 5 follows the same techniques as the ones used in the previous lemmas.
Lemma 3.5. The critical point Q 5 on the Poincaré hypersphere has a two-dimensional unstable manifold and a one-dimensional stable manifold. The orbits going out from this point into the finite region of the phase space contain profiles satisfying f (ξ) ∼ Kξ 1/m , K > 0, in a right-neighborhood of ξ = 0.
Proof. We infer again from [24, Section 3.10] that the flow in a neighborhood of the point Q 5 is topologically equivalent to the flow of the already considered system (3.7), but in a neighborhood of the critical point (y, z, w) = (1/m, 0, 0). Moreover, when approaching Q 5 the direction of the flow is the same as before, thus we have to choose again the minus sign in the system (3.7). The linearization of (3.7) near Q 5 (including the change of sign given by the minus sign in front of ẏ, ż, ẇ) has a matrix M (Q 5 ), from which we find a two-dimensional unstable manifold and a one-dimensional stable manifold. Analyzing the eigenvectors of the matrix M (Q 5 ), we find that the orbits going out from Q 5 on the unstable manifold go to the finite part of the phase space, while the orbits entering Q 5 on the stable manifold remain on the boundary of the hypersphere. In order to study the profiles contained in the orbits going out of Q 5 , we deduce from the relation between the variables near this point, after direct integration, that f (ξ) ∼ Kξ 1/m as ξ → 0, for K > 0, as desired.
We remain with the point Q 4 , which brings nothing new for our analysis. We indeed have Lemma 3.6. There are no profiles contained in the orbits connecting to the critical point Q 4 .
Proof. It is easy to discard, with the aid of (1.6), the possibility that f (ξ) → ∞ as ξ → ∞. The possibility that f (ξ) → L ∈ (0, ∞) as ξ → ∞ is also discarded as follows: standard calculus results provide a sequence ξ n → ∞ along which the derivatives of f become negligible. We then find, by evaluating (1.6) at ξ = ξ n , that lim n→∞ (ξ σ n f (ξ n ) p − αf (ξ n )) = 0, which is again in contradiction with (3.10). We thus remain with the case f (ξ) → 0 as ξ → ∞, with f strictly decreasing on some interval (R, ∞), R > 0. Then, if 2(1 − p)/(m − 1) < σ ≤ 2, we already get a contradiction with (3.9). If σ > 2, we can write (1.6) in a form labeled (3.11) and infer from (3.9) that there exists R 0 > R sufficiently large such that suitable differential inequalities hold for any ξ > R 0 . Since there exists at least a subsequence (ξ n ) n≥1 such that ξ n → ∞ and (f m )′(ξ n ) > 0 for any positive integer n, the above inequalities contradict (3.11) evaluated at ξ = ξ n for n large enough, ending the proof.
We close this section with a local uniqueness result for profiles with interface of Type I. This will allow us to employ the backward shooting method to prove the existence of good profiles with interface of Type I in the next section. Proposition 3.7 (Local uniqueness of the Type I interface behavior). For any ξ 0 ∈ (0, ∞) there exists exactly one profile presenting an interface of Type I at ξ = ξ 0 . Proof. For this proof it is not easy to work with our system (2.2), since all the profiles with interface of Type I are gathered in the critical point P 1 . We thus use a different change of variable, which identifies the profiles in terms of their interface point, thus obtaining a new system adapted to this purpose.
4 Existence of good profiles with interface of Type I
In this section, we employ the local analysis performed in the previous sections to show that for any σ > 2(1 − p)/(m − 1) there exists at least one good profile with interface of Type I. Since the same fact for profiles with interface of Type II has been proved in Section 2, this completes the proof of Theorem 1.2. The strategy used to prove this existence result is the backward shooting method, that is, shooting from the interface point ξ = ξ 0 ∈ (0, ∞) and tracing backward the unique profile with interface of Type I exactly at ξ = ξ 0 , according to Proposition 3.7. The idea is to show that profiles with an interface at ξ 0 > 0 very small are strictly decreasing, while profiles with an interface at ξ 0 large have a change of sign at some point ξ 1 ∈ (0, ξ 0 ). However, for technical reasons we cannot perform the backward shooting in the phase space associated to the system (2.2), and we introduce a new change of variables (4.1) by setting Z = U V , X = U (m−1)/(m+p−2) . In the variables (U, Y, V ) we obtain an autonomous dynamical system, labeled (4.2), and we notice that, despite the fact that the system (4.2) is no longer quadratic, it has the very important property that its third equation is very simple and the component V is non-decreasing along the trajectories in the phase space. We moreover notice that V is a power of ξ, and the behavior of interface of Type I corresponds now to orbits entering the critical points P (v 0 ) = (0, −β/α, v 0 ) with v 0 ≥ 0. The uniqueness proved in Proposition 3.7 can be easily transferred here, and we thus get that for every v 0 > 0 there exists a unique orbit entering the critical point P (v 0 ) coming from the interior of the phase space and containing the unique profile with interface at the point ξ = ξ 0 ∈ (0, ∞), where ξ 0 and v 0 are related by the formula (4.3). We are thus ready to start our backward shooting method, which is formalized in the two propositions below.
Proposition 4.1. In the previous notation, the orbits entering points P (v 0 ) with v 0 > 0 sufficiently small contain profiles f (ξ) that are decreasing and have a negative slope at ξ = 0. Proof. Since any profile with interface is decreasing in a neighborhood of the interface point, and recalling that at any point P (v 0 ) we have Y = −β/α, a non-decreasing profile must first cross the plane {Y = 0} in the phase space associated to the system (4.2) and then also the plane {Y = −β/2α} before reaching any of the critical points P (v 0 ). The direction of the flow on the plane {Y = −β/2α} is given by the sign of an expression F (U, V ). The orbits crossing this plane have to do it in the region where F (U, V ) < 0, which is equivalent to the condition V > h(U ) in (4.4). One can readily optimize in U the expression in (4.4) to find that h(U ) has a positive minimum v̄ 0 = h(U 0 ), attained at some explicit point U 0 . Since the variable V is monotone increasing along the trajectories, it follows that an orbit crossing the plane {Y = −β/2α} can reach critical points P (v 0 ) only for v 0 > v̄ 0 = h(U 0 ). Thus the profiles contained in the orbits entering the points P (v 0 ) with v 0 ∈ (0, v̄ 0 ) are decreasing, as claimed. Remark. Let f be such a decreasing profile (as obtained in Proposition 4.1 for v 0 small). Then the self-similar formula gives a one-parameter family of supersolutions to Eq. (1.1) for any such fixed profile f (ξ). We stress here that these supersolutions will be strongly used for comparison in the forthcoming paper [11] in order to prove the local existence and finite speed of propagation of general solutions to a similar equation to (1.1) with compactly supported initial conditions in the range m + p > 2.
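The omitted self-similar formula is presumably the blow-up ansatz itself, with the blow-up time playing the role of the free parameter; under that assumption (consistent with the exponents recalled earlier, but not confirmed by the extracted text), the family would read

$$ u_T(x,t) = (T-t)^{-\alpha}\, f\!\left(|x|\,(T-t)^{\beta}\right), \qquad T > 0. $$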
With respect to shooting from ξ 0 ∈ (0, ∞) very large, we state Proposition 4.2. In the previous notation, the orbits entering points P (v 0 ) with v 0 > 0 sufficiently large contain profiles f (ξ) with a backward change of sign at some point ξ 1 ∈ (0, ξ 0 ), where ξ 0 and v 0 are related by (4.3).
Proof. First of all, we work in the invariant plane {X = 0}, seen as a limiting case in variables (X, Y, Z). The phase space associated to the system (2.2) restricts on this plane to a reduced system, labeled (4.5), which possesses a unique orbit entering P 1 , and this orbit comes from the unstable node Q 2 at infinity. Let R(z 0 ) be a point on this unique orbit inside the plane {X = 0} and with component Z = z 0 > 0. By Lemma 3.1 and its proof we deduce that this unique orbit contained in {X = 0} enters P 1 tangent to the eigenvector e 3 = (0, 1, (m + p − 1)β/α), thus if z 0 > 0 is taken to be sufficiently small, we readily get that R(z 0 ) lies as close as we wish to P 1 . We consider small balls B(R(z 0 ), δ) centered at R(z 0 ). The next step in the proof is to show that for any given radius δ > 0, there exists some v(δ) sufficiently large such that the unique orbit entering the critical point P (v 0 ) = (0, −β/α, v 0 ), for any v 0 > v(δ), in the phase space associated to the system (4.2), intersects B(R(z 0 ), δ). To this end, we fix v 0 > 0 and perform a change of variable in (4.2), obtaining a system, labeled (4.6), whose linearization has explicit trajectories obtained by an easy integration. The trajectories of the nonlinear system (4.6) are approximated by the linear ones above. It thus follows that in a neighborhood of P (v 0 ) the points on the trajectory entering P (v 0 ) have an explicit form depending on a parameter λ > 0 sufficiently small. Coming back to the initial variables (X, Y, Z) by undoing the change of variable (4.1), and letting now λ = z 0 /v 0 , we get that the previous trajectory passes through a point Q(z 0 ), and given δ > 0, there exists a sufficiently large v(δ) such that Q(z 0 ) ∈ B(R(z 0 ), δ) for any v 0 > v(δ). We end the proof by a standard continuity argument showing, since Q 2 is an unstable node, that there exists δ 0 > 0 sufficiently small such that all the trajectories intersecting the ball B(R(z 0 ), δ 0 ) come from Q 2 ; in particular, all the orbits entering P (v 0 ) for v 0 > v(δ 0 ) also come from Q 2 .
The proof of Theorem 1.2 for profiles with interface of Type I is now standard and we will just give a sketch.
Proof of Theorem 1.2. Let A ⊆ (0, ∞) be the set of points η 0 ∈ (0, ∞) such that the unique profile having an interface of Type I at ξ = η 0 (according to Proposition 3.7) intersects the vertical axis with negative slope, that is, f (0) = a > 0, f ′(0) < 0. It follows by a standard argument of continuity that A is an open set, which is nonempty according to Proposition 4.1. Let then ξ 0 = sup A. Thus, ξ 0 ∉ A (since A is open) and ξ 0 < ∞, as readily follows from Proposition 4.2. It is then easy to check that the profile having an interface of Type I exactly at ξ = ξ 0 is a good profile with interface of Type I. We refer the reader to [12, Section 3] for a detailed proof of this statement, which applies absolutely identically in the present case.
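Although the existence proof itself is purely phase-space based, the backward shooting strategy can be illustrated numerically. The following Python sketch is a toy illustration only, not the authors' computation: it assumes (i) that the profile equation (1.6) has the standard self-similar form (f m )′′ − αf + βξf ′ + ξ σ f p = 0, with α and β as in the earlier sketch, and (ii) a heuristic Type I interface slope obtained from the local balance of (f m )′′ against βξf ′ at ξ 0 in the special case m = 2, where the profile is Lipschitz at its interface. Both points are assumptions, since the corresponding displays are not reproduced in the extracted text.

from scipy.integrate import solve_ivp

# Toy parameters: m + p > 2 and sigma > 2(1 - p)/(m - 1) = 1 both hold.
m, p, sigma = 2.0, 0.5, 1.5
D = sigma * (m - 1) + 2 * (p - 1)        # assumed common denominator of (1.5)
alpha, beta = (sigma + 2) / D, (m - p) / D

def rhs(xi, y):
    # Assumed profile equation, solved for f'' in the case m = 2,
    # where (f^2)'' = 2*f*f'' + 2*(f')^2.
    f, df = y
    num = alpha * f - beta * xi * df - xi**sigma * max(f, 0.0)**p - 2.0 * df**2
    return [df, num / (2.0 * max(f, 1e-12))]

def shoot_back(xi0, eps=1e-6):
    # Shoot backward from the Type I interface point xi0 towards xi = 0.
    C = beta * xi0 / 2.0                 # heuristic interface slope for m = 2
    touch = lambda xi, y: y[0] - 1e-7    # stop if the profile touches zero again
    touch.terminal = True
    sol = solve_ivp(rhs, [xi0 - eps, 1e-8], [C * eps, -C],
                    events=touch, rtol=1e-8, atol=1e-12)
    if sol.t_events[0].size > 0:
        return None                      # backward change of sign (Proposition 4.2)
    return sol.y[1, -1]                  # slope near xi = 0 (negative if xi0 lies in A)

for xi0 in (0.5, 1.0, 2.0, 4.0):
    print(xi0, shoot_back(xi0))

Scanning over ξ 0 and locating the largest value for which the returned slope is still negative mimics the definition ξ 0 = sup A used in the proof above.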
5 Blow-up profiles for σ small
This section is devoted to the proof of part (a) in Theorem 1.4. Let us stress first that by σ small we understand in this case σ sufficiently close to its lower limit 2(1 − p)/(m − 1) and not to 0, as in [12]. We begin with the following Proposition 5.1. There exists σ 0 > 2(1 − p)/(m − 1) such that for any σ ∈ (2(1 − p)/(m − 1), σ 0 ), all the orbits going out from the points P 0 and P 2 into the interior of the phase space associated to the system (2.2) connect to the point P 0 . Thus, all the profiles contained in these orbits are good blow-up profiles with interface of Type II.
Proof. Although the proposition is stated in terms of the system in variables (X, Y, Z), we prove it using once more the new variables (U, Y, V ) introduced in (4.1) and the autonomous system (4.2). Borrowing the strategy of the proof from [12, Proposition 4.1], the general plan is to "trace" the unique orbit going out of P 2 (according to Lemma 3.2) by imposing suitable barriers for it. Let us notice first that, in the variables (U, Y, V ), we denote the coordinates of the point P 2 by U (P 2 ) and Y (P 2 ), in order to shorten the notation. We divide the proof into several steps.
Step 1. On the one hand, the direction of the flow of the system (4.2) on the plane {U = U (P 2 )} is given by the sign of an expression which is negative for Y < Y (P 2 ). On the other hand, the direction of the flow of the system (4.2) on the plane {Y = Y (P 2 )} is given by the sign of an expression which is negative for U < U (P 2 ). Since the connection going out of P 2 is tangent to the eigenvector e 3 in Lemma 3.2, having negative X and Y components, it follows that this connection goes out from P 2 into the region {U < U (P 2 ), Y < Y (P 2 )} and remains forever in this region, according to the direction of the flow. Moreover, all the connections going out of P 0 enter the same region.
Step 2. We next look for a constant k > 0 such that the plane of equation {Y + kV = 1} is an upper barrier for the orbits from P 2 and P 0 . The direction of the flow of the system (4.2) over this plane is given by the sign of an expression which, taking into account that along the orbits we are considering we have U < U (P 2 ), is negative for Y < 0 if we take, for example, k as in the condition (5.1). Thus, in the region Y ≥ 0 the orbits starting from P 2 and P 0 satisfy the bound Y + kV ≤ 1 with k as in (5.1). In particular, the orbits will intersect the plane {Y = 0} at a point whose coordinate V fulfills V ≤ 1/k. Thus, at this crossing point, the estimate (5.2) holds. Letting k 1 be the constant associated to this estimate, we infer from (5.2) that there exists σ 0 sufficiently small (which we can also take to be smaller than 2) such that at the intersection point with the plane {Y = 0} the orbits satisfy U V < k 1 , thus they enter the half-space {Y < 0} in the region lying below the hyperbolic cylinder {U V = k 1 }. Step 3. The direction of the flow on the plane {Y = 0} is given by the sign of an expression h(U, V ). Thus, the plane {Y = 0} can be crossed from right to left in the region where h(U, V ) < 0. By inspecting the equations for U and V in the system (4.2), we deduce that V is increasing, while U is decreasing, along the trajectories in the half-space {Y ≤ 0}. Thus, after the first crossing, h(U, V ) remains always negative along the trajectories (as U continues to decrease while V continues to increase). This implies that the orbit will remain forever in the region {Y ≤ 0}.
Step 4. We analyze now the direction of the flow of the system over the hyperbolic cylinder {U V = k 1 } introduced in Step 2. This is given by the sign of an expression which is negative in the region {Y ≤ 0} for σ ∈ (2(1 − p)/(m − 1), σ 0 ), where σ 0 has been chosen, as in Step 2, such that σ 0 < 2. Thus, a connection entering the interior of {U V < k 1 } for such a σ cannot go out of the hyperbolic cylinder.
Step 5. Let us take now as barrier the plane {Y = −β/2α}. The direction of the flow on this plane is given by the sign of an explicit expression, and we infer that this plane cannot be crossed from right to left by any trajectory through the region {U V ≤ k 1 }.
Step 6. End of the proof. Gathering all the previous steps, we notice that for any σ ∈ (2(1 − p)/(m − 1), σ 0 ), the unique orbit going out of P 2 and all the orbits going out of P 0 stay forever in the region {U < U (P 2 ), Y < Y (P 2 )}, and, due to Steps 3 and 4, they cross the plane {Y = 0} in the region where U V < k 1 and thus remain forever in this region. Consequently, as shown in Steps 3 and 5, all these orbits will also remain forever in the strip {−β/2α ≤ Y ≤ 0}. Since the coordinates U and V are monotonic along the trajectories in the region {Y ≤ 0}, the orbits cannot end in a limit cycle and have to enter a critical point. We infer from the analysis done in Section 2 and Lemma 3.6 that these orbits have to enter the critical point P 0 (in the way explained in Section 2) and contain good profiles with interface of Type II.
The proof of Theorem 1.4, part (a), is now immediate. Indeed, Proposition 5.1 proves that there exists σ 0 > 2(1 − p)/(m − 1) such that for any σ ∈ (2(1 − p)/(m − 1), σ 0 ), all the good profiles satisfying property (P2) in Definition 1.1 have an interface of Type II. On the other hand, Theorem 1.2 shows that for any such σ there exists also at least a good profile with interface of Type I, and necessarily this good profile satisfies assumption (P1) in Definition 1.1, that is, f (0) = A > 0, f ′(0) = 0, as stated. We plot in Figure 2 a numerical simulation of the behavior of the orbits going out of the critical points P 2 and P 0 for σ sufficiently small (within the range of application of Theorem 1.4, part (a)).

This section is devoted to the proof of the remaining results for the range m + p > 2, that is, part (b) in Theorem 1.4 and Theorem 1.5. The core of the argument is to prove that for σ sufficiently large, the connection going out of P 2 according to Lemma 3.2 enters the critical point Q 3 in the phase space associated to the system (2.2). All these proofs are very similar to the ones in [13, Section 5] and we will give a sketch of them or quote them directly if no differences appear. We start with the following technical result: Lemma 6.1. The components X and Y are decreasing along the unique orbit going out of P 2 . Proof. It is obvious from the equation for Ẋ in the system (2.2) that X decreases along any trajectory in the region {Y < 0}. Assume for contradiction that the coordinate X is not decreasing along the orbit going out of P 2 (necessarily this should happen for Y ≥ 0). Since both components X, Y start in a decreasing way in a neighborhood of P 2 , there exists a first point η 1 > 0 such that Ẋ(η 1 ) = 0, Ẍ(η 1 ) ≥ 0. But 0 ≤ Ẍ(η 1 ) = (m − 1)X(η 1 )Ẏ (η 1 ), whence Ẏ (η 1 ) ≥ 0. Thus the coordinate Y had to change monotonicity already along the trajectory going out of P 2 , at some first point η 2 ≤ η 1 . That means Ẏ (η 2 ) = 0 and Ÿ (η 2 ) ≥ 0. If η 2 = η 1 , since Ẋ(η 2 ) = Ẏ (η 2 ) = 0, we obtain a contradiction after differentiating again the second equation in (2.2). If η 2 ∈ (0, η 1 ), that means Ẋ(η 2 ) < 0 (since η 1 > η 2 is the first point where X ceases to be decreasing). Taking into account that along the orbit going out of P 2 we have Y ≤ Y (P 2 ) ≤ 1 and that for σ > 2, Z is increasing in the region {Y ≥ 0}, we derive again a contradiction. Thus X is decreasing, and one can check in a similar way that the component Y is also decreasing along the orbit from P 2 .
Coming back to the analysis of the invariant plane {Z = 0}, we have a preparatory result, stated as Lemma 6.2, which has been already proved as [12, Lemma 5.4] (to which we refer the interested reader). We are now in a position to state the main technical result of this section. Proposition 6.3. There exists σ 1 > 0 sufficiently large such that for any σ ∈ (σ 1 , ∞), the unique orbit going out from P 2 in the phase space associated to the system (2.2) enters the critical point Q 3 . Moreover, for any σ ∈ (σ 1 , ∞) there are also orbits connecting from P 0 to Q 3 . Proof. The system (2.2) is topologically equivalent to the system (6.1) obtained for the variables (X, Y, Z̄), where, modulo some constants, Z̄ = Z/X, and which was used all along the paper [13]. In our case, we could not use this system from the beginning, as some critical points become points at infinity since p < 1. But we can use this system in the current proof, which is now perfectly identical to the proof of [13, Proposition 5.6]. Let us notice that Lemma 6.1 is independent of the change of variable Z̄ = Z/X, thus it also applies to the system (6.1). A careful inspection of the proof of [13, Proposition 5.6] (and of its previous technical result [13, Lemma 5.5]), which uses Lemma 6.1 as an important technical tool, shows that the fact that p > 1 is nowhere used along the proof, thus it can be extended to our case. Indeed, the only elements used in an essential way in the proof are the facts that m > 1, m > p, σ(m − 1) + 2(p − 1) > 0, and the fact that, considering the planes Z̄ = E − DY (6.3), respectively X = BY + C, B = m(m − 1)/(2m 2 + 5m + 1), C = (2m + 1)(m − 1)/(2m(2m 2 + 5m + 1)) (6.4), the orbit going out of P 2 starts in the region where simultaneously Z̄ > E − DY and X > BY + C. These two facts are also true in our range of parameters, since X and Y are exactly the same (they do not depend on p) and Z̄ passes to infinity at the starting point of the orbit for p < 1, thus Z̄ > E − DY holds true in a trivial way at the beginning of the orbit. We refer the reader then to the (completely detailed) proof of [13, Proposition 5.6] for the rather tedious and long calculations showing that the orbit starting from P 2 has to cross a critical plane after which it can no longer return. Going back to our initial system (2.2) and translating the result, we conclude that the connection from P 2 will connect to the critical point Q 3 for σ sufficiently large. Using Lemma 6.2 and standard continuity arguments, it follows that for any large σ for which the orbit from P 2 enters Q 3 , there are also orbits going out of P 0 and connecting to Q 3 . The details of this last argument are given in [13, Proposition 5.6, Step 4] or [12, pp. 2091-2092].
With all the previous technical steps, we are in a position to prove part (b) in Theorem 1.4.
Proof of Theorem 1.4, part (b). We employ a "three-sets argument": we split the range (2(1 − p)/(m − 1), ∞) of the parameter σ into the set A of values of σ for which the unique orbit going out of P 2 enters the critical point P 0 , the set C of values of σ for which this orbit enters the critical point Q 3 , and the remaining set B. The set C is open, since Q 3 is an attractor. The same argument cannot be used directly for the point P 0 , as it is not an attractor by itself. But we get from the analysis in Section 2 and the classification in [3] that there exists a sufficiently small neighborhood B(P 0 , δ) of P 0 such that any trajectory of the system entering B(P 0 , δ) either comes out from or enters P 0 . Since all the connections going out of P 0 enter the half-space {Y > 0}, it follows that P 0 behaves exactly like an attractor in the half-ball B(P 0 , δ) ∩ {Y < 0}, that is, any trajectory entering the half-ball enters P 0 afterwards. Since the orbits going out of P 2 can only enter P 0 after crossing the plane {Y = 0} (which in terms of profiles means arriving at a maximum point and then starting to decrease towards the interface), these orbits can only enter P 0 from the negative side {Y < 0}, thus the same argument as for an attractor shows that A is an open set. Since both A and C are nonempty, as ensured by Propositions 5.1, respectively 6.3, we infer that the set B is also nonempty (and closed), thus there exists at least one σ * ∈ (2(1 − p)/(m − 1), ∞) such that P 2 connects to P 1 for σ = σ * (containing thus a profile with interface of Type I).
The proof of the remaining Theorem 1.5 is similar to the previous "three-sets argument", since the same argument stays true also for the orbits going out of P 0 itself: if they enter P 0 forming the elliptic sector, as shown in Section 2, they do that also through the half-space {Y < 0}. We omit the details, which are easy and similar to the proof of [12, Lemma 5.5]. We plot in Figure 3 the outcome of numerical experiments on the behavior of the orbits going out of the critical points P 2 and P 0 in the critical case (as in Theorem 1.4, part (b)) and for σ large (according to Theorem 1.5).

This section is devoted to the case m + p < 2 and the proof of Theorem 1.6. Let us notice first, at a formal level, that the previous study already suggests that good blow-up profiles with interface do not exist. As we know already, in the phase space associated to the system (2.2) for m + p > 2, the critical points encoding the interface behavior are P 0 and P 1 . An inspection of the analysis in Section 2 for P 0 and in Lemma 3.1 for the point P 1 gives that, if the expression m + p − 2 changes sign, big differences occur. Indeed, recalling the invariants D, K 2 and K 3 in the analysis in [3], we notice that when m + p − 2 < 0 we are in the case D < 0, K 2 > 0, K 3 < 0, which corresponds to the phase portrait number 3 in [3, Figure 8, p. 329], showing that there are no longer orbits entering P 0 . On the other hand, the analysis in Lemma 3.1 changes, as the third eigenvalue λ 3 = −(m + p − 2)β/α becomes positive; thus, by inspecting the eigenvectors for P 1 , we obtain that there are no profiles either entering or going out of P 1 . Indeed, keeping the analysis in Lemma 3.1, the linearization near the point would have a two-dimensional unstable manifold generated by the eigenvectors e 2 = (0, 1, 0), e 3 = (0, 1, (m + p − 1)β/α), included completely in the invariant plane {X = 0}, and a one-dimensional stable manifold contained in the invariant plane {Z = 0}, none of them containing solutions to (1.6).
All the above are of course formal considerations, as this analysis does not remain valid when m + p < 2, since in this case Z(ξ) becomes infinite and thus the critical points P 0 and P 1 no longer exist in the same form as we studied them. But still, these formal arguments give us an understanding of why interfaces disappear when m + p < 2. To make it rigorous, it is sufficient to introduce a phase space for a system where the critical points can be analyzed one by one and show that no interface behavior may exist. Unfortunately, the system (2.2) we used is not good for this aim, since the points P 0 , P 1 and P 2 would all unify with the point Q 4 at infinity, making the analysis very difficult. We thus have to introduce a new quadratic autonomous system where the component Z behaves well. We are led to a new change of variable, labeled (7.1), where we recall that α is defined in (1.5) and the new independent variable η = η(ξ) is again defined through a differential equation. The differential equation (1.6) transforms into a system, labeled (7.2), and it is easy to see that (7.2) does not have finite critical points, thus all its critical points lie at infinity. The most important favorable feature of the system (7.2) is that, owing to the sign of m + p − 2 and to the definition of Z in (7.1), any interface behavior, or even tail behavior as ξ → ∞, has to be seen in a critical point at infinity with the component Z = 0. It is thus sufficient to study the critical points at infinity for the system (7.2) to show that such behavior is impossible and prove Theorem 1.6. We will be rather brief below, skipping some technical details, as the analysis is very similar to the one performed in Section 3.
Proof of Theorem 1.6. We pass to the Poincaré hypersphere following [24, Section 3.10], introducing new variables (X, Y , Z, W ). According to [24, Theorem 4, Section 3.10], the critical points at infinity in the phase space associated to the system (7.2) lie on the Poincaré hypersphere at points (X, Y , Z, 0) where X 2 + Y 2 + Z 2 = 1, and they solve the system (7.3): X Q 2 (X, Y , Z) − Y P 2 (X, Y , Z) = 0, X R 2 (X, Y , Z) − Z P 2 (X, Y , Z) = 0, Y R 2 (X, Y , Z) − Z Q 2 (X, Y , Z) = 0, where P 2 , Q 2 and R 2 are the homogeneous second degree parts of the polynomials in the right hand side of the system (7.2). Written for these specific quadratic parts, the system (7.3) becomes (7.4). Straightforward calculations give that the system (7.4) has seven critical points, and each one of them has a direct correspondence to the critical points studied in Section 3. We give (in a rather sketchy way) their analysis one by one below.
• The critical points (0, ±1, 0, 0) on the Poincaré hypersphere are topologically equivalent, according to part (b) of [24, Theorem 5, Section 3.10], to the origin in a system where the minus sign corresponds to one of the points and the plus sign to the other one. We see that both these points are nodes (one unstable and one stable) and that the orbits connecting to them contain profiles with a change of sign at some positive point ξ 0 ∈ (0, ∞). These points correspond to the critical points Q 2 and Q 3 of our initial system (2.2).
• The orbits connecting to the critical point (0, 0, 1, 0) cannot contain profiles having either an interface or a tail behavior as ξ → ∞, since we noticed that such behavior requires Z = 0. In fact, this point corresponds to the critical point Q 4 in the system (2.2).
• There is a further critical point for which a standard analysis shows that the two-dimensional unstable manifold lies in the invariant plane {x = 0}, while the one-dimensional stable manifold lies in the invariant plane {w = 0} (both invariant planes corresponding to the system (7.6)), and the orbits contained in these manifolds contain no profiles. This is the point that would have been the correspondent of the critical point P 1 in the system (2.2), as explained in the formal considerations related to the change of sign of m + p − 2 at the beginning of the current section.
• There exists one more critical point, having all three components non-zero. A detailed analysis of this point shows that it corresponds to the critical point P 2 in the system (2.2). However, for our goals the point can be discarded even without performing this analysis, as we explained that the points of interest for the interface or tail behavior should necessarily have Z = 0.
Since these are all the critical points of the system (7.2), and they thus codify all the information about the blow-up profiles, we conclude that there is no blow-up profile either with interface at a finite ξ 0 ∈ (0, ∞) or with a tail behavior as ξ → ∞, ending the proof.
Final comments and extensions
We gather in this final section some comments about the remaining cases and some open problems.
1. The very interesting case m + p = 2 is not considered in the current work and is studied in the companion paper [15]. This is because a significant number of differences in the techniques appear. By inspecting, for example, the autonomous system (2.2), we notice that the equation for Ż simplifies and, instead of the critical points studied here, we have a critical parabola, formed entirely by critical points, connecting the critical points P 0 = (0, 0, 0) and P 1 = (0, −β/α, 0). Studying the parabola involves different techniques than the ones we have used in the present work. Moreover, an interesting feature of the critical case m + p = 2 is that the interface behaviors coincide and there cannot be an interface at some ξ 0 ∈ (0, ∞) sufficiently large, radically contrasting with the results in Section 4. The backward shooting method is no longer applicable in this case and will be replaced by other techniques based on the geometry of the phase space.
2. The uniqueness of good profiles with interface of Type I is an interesting open problem raised by the results of the present paper. Indeed, good profiles with interface are not unique in general; moreover, we show in Section 2 that, for any fixed σ, there are infinitely many good profiles with interface of Type II. However, the local uniqueness of the Type I interface behavior at a given ξ 0 ∈ (0, ∞) (see Proposition 3.7) and the proof of Theorem 1.2 by the backward shooting method suggest, at an intuitive level, that the uniqueness of this type of profile for a given σ is expected to be true. We do not yet see a proof of it, and we feel that it requires results of monotonicity (of at least some of the trajectories, or of the global change of the phase space) with respect to σ, a task which is usually very difficult.
3. The non-existence of profiles when m + p < 2, given as Theorem 1.6, hides in fact a deeper phenomenon: it is expected that no solution except the zero one exists for Eq. (1.1) when m + p < 2. In the related paper [11] we show, among other results, such a sharp non-existence result for a related equation with a stronger weight on the reaction term, that is, ∂ t u = ∆u m + (1 + |x|) σ u p , in the same range m + p < 2. But we feel that the difference between (1 + |x|) σ and |x| σ is not essential for the non-existence, and our Theorem 1.6 confirms these expectations.
"year": 2020,
"sha1": "a41e5ed70c75f8eaaa70a07112fb044a6e18f36b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2004.05650",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a41e5ed70c75f8eaaa70a07112fb044a6e18f36b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
Pivoting to telemedicine in a single-day multidisciplinary liver tumor clinic during COVID-19: the Texas Liver Tumor Center experience
Cancer guidelines recommend that all patients with hepatocellular carcinoma (HCC) have an evaluation by a multidisciplinary team to assess liver health, stage the cancer, and discuss treatment and palliative care options. Coronavirus disease 2019 (COVID-19) had a catastrophic impact on patients with cancer resulting in increased disease burden due to late diagnosis and treatment delays. Late diagnosis has highlighted the need for the early intervention of palliative care for patients with HCC. Conversion to telemedicine has been essential to caring for patients with all stages of cancer without added delays. Texas Liver Tumor Center (TLTC) offers patients with liver cancer at any stage a single-day multidisciplinary evaluation with tumor board review facilitating the early integration of treatment and palliative care services. National Comprehensive Cancer Network (NCCN) guidelines support increasing and improving access to palliative care. TLTC allows for the early integration of palliative care within a 1-day clinic model with an incorporated tumor board. This unique model of patient care decreases the burden of separate patient visits, may expedite the time from diagnosis to first treatment, facilitates the early intervention of palliative care specialists, and allows for optimal screening for clinical trials. In this review, we will provide an overview of the current multidisciplinary models of care for HCC and describe the successful pivot of TLTC from a fully in-person single-day multidisciplinary clinic with a multidisciplinary tumor board (MDTB) to a fully virtual experience, thereby maintaining access to this unique clinical model of patient care during the COVID-19 pandemic. The ability to pivot from in-person clinical visits to completely virtual visits increases patient access to care and enables more physicians to participate. Areas for future study include the impact on patient experience, clinical outcomes, and cost-effectiveness of this high-resource model.
Introduction
Background
Cancer guidelines recommend that all patients with hepatocellular carcinoma (HCC) have an evaluation by a multidisciplinary team to assess liver health, stage the cancer, and discuss treatment or palliative options (1,2). National Comprehensive Cancer Network (NCCN) guidelines support increasing and improving access to palliative care (3). The team may include a hepatologist, medical oncologist, pathologist, diagnostic radiologist, interventional radiologist, surgical/transplant oncologist, radiation oncologist, anesthesiologist, gastroenterologist, palliative care specialist, oncology nurses, nutritionists/dieticians, psychologists/psychiatrists, and social workers (1,2). As many patients with liver cancer present with advanced disease, bringing a comprehensive team together is crucial in determining the optimal multimodality treatment approach, including palliative options (4). The multidisciplinary team determines the modality, sequencing, and intensity of therapies based on the number, size, and location of tumors, patient performance status, patient characteristics, liver function, and patient goals.
Rationale and knowledge gap
Many high-volume centers have established multidisciplinary HCC tumor boards to review imaging and treatment options and develop a plan of care. Tumor board recommendations are provided to the referring physician/team, who then make the recommended specialty referrals. The specialty service handles evaluation, scheduling, and coordination of treatment, which is usually far removed from the tumor board process. There is the potential for patients to be lost during the multiple referral and scheduling processes and for miscommunication with the specialists regarding tumor board recommendations and the care plan. A 1-day clinic model with an incorporated tumor board decreases the burden of separate patient visits, may expedite the time from diagnosis to first treatment, facilitates the early intervention of palliative care specialists, and allows for optimal screening for clinical trials. Conversion to telemedicine has been essential to caring for patients with all stages of cancer without additional delays.
According to Li et al., coronavirus disease 2019 (COVID-19) had a catastrophic impact on patients with cancer, resulting in increased disease burden due to late diagnosis and treatment delays (3). The Johns Hopkins Hospital Multidisciplinary Liver Clinic (MDLC) noted an increase in patients presenting with incurable or untreatable cancers during the COVID-19 pandemic due to delays in cancer diagnosis and treatment (3). This highlights the need for the early integration of palliative care into the care plan of HCC patients. Texas Liver Tumor Center (TLTC) offers the opportunity to introduce palliative care early in the evaluation process.
Objective
In this review, we will provide an overview of current multidisciplinary models of care for HCC, and we describe the successful pivot of TLTC from a fully in-person single-day multidisciplinary clinic with a multidisciplinary tumor board (MDTB) to a fully virtual experience, thereby maintaining access to this unique clinical model of patient care during the COVID-19 pandemic.
Review of models of multidisciplinary clinics for HCC
In-person multidisciplinary models for liver cancer
Zhang et al. published the impact of a single-day multidisciplinary clinic on the management of liver tumor patients at the Johns Hopkins Hospital MDLC (4). They concluded that MDLC evaluation significantly impacted management due to changes in diagnosis and treatment plan. Jia et al. reported the practice patterns and real-world clinical outcomes for patients presenting to the Johns Hopkins Hospital MDLC for HCC and biliary tract cancer (BTC) (5). The MDLC looked at changes in diagnosis, changes in treatment, overall survival (OS), and disease-free survival. The authors concluded that coordinated expert multidisciplinary care is feasible for primary liver cancers, with high adherence to recommendations, and resulted in a change in treatment for second-opinion patients. It is unclear from the analysis if patients had imaging, laboratory testing, tumor board review, and multidisciplinary physician visits in a single day.
According to Soares et al., there are broader benefits to the multidisciplinary clinic, including improved patient satisfaction, decreased time to initial treatment, and changes in management strategies (6). The ability to improve patient care decisions by incorporating the simultaneous collaboration of multiple specialists addresses the issue of patient care delays common for patients with HCC (6). In their analysis of the multidisciplinary single-day HCC clinic at UT Southwestern, Yopp and colleagues found that the multidisciplinary approach is associated with an improved median survival of 13.2 months, compared to 4.8 months observed in patients diagnosed before the multidisciplinary clinic formed (P=0.005) (7).
Virtual tumor board only model
Many cancer centers have converted to a fully virtual tumor board format. Dharmarajan et al. from the University of Pittsburgh published their experience transitioning to a fully virtual MDTB during the COVID-19 pandemic (8). The data revealed that 57.9% of attending physicians and graduate medical trainees preferred the virtual MDTB to a traditional in-person format, and 78% preferred to continue virtually after COVID restrictions were lifted, citing the ease of attendance and greater participation by outside physicians (8). Disadvantages focused on technical issues related to poor sound quality, poor connections, or inability to screen share (8).
Based on our literature review, summarized in Table 1, there are no recent publications regarding a fully virtual single-day multidisciplinary clinic for patients with HCC. This paper will describe the TLTC experience converting from an in-person clinical format to a fully virtual clinical format during the COVID-19 lockdown, and the processes of both formats.
TLTC
TLTC offers a single-day, comprehensive, multidisciplinary clinic for patients with liver tumors, including HCC. TLTC is a University Health Transplant Institute San Antonio clinic in partnership with Texas Liver Institute, a private transplant hepatology practice, and University of Texas Health San Antonio (UTHSA). The Tumor Center is located at South Texas' only National Cancer Institute (NCI)-designated cancer center, Mays Cancer Center at UTHSA, serving a majority Latino population.
In-person TLTC
Between July 15, 2016, and March 23, 2020, TLTC and the MDTB were conducted in person. Patients present to TLTC at 7:30 am for a blood draw for labs and are then transported with a TLTC staff member to the imaging center for updated magnetic resonance imaging (MRI) and/or computed tomography (CT) imaging. TLTC staff remain with the patient until the imaging is completed. Patients are transported back to TLTC to begin morning evaluations by transplant hepatology, the transplant surgery physician assistant, the dietician, and the social worker. The rotation of the individual clinician visits is managed by the TLTC administrative assistant (AA).
The patient is discharged for a 1- to 2-hour lunch break, during which time the MDTB meets and each patient is presented for comprehensive review and discussion. Board attendees include transplant hepatology, transplant surgery, surgical oncology, gastrointestinal oncology, radiation oncology, interventional radiology, body radiology, palliative care, and referring physicians, who are encouraged to participate. The MDTB engages in imaging and pathology review followed by a robust discussion of treatment options, including clinical trial candidacy. The board formulates treatment recommendations and a comprehensive plan of care for each patient.
Upon return to TLTC in the afternoon, the patient is roomed to await visits with the treating physicians, as recommended by the MDTB. The patient may see one to three physicians to discuss the plan of care. All procedures, imaging, lab work, and surgeries are scheduled at the time of the visit. At the end of the day, each patient meets again with the TLTC physician assistant or registered nurse to review the treatment recommendations and care plan and answer questions. A written care plan is provided, including all scheduling and instructions. This information is also available to the patient via the electronic health record (EHR).
Virtual TLTC
The COVID-19 quarantine and lockdown in March 2020 created the need for a quick pivot to a completely virtual format for the full-day multidisciplinary visit and the tumor board. TLTC transitioned to a full-day telemedicine visit and continued full operations without cessation of services. The Webex platform was used, as it was already established within the health system.
The first fully virtual TLTC was held on March 30, 2020. As institutional mandates regarding virtual care eased in October 2020, allowing for in-person meetings, TLTC adopted a hybrid visit format, allowing some in-person visits to decompress the clinic as mandated by the health system. In-person visits were allowed with restrictions on the number of people in the clinic: only one support person was allowed per patient, and additional staff, including the dietician and social worker, remained remote. Physicians could conduct in-person visits. TLTC would pivot several times from hybrid to fully virtual as required by hospital policy. Figure 1 illustrates the percentage of patients seen in person and virtually from March 2020 through 2022.
Methods
TLTC administrative staff contact each patient to discuss the virtual visit format and establish EHR patient portal access. A secure virtual Webex invitation for the date and time of the visit is emailed to the patient. Laboratory testing and pre-clinical imaging are scheduled for the patient by the TLTC team before the visit. The AA calls each patient, reviews Webex installation, and performs a Webex test 3-7 days prior to the visit. The patient is instructed to log into the visit with camera and microphone on at the appointed date and time and to remain in the visit until evaluated individually by the transplant hepatologist, transplant surgery physician assistant, social worker, and dietician. Clinicians rotate in and out of the virtual clinic room as directed by the AA. The AA remains in the visit throughout the morning to keep the Webex visit open. After the morning session is completed, the patient is logged out of the visit with instructions to log back in at a specific time for the afternoon session.
The tumor board meets via Webex to review imaging and pathology, discuss recommendations, and formulate a plan of care for each patient. Interventional radiology leads the meeting with an imaging review, followed by pathology review and discussion. TLTC staff schedule treatment dates and times during the tumor board to ensure that each patient leaves with a schedule of appointments for all recommended procedures or imaging.
Patients log back into the Webex meeting in the afternoon to meet the treating physicians and review the board recommendations and the care plan. The physicians again rotate into the virtual room as directed by the AA. If surgery is recommended, the surgical nurse coordinator meets with the patient to sign virtual consent for surgery. Patients who are candidates for liver transplant evaluation consult virtually with the transplant surgeon to facilitate liver transplant evaluation. Before discharge, the TLTC nurse coordinator meets with the patient to review scheduled procedures and imaging, answer questions, and provide the recommendations and treatment plan in writing. The recommendations and treatment plan are also available to the patient via the EHR. All recommendations, imaging reports, lab results, and clinical visit notes are provided to the referring physician and primary care provider. Table 2 provides a summary of the TLTC in-person and virtual day.
Findings
While multidisciplinary evaluation to determine the therapy plan for HCC is considered the standard of care, many patients do not have access to advanced evaluation and treatment options in their local community. The Association of Community Cancer Centers (ACCC) conducted a survey to identify factors associated with the delivery and coordination of care for HCC patients (9). Of the 31 providers, 69% were from non-teaching community hospitals, freestanding cancer centers, private practice, or other settings, and 61% of their cancer programs did not have a specialized hepatobiliary multidisciplinary team.
Offering a virtual visit for new patients who have transportation barriers, are geographically distant, or are unable to physically attend an in-person clinical visit increases access to care.
Patients who do not have access to liver transplant evaluation and advanced treatment options for HCC in their communities can access the single-day model virtually, decreasing emotional, financial, and physical stress. According to Worster and Swartz, European studies have shown that integrating palliative care using telemedicine improves symptom management (10). The single-day model encourages the introduction of palliative care to HCC patients at the initial evaluation to provide support at any stage of the disease process. Clinicians collaborate in real time during and after the tumor board, increasing the quality and continuity of care. Potential benefits for hospital-based systems include increased referral volumes and recovery of overhead costs through imaging, surgeries, procedures, and infusions.

While the single-day multidisciplinary model is a gold standard for expediting care for patients with HCC, there are limitations. The long visit time, as well as the breadth of information given, can be overwhelming to the patient and family. Insurance barriers include obtaining financial clearance for several specialties and insurance company restrictions on virtual new-patient visits post-COVID-19. In late 2022, most insurance carriers warned that new-patient telemedicine visits would no longer be covered, while follow-up telemedicine visits would remain covered. Insurance coverage was another principal factor in the conversion of the fully virtual and hybrid TLTC model back to an in-person model. In addition, scheduling staff noted that, when offered the option of a Webex visit or an in-person visit, patients chose to attend the clinic in person.
Additionally, it is a challenge to coordinate physician schedules due to competing clinical responsibilities. Some patients lack access to technology or are unfamiliar with it; in many cases, the latter is overcome by enlisting family members who do have access to and understand technology. The single-day multidisciplinary model requires a high level of coordination and staffing to manage preclinical testing and imaging, virtual access, and the comfort of patients and their families for the day (snacks, water, box lunches). As referenced previously, the data indicated that, when given the choice between an in-person initial evaluation and a telemedicine initial evaluation, patients prefer to be seen in the clinical setting. Table 3 summarizes the benefits and limitations of the single-day multidisciplinary model for patients with liver tumors.
Conclusions
TLTC offers patients with liver tumors an in-person or virtual multidisciplinary evaluation with tumor board review, which is the new gold standard. The rising incidence of primary liver cancers and the complexity of diagnostic and treatment options, including surgery or transplantation and an increasing array of immunotherapies, make this an ideal disease state for a multidisciplinary team approach. Such a hybrid model increases access to care for patients with all stages of disease, introduces palliative care early in the evaluation process, facilitates increased physician participation in treatment planning, and culminates in expedited time to treatment for most patients. Areas for future study include the impact of the single-day model on the patient experience, clinical outcomes, and the cost-effectiveness of this high-resource model.

Figure 1 caption: Virtual (i.e., Webex) versus in-person clinic visits by year at the TLTC. TLTC, Texas Liver Tumor Center.
Table 1
Summary of multidisciplinary clinic and tumor board publications

Study | Objective | Finding
(not stated) | Multidisciplinary cancer clinics may improve patient care | The multidisciplinary cancer clinic is an effective and convenient means of delivering expert opinion about the diagnosis and management of liver tumors
Jia et al. | Multidisciplinary care has been associated with improved survival in patients with primary liver cancers | Coordinated expert multidisciplinary care is feasible for primary liver cancers, with high adherence to recommendations and a change in treatment for a sizeable minority of patients
Soares et al. | To evaluate differences in OS in patients with HCC after the establishment of a multidisciplinary clinic for HCC | The multidisciplinary clinic for the evaluation and treatment of patients with HCC is associated with improved OS
Yopp et al. | To evaluate differences in OS in patients with HCC after the establishment of a multidisciplinary clinic for HCC | A multidisciplinary clinic for evaluating and treating patients with HCC is associated with improved OS
Dharmarajan et al. | To assess the feasibility of designing and implementing a virtual multidisciplinary clinic in a large academic network | A virtual multidisciplinary clinic is feasible to design and implement in a large academic medical network

COVID-19, coronavirus disease 2019; OS, overall survival; HCC, hepatocellular carcinoma.
| 2023-11-14T06:19:02.556Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "4b0e5e6e75e960718b9047b1cc2eb6456cb200c8",
"oa_license": "CCBYNCND",
"oa_url": "https://apm.amegroups.org/article/viewFile/118406/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "16c192d12e7a79a5ce3807f9cdcc9bd449ff7802",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2720332 | pes2o/s2orc | v3-fos-license | Prediction of primary somatosensory neuron activity during active tactile exploration
Primary sensory neurons form the interface between world and brain. Their function is well-understood during passive stimulation but, under natural behaving conditions, sense organs are under active, motor control. In an attempt to predict primary neuron firing under natural conditions of sensorimotor integration, we recorded from primary mechanosensory neurons of awake, head-fixed mice as they explored a pole with their whiskers, and simultaneously measured both whisker motion and forces with high-speed videography. Using Generalised Linear Models, we found that primary neuron responses were poorly predicted by whisker angle, but well-predicted by rotational forces acting on the whisker: both during touch and free-air whisker motion. These results are in apparent contrast to previous studies of passive stimulation, but could be reconciled by differences in the kinematics-force relationship between active and passive conditions. Thus, simple statistical models can predict rich neural activity elicited by natural, exploratory behaviour involving active movement of sense organs. DOI: http://dx.doi.org/10.7554/eLife.10696.001
Introduction
A major challenge of sensory neuroscience is to understand the encoding properties of neurons to the point that their spiking activity can be predicted in the awake animal, during natural behaviour. However, accurate prediction is difficult without experimental control of stimulus parameters and, despite early studies of awake, behaving animals (Hubel, 1959), subsequent work has most often effected experimental control by employing anaesthesia and/or passive stimulation. Yet the active character of sensation (Gibson, 1962; Yarbus, 1967), based on motor control of the sense organs, is lost in reduced preparations. Recent methodological advances permit a way forward: in the whisker system, it is now possible to record neuronal activity from an awake mouse, actively exploring the environment with its whiskers, whilst simultaneously measuring the fundamental sensory variables (whisker kinematics and mechanics) likely to influence neuronal activity (O'Connor et al., 2010b).
Our aim here was to predict spikes fired by primary whisker neurons (PWNs) of awake mice engaged in natural, object exploration behaviour. The manner in which primary neurons encode sensory information fundamentally constrains all downstream neural processing (Lettvin et al., 1959). PWNs innervate mechanoreceptors located in the whisker follicles (Zucker and Welker, 1969; Rice et al., 1986). They are both functionally and morphologically diverse, including types responsive to whisker-object contact and/or whisker self-motion (Szwed et al., 2003; Ebara et al., 2002). PWNs project to the cerebral cortex, analogously to other modalities, via trisynaptic pathways through the brainstem and thalamus (Diamond et al., 2008).
Here, we show that PWN responses are well-predicted by rotational force ('moment') acting on the whisker, while whisker angle is a poor predictor. Moment coding accounts for spiking during both whisker-object interaction and whisker motion in air. Moment coding can also account for findings in previous studies of passive stimulation in the anaesthetized animal, indicating that the same biomechanical framework can account for primary somatosensory neuron responses across diverse states. Our results provide a mechanical basis for linking receptor mechanisms to tactile behaviour.
Results
Primary whisker neuron activity during object exploration is predicted by whisker bending moment
We recorded the activity of single PWNs from awake mice (Figure 1A,E, Figure 1-figure supplement 1) as they actively explored a metal pole with their whiskers (N = 20 units). At the same time, we recorded whisker motion and whisker shape using high-speed videography (1000 frames/s; Figure 1D, Video 1). As detailed below, PWNs were diverse, with some responding only to touch, others also to whisker motion. Since each PWN innervates a single whisker follicle, we tracked the 'principal whisker' of each recorded unit from frame to frame, and extracted both the angle and curvature of the principal whisker in each video frame (total 1,496,033 frames; Figure 1B-E; Bale et al., 2015). Whiskers are intrinsically curved, and the bending moment on a whisker is proportional to how much this curvature changes due to object contact (Birdwell et al., 2007): we therefore used 'curvature change' as a proxy for bending moment (O'Connor et al., 2010a). Whisker-pole contacts caused substantial whisker bending (curvature change), partially correlated with the whisker angle (Figures 1E, 4E) and, consistent with Szwed et al. (2003) and Leiser and Moxon (2007), robust spiking (Figures 1E, 2E).
To test between candidate encoding variables, our strategy was to determine how accurately it was possible to predict PWN activity from either the angular position or the curvature change of each recorded unit's principal whisker. To predict spikes from whisker state, we used Generalised Linear Models (GLMs; Figure 2A). GLMs, driven by whisker angle, have previously been shown to provide a simple but accurate description of the response of PWNs to passive stimulation (Bale et al., 2013) and have mathematical properties ideal for robust parameter-fitting (Truccolo et al., 2005; Paninski et al., 2007).

eLife digest
The brain receives information from the world through the senses. In particular, cells called sensory neurons can detect signals from the environment and relay the information to the brain. A critical test of how well we understand the role of a given sensory neuron is whether it is possible to predict its activity under natural conditions. Previous research has succeeded in predicting the responses of sensory neurons in animals that were anaesthetised. However, it has been difficult to extend this approach to awake animals.
Mice and other rodents rely on their whiskers to tell them about their surroundings. Campagner et al. set out to predict how the sensory neurons that send information from whiskers (or 'whisker neurons') to the brain would respond in awake mice that were actively exploring an object in their environment. The approach involved using high-speed video (1,000 frames per second) to film the whiskers while the mice used them to explore a thin metal pole. At the same time, Campagner et al. recorded the electrical activity of the whisker neurons. The videos were used to calculate the forces acting on the whiskers, and then computational models were used to relate the activity of the neurons to the forces.
This approach allowed Campagner et al. to predict the responses of the whisker neurons, even when the mice were exploring the pole freely and unpredictably, simply from knowledge of the forces that were acting on the whiskers.
Together, these findings move the field of neuroscience forward by showing that sensory signals and neuronal responses can be correlated even in an awake animal. A key challenge for the future will be to further extend the approach to investigate how the signal conveyed by sensory neurons is transformed by neural circuits within the brain.
For each recorded unit (median 69,672 frames and 550 spikes per unit), we computed the GLM parameters that best predicted the unit's spike train given the whisker angle time series, using half the data as a training set for parameter-fitting (8 total fitted parameters: 5 for the stimulus filter, 2 for the history filter, 1 bias; Figure 2-figure supplement 3). We then assessed prediction performance using the other half of the data as a testing set: we provided the GLM with the whisker angle time series as input and calculated the predicted spike train evoked in response (Materials and methods). We then compared the recorded spike train to the GLM-predicted one (Figure 2B-C) and quantified the similarity between the smoothed spike trains using the Pearson correlation coefficient (PCC). This is a stringent, single-trial measure of model prediction performance (Figure 2-figure supplement 1B). We then repeated this entire procedure for the whisker curvature time series. Although angle GLMs predicted spike trains of a few units moderately well (2/20 units had PCC > 0.5), they performed poorly for the majority (median PCC 0.06, IQR 0.019-0.3; Figure 2B-D, orange). This was unlikely to be because of nonlinear tuning to whisker angle, since quadratic GLMs fared only marginally better (median PCC 0.097, IQR 0.042-0.31; p=0.044, signed-rank test, Figure 2-figure supplement 1A). In contrast, we found that, at the population level, the curvature GLMs were substantially more accurate than the angle GLMs (median PCC 0.52, IQR 0.22-0.66; p=0.0044, signed-rank test; Figure 2B-D, blue), with prediction accuracy up to PCC 0.88.

Figure 1. Electrophysiological recording from single primary whisker units in awake, head-fixed mice and simultaneous measurement of whisker kinematics/mechanics. (A) Schematic of the preparation, showing a tungsten microelectrode array implanted into the trigeminal ganglion of a head-fixed mouse, whilst a metal pole is presented in one of a range of locations (arrows). Before the start of each trial, the pole was moved to a randomly selected, rostro-caudal location. During this time, the whiskers were out of range of the pole. At the start of the trial, the pole was rapidly raised into the whisker field, leading to whisker-pole touch. Whisker movement and whisker-pole interactions were filmed with a high-speed camera. (B, C) Kinematic (whisker angle θ) and mechanical (whisker curvature κ, moment M, axial force F_ax and lateral force F_lat) variables were measured for the principal whisker in each video frame. When a whisker pushes against an object during protraction (as in panel D, red and cyan frames), curvature increases; when it pushes against an object during retraction (as in panels B and C), it decreases. (D) Individual video frames during free whisking (yellow and green) and whisker-pole touch (red and cyan) with tracker solutions for the target whisker (the principal whisker for the recorded unit, panel E) superimposed (coloured curve segments). (E) Time series of whisker angle, push angle and curvature change, together with simultaneously recorded spikes (black dots) and periods of whisker-pole contact (red bars). Coloured dots indicate times of correspondingly coloured frames in D. DOI: 10.7554/eLife.10696.003
The result that curvature predicted PWN responses better than angle was robust to the number of fitted parameters: a GLM sensitive to instantaneous curvature (4 parameters: 1 stimulus filter parameter, 2 history filter parameters and 1 bias) exhibited very similar prediction accuracy (Figure 2-figure supplement 1C). The result was also robust to time-scale: prediction accuracy based on curvature was significantly greater than that based on angle for smoothing time-scales in the range 1-100 ms (signed-rank test, p<0.05, Bonferroni-corrected).
Although the activity of most units was better predicted by whisker curvature change than by whisker angle, there was significant variability in prediction performance, and there were a few units for which the angle prediction performance was appreciable ( Figure 2D). However, we found that this could largely be attributed to redundancy. When a mouse whisks against an object, curvature change and angle fluctuate in concert (Birdwell et al., 2007;Bagdasarian et al., 2013;Pammer et al., 2013;Figures 1E, 4E and Figure 4F-G). When we fitted GLMs using both curvature change and angle as input, these GLMs predicted the spike trains no more accurately (median PCC 0.53, IQR 0.40-0.62; p=0.067, signed-rank test; Figure 2D) than GLMs based on curvature change alone. Moreover, on a unit-by-unit basis, for 65% of units, curvature change GLMs predicted spikes better than angle (signed-rank test, p<0.05, Bonferroni-corrected); only for 5% of units did angle predict spikes better than curvature change. GLMs based on curvature change also predicted spike trains more accurately than GLMs based on 'push angle' -the change in angle as the whisker pushes against an object ( Figure 1E; median PCC 0.25, IQR 0.04-0.45; p=0.006, signed-rank test). Moreover, prediction accuracy of GLMs fitted with both push angle and curvature change (median PCC 0.52, IQR 0.2-0.69) inputs was no better than that of GLMs fitted with curvature alone (p=0.43, signed-rank test).
In principle, neurons might also be sensitive to the axial force component (parallel to the whisker follicle) and/or lateral force component (orthogonal to axial) associated with whisker-object contact (Figure 1B; Solomon and Hartmann, 2006; Pammer et al., 2013). We restricted our analysis to bending moment since, under our experimental conditions, axial/lateral force components were near-perfectly correlated with bending moment (Figure 2-figure supplement 2) and bending moment is likely to have a major influence on stresses in the follicle (Pammer et al., 2013).

Video 1. Video of an awake mouse, exploring a pole with its whiskers, with simultaneous electrophysiological recording of a primary whisker neuron. At the start of the video, the pole is out of range of the whiskers. The whisker tracker solution for the principal whisker of the recorded unit is overlaid in red. White dots represent spikes; orange trace shows whisker angle (scale bar = 40°); blue trace shows whisker curvature change (scale bar = 0.05 mm^-1). Video was captured at 1000 frames/s and is played back at 50 frames/s.

To further test the curvature-encoding concept, we asked whether curvature GLMs could account for the response of PWNs to whisker-pole touch. To this end, we parsed the video data into episodes of 'touch' and 'non-touch'. Units fired at a higher rate during touch than otherwise (Szwed et al., 2003; Leiser and Moxon, 2007). Without any further parameter-adjustment, the curvature-based GLMs reproduced this effect (Figure 2E): the correlation coefficient between recorded and GLM-predicted firing rate for touch episodes was 0.97. Collectively, the above results indicate that, during active touch, the best predictor of whisker primary afferent firing is not whisker angle but rather the bending moment.

Figure 2 (partial caption). Spike trains discretized using 1-ms bins and smoothed with a 100 ms boxcar filter. Prediction performance (Pearson correlation coefficient, PCC) for this unit was 0.59. Inset shows tuning curves for both GLMs, computed by convolving the relevant sensory time series (angle or curvature change) with the corresponding GLM stimulus filter to produce a time series of filter coefficients, and estimating the spiking probability as a function of filter coefficient (25 bins). (C) Analogous to panel B, for a second example unit. Prediction performance PCC for this unit was 0.74. (D) Prediction performance (PCC between predicted and recorded spike trains) compared for GLMs fitted with three different types of input: curvature change alone; angle alone; both curvature change and angle. Each blue/orange/green dot is the corresponding PCC for one unit: large black dots indicate median; error bars denote inter-quartile range (IQR). To test statistical significance of each unit's PCC, the GLM fitting procedure was repeated 10 times on spike trains subjected each time to a random time shift: magenta dots show these chance PCCs for the unit indicated by the magenta circle; the mean chance PCC was computed for each unit and the large grey dot shows the median across units. Black circles indicate units whose PCC was significantly different to chance (signed-rank test, Bonferroni-corrected, p<0.0025). To facilitate direct comparison between results for curvature change GLM and angle GLM, these are re-plotted in the inset. (E) Left. Firing rate during touch episodes compared to that during non-touch episodes for each unit, compared to corresponding predicted firing rates from each unit's curvature change GLM. Right. Medians across units: error bars denote IQR; * denotes differences significant at p<0.05 (signed-rank test). DOI: 10.7554/eLife.10696.007
Primary whisker neuronal activity during whisking is predicted by moment
During free whisking, in the absence of whisker-pole contact, whisker curvature, and therefore bending moment, changed little (Figure 1E, Figure 4F), consistent with previous studies (Quist et al., 2014). Yet, 50% of recorded units ('whisking-sensitive units') were significantly modulated by whisking amplitude (Figure 3A). Consistent with Szwed et al. (2003), PWNs were diverse: 45% were curvature-sensitive (significant PCC for the curvature-based GLM) but not whisking-sensitive; 45% were both curvature- and whisking-sensitive; 5% were whisking-sensitive but not curvature-sensitive. The presence of whisking sensitivity suggests that moment due to whisker bending is not the only force that influences PWN activity. A likely candidate is the moment associated with the rotational acceleration of a whisker: this moment is proportional to the whisker's angular acceleration (Quist et al., 2014; Materials and methods). Consistent with this possibility, we found that whisking-sensitive units were tuned to angular acceleration (Figure 3B) and that 50% of these were phase-modulated (Figure 3C). Angular acceleration tuning was diverse: some units fired to acceleration in a particular direction (rostral or caudal), whilst others responded to acceleration in both directions (Figure 3B, Figure 3-figure supplement 1). Moreover, for whisking-sensitive units (but not whisking-insensitive ones), quadratic GLMs trained on data from non-touch episodes were able to predict spikes using whisker angular acceleration as input (Figure 3D-E; whisking-sensitive units, median PCC 0.37, IQR 0.18-0.58; whisking-insensitive, median PCC -0.0071, IQR -0.035-0.041; p=0.0017, rank-sum test for whisking-sensitive vs whisking-insensitive units). For 70% of whisking-sensitive units, directional selectivity for acceleration was consistent with that for curvature. These findings indicate that, in the absence of whisker-object contact, responses of PWNs to whisking itself can be accounted for by sensitivity to the moment associated with angular whisker acceleration.
Relation between kinematics and mechanics is different in active vs passive touch and has implications for neural encoding
We found, during active object exploration, that curvature change, but not whisker angle, predicts PWN firing. In apparent contrast, studies using passive whisker stimulation have reported that PWNs encode whisker angle and its temporal derivatives (Zucker and Welker, 1969; Gibson and Welker, 1983; Lichtenstein et al., 1990; Jones et al., 2004; Arabzadeh et al., 2005; Bale and Petersen, 2009; Lottem and Azouz, 2011; Bale et al., 2013). We wondered whether the discrepancy might be due to differences in whisker mechanics between passive and active stimulation conditions. To test this, we analysed the relationship between angle and curvature change during active touch and compared it to that during passive whisker stimulation. During active pole exploration, angle and curvature change were, overall, only loosely related (median correlation coefficient 0.20, IQR 0.079-0.39, Figures 4D-E). Important contributory factors were that the angle-curvature relationship was both different for touch compared to non-touch (Figure 4F) and dependent on object location (Figure 4G). In contrast, during passive stimulation, whisker angle was near-perfectly correlated with curvature change (for C2, correlation coefficients 0.96 and 0.94, respectively; similar results for C5) (Birdwell et al., 2007). Simulations confirmed that, due to the tight relationship between the variables, a unit tuned purely to curvature change can appear tightly tuned to angle (Figure 4-figure supplement 1). The implication is that apparent sensitivity to whisker angle under passive stimulation conditions can be accounted for by moment-tuning.
Discussion
Prediction of spikes fired by sensory neurons under natural conditions
In the endeavour to understand how neurons encode and process sensory information, there is a basic tension between the desire for tight experimental control and the desire to study animals under natural, unconstrained conditions. Theories of sensory encoding suggest that neural circuits have evolved to operate efficiently under natural conditions (Simoncelli and Olshausen, 2001; Reinagel, 2001). Previous studies have succeeded in predicting/decoding spikes evoked by passive presentation of natural sensory stimuli to anaesthetised/immobilised animals (Lewen et al., 2001; Arabzadeh et al., 2005; Pillow et al., 2008; Mante et al., 2008; Lottem and Azouz, 2011; Bale et al., 2013), but it has been difficult to extend this approach to encompass natural, active movement of the sense organs. Here, we have addressed this general issue, taking advantage of experimental possibilities recently created in the whisker system (O'Connor et al., 2010a), and the ability of computational methods, such as GLMs, to uncover stimulus-response relationships even from data with complex statistical structure (Paninski et al., 2007; Fairhall and Sompolinsky, 2014).
Our main finding was that responses of PWNs, recorded as an awake mouse actively explores an object with its whiskers, can be predicted from the forces acting on the whiskers. Given that, for each unit, we were attempting to predict the entire ~70 s time course of activity, the variability of the behaviour of untrained mice (O'Connor et al., 2010a), and the lack of trial-averaging as a noise reduction strategy, it is remarkable that we found model prediction correlation coefficients up to 0.88. A challenge of studying neural coding under unconstrained, awake conditions is that sensory variables tend to correlate. A valuable feature of the GLM training procedure is that it takes such correlations into account. We found that, although whisker angle predicted spikes for a subset of units, this effect was very largely explained by a curvature-coding model, together with the correlation between angle and curvature.
Mechanical framework for tactile coding
Pushing a whisker against an object triggers spiking in many PWNs (Szwed et al., 2003; Szwed et al., 2006; Leiser and Moxon, 2007). Biomechanical modelling by Hartmann and co-workers accounts for this by a framework where the whisker is idealised as an elastic beam, cantilever-mounted in the skin (Birdwell et al., 2007; Quist et al., 2014). When such a beam pushes against an object, the beam bends, causing reaction forces at its base. Our data are in striking agreement with the general suggestion that mechanoreceptor activity is closely related to such reaction forces. Our results show that curvature change associated with contact-induced whisker bending, and acceleration associated with whisker rotation, predict PWN spiking. Our results also provide a mechanical basis for previous findings: our finding of subtypes of curvature-only and curvature-acceleration PWNs is consistent with previous reports of 'touch' and 'whisking-touch' units (Szwed et al., 2003). Thus, a common framework accounts for diverse PWN properties. Our finding that whisker angle predicts PWN spikes poorly indicates that whisker angle can change without modulating mechanotransduction in the follicle. This is consistent with evidence that, during artificial whisking, the follicle-shaft complex moves as a rigid unit (Bagdasarian et al., 2013). In apparent contrast, previous studies using passive stimulation in anaesthetised animals have consistently reported a tight relationship between whisker kinematics and PWN response. In the cantilever whisker model, passively induced changes in whisker angle correlate highly with whisker bending. We confirmed that this applies to real whiskers in vivo and demonstrate that moment-sensitive units can thereby appear angle-tuned. In this way, moment-encoding can account for primary neuron responses not only during active touch but also under passive stimulation. More generally, our results highlight the importance of studying neurons under natural, active sensing conditions.
In this study, we considered PWN encoding under conditions of pole contact, since this is well-suited to reaction force estimation (O'Connor et al., 2010a; Pammer et al., 2013) and involves object-stimulus interactions on a ~100 ms time-scale that is conducive to single-trial analysis. Since whisker bending is ubiquitous in whisking behaviour, it is likely that our finding of curvature sensitivity is a general one. However, prediction performance varied across units, suggesting that other force components may also be encoded. Other experimental conditions, for example textured surfaces, may involve multiple force components (Quist and Hartmann, 2012; Pammer et al., 2013; Bagdasarian et al., 2013) and/or encoding of information by spike timing on a finer time-scale (Petersen et al., 2001; Arabzadeh et al., 2005; Bale et al., 2015).
It is axiomatic that mechanoreceptors are sensors of internal forces acting in the tissue within which they are embedded (Abraira and Ginty, 2013), and it is therefore valuable to be able to measure mechanical forces in the awake, behaving animal. In general, including the important case of primate hand-use, the complex biomechanics of skin makes force-estimation difficult (Phillips and Johnson, 1981). In contrast, for whiskers, the quasi-static relationship is relatively simple: the bending moment on a whisker is proportional to its curvature. This has the important implication that reaction forces can be directly estimated from videography in vivo (Birdwell et al., 2007; O'Connor et al., 2010a; Pammer et al., 2013). Our results are the first direct demonstration that such reaction forces drive primary sensory neuron responses, likely involving Piezo2 ion channels (Woo et al., 2014; Poole et al., 2015; Whiteley et al., 2015), and provide insight into how sensitivity to touch and self-motion arises in the somatosensory pathway (Szwed et al., 2003; Yu et al., 2006; Curtis and Kleinfeld, 2009; Khatri et al., 2009; O'Connor et al., 2010b; Huber et al., 2012; Petreanu et al., 2012; Peron et al., 2015).
Moment-based computations in tactile behaviour
Extraction of bending moment is a useful first step for many tactile computations. Large transients in bending moment signal object-touch events, and the magnitude of bending is inversely proportional to the radial distance of contact along the whisker (Solomon and Hartmann, 2006). As illustrated by our results on the statistics of active touch, if integrated with cues for whisker self-motion, whisker bending can be a cue to the 3D location of an object (Szwed et al., 2003; Birdwell et al., 2007; Bagdasarian et al., 2013; Pammer et al., 2013). Bending moment can permit wall following (Sofroniew et al., 2014) and, if integrated across whiskers, can in principle be used both to infer object shape (Solomon and Hartmann, 2006) and to map the spatial structure of the environment (Fox et al., 2012; Pearson et al., 2013).
Summary and conclusion
We have shown that the responses of primary whisker neurons can be predicted, during natural behaviour that includes active motor control of the sense organ, from forces acting on the whiskers. These results provide a bridge linking receptor mechanisms to behaviour.
Materials and methods
All experimental protocols were approved by both United Kingdom Home Office national authorities and institutional ethical review.
Surgical procedure
Mice (C57; N=10; 6 weeks at time of implant) were anesthetized with isoflurane (2% by volume in O2), mounted in a stereotaxic apparatus (Narishige, London, UK) and body temperature maintained at 37°C using a homeothermic heating system. The skull was exposed and a titanium head-bar (19.1 × 3.2 × 1.3 mm; O'Connor et al., 2010a) was first attached to the skull ~1 mm posterior to lambda (Vetbond, St. Paul, MN), and then fixed in place with dental acrylic (Lang Dental, Wheeling, IL). A craniotomy was made (+0.5 to -1.5 mm posterior to bregma, 0-3 mm lateral) and sealed with silicone elastomer. Buprenorphine (0.1 mg/kg) was injected subcutaneously for postoperative analgesia and the mouse left to recover for at least 5 days.
Behavioural apparatus
Mice were studied in a pole exploration apparatus adapted from O'Connor et al. (2010a), but were not trained on any task. A mouse was placed inside a perspex tube (inner diameter 32 mm), from which its head emerged at one end, and immobilised by fixing the head-bar to a custom mount holder. The whiskers were free of the tube at all times. The stimulus object was a 1.59 mm diameter metal pole, located ~3.5 mm lateral to the mouse's snout. To allow control of its anterior/posterior location, the pole was mounted on a frictionless linear slide (Schneeberger, Roggwil, Germany) and coupled to a linear stepper motor (NA08B30, Zaber, Vancouver, Canada). To allow vertical movement of the pole into and out of range of the whiskers, the pole/actuator assembly was mounted on a pneumatic linear slide (SLS-10-30-P-A, Festo, Northampton, UK), powered by compressed air. The airflow was controlled by a relay (Weidmüller, Richmond, VA). In this way, the pole moved rapidly (~0.15 s) into and out of range of the whiskers. The apparatus was controlled from Matlab via a real-time processor (RX8, TDT, Alachua, FL).
Electrophysiology
We recorded the activity of PWNs from awake mice in the following way. To permit reliable whisker tracking (see below), before each recording session, the A, B and E whisker rows were trimmed to the level of the fur, under brief isoflurane anaesthesia. The trigeminal ganglion was targeted as previously described (Bale et al., 2015). The silicone seal was removed and a 3/4 shank tungsten microelectrode array (FHC, Bowdoin, ME; recording electrodes 8 MΩ at 1 kHz, reference 1 MΩ; tip spacing ~500 μm) was lowered through the brain (angle 4° to vertical in the coronal plane) using a micromanipulator (PatchStar, Scientifica, Uckfield, UK) under isoflurane anaesthesia. Extracellular potentials were pre-amplified, digitised (24.4 kHz), filtered (band pass 300-3000 Hz) and acquired continuously to hard disk (RZ5, TDT). The trigeminal ganglion was encountered 6-7 mm vertically below the pial surface and whisker-responsive units identified by manual deflection of the whiskers with a small probe. Once a well-isolated unit was found, the whisker that it innervated (the 'principal whisker', PW) was identified by manual stimulation. To define the PW, we deflected not only untrimmed whiskers, but also the stubs of the trimmed whiskers. Any unit whose PW was a trimmed whisker was ignored. At this point, anaesthesia was discontinued. Once the mouse was awake, we recorded neuronal activity during repeated presentations of the pole ('trials'). Before the start of each trial, the pole was in the down position, out of reach of the whiskers. The pole was first moved anterior-posteriorly to a position chosen randomly out of a set of 11 possible positions, spanning a range of ±6 mm with respect to the resting position of the base of the PW. A trial was initiated by activating the pneumatic slide relay, thus moving the pole up into the whisker field, where it remained for 3 s before being lowered. At the end of a recording session, the microelectrode array was withdrawn, the craniotomy sealed with silicone elastomer, and the mouse returned to its home cage.
High-speed videography
Using the method of O'Connor et al. (2010a) to image whisker movement/shape, whiskers ipsilateral to the recorded ganglion were illuminated from below using a high-power infrared LED array (940 nm; LED 940-66-60, Roithner, Vienna, Austria) via a diffuser and condensing lens. The whiskers were imaged through a telecentric lens (55-349, Edmunds Optics, Barrington, NJ) mounted on a high-speed camera (LTR2, Mikrotron, Unterschleissheim, Germany; 1000 frames/s, 0.4 ms exposure time). The field of view of the whiskers was 350×350 pixels, with pixel width 0.057 mm.
Response to touch and non-touch events
Mouse whisking behaviour during the awake recording was segmented into 'touch' and 'non-touch' episodes. Touches between the PW of each unit and the pole were detected manually in each frame of the high-speed video. A frame was scored as touch if no background pixels were visible between the pole silhouette and the whisker. Any frame not scored as a touch was scored as non-touch. Touch and non-touch firing rates for a given unit were computed by averaging activity over all corresponding episodes.
Whisker tracking
Since the trigeminal ganglion lacks topography, it is difficult to target units that innervate a specific whisker, and it is therefore desirable for a whisker tracker to be robust to the presence of multiple rows of whiskers. However, since neurons in the ganglion innervate individual whiskers, it is sufficient to track only one whisker (the PW) for each recorded neuron. To extract kinematic/mechanical whisker information, we therefore developed a whisker tracker ('WhiskerMan'; Bale et al., 2015) whose design criteria, different to those of other trackers (Perkon et al., 2011; Clack et al., 2012), were to: (1) be robust to whisker cross-over events; (2) track a single, target whisker; (3) track the proximal segment of the whisker shaft. The shape of the target whisker segment was described by a quadratic Bezier curve r(t,s) (a good approximation away from the zone of whisker-object contact; Quist and Hartmann, 2012; Pammer et al., 2013): r(t,s) = [x(t,s), y(t,s)], where x, y are horizontal/vertical coordinates of the image, s ∈ [0,1] parameterises (x,y) location along the curve, and t is time. We fitted such a Bezier curve to the target whisker in each image frame using a local, gradient-based search. The initial conditions for the search were determined by extrapolating the solution curves from the previous two frames, assuming locally constant angular velocity. The combination of the low-parameter whisker description and the targeted, local search made the algorithm robust to whisker cross-over events. The 'base' of the target whisker was defined as the intersection between the extrapolated Bezier curve and the snout contour (estimated as described in Bale et al., 2015). The solution curve in each frame was visually checked and the curves manually adjusted to correct occasional errors.
Estimation of kinematic/force parameters
The whisker angle (θ) in each frame was measured as the angle between the tangent to the whisker curve at the base and the anterior-posterior axis (Figure 1C). Whisker curvature (κ) was measured at the base as κ = (x′y″ − x″y′) / (x′² + y′²)^(3/2), where x′, y′ and x″, y″ are the first and second derivatives of the functions x(s) and y(s) with respect to s (Figure 1C). Since reaction force at the whisker base reflects changes in whisker curvature, rather than the intrinsic (unforced) curvature (Birdwell et al., 2007), we computed 'curvature change' Δκ = κ − κ_int, where κ_int, the intrinsic curvature, was estimated as the average of κ in the first 100 ms of the trial (before pole contact; O'Connor et al., 2010a). During free whisking, whisker angle oscillated with the characteristic whisking rhythm, but curvature changed little. The small changes in whisker curvature during free whisking were consistent with torsional effects. We estimated the number of whisking cycles from the duration of touch/non-touch episodes and the whisking frequency: median 419 whisking cycles per unit during touch periods; 415 during non-touch periods.
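As a minimal illustration of this computation, the sketch below evaluates the curvature formula at the base (s = 0) of a quadratic Bezier curve and forms Δκ against a pre-contact baseline. The function names and the 100-frame baseline default (the first 100 ms at 1000 frames/s) are illustrative choices, not taken from the authors' WhiskerMan code.

```python
import numpy as np

def bezier_curvature_at_base(p0, p1, p2):
    """Signed curvature kappa = (x'y'' - x''y') / (x'^2 + y'^2)^(3/2) at s = 0
    for the quadratic Bezier r(s) = (1-s)^2 p0 + 2s(1-s) p1 + s^2 p2."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    d1 = 2.0 * (p1 - p0)              # r'(0)  = (x', y') at the base
    d2 = 2.0 * (p0 - 2.0 * p1 + p2)   # r''(0) = (x'', y''), constant for a quadratic
    num = d1[0] * d2[1] - d2[0] * d1[1]
    den = (d1[0] ** 2 + d1[1] ** 2) ** 1.5
    return num / den

def curvature_change(kappa, n_baseline=100):
    """Delta-kappa = kappa - kappa_int, with the intrinsic curvature estimated
    as the mean over the pre-contact baseline frames."""
    kappa = np.asarray(kappa, dtype=float)
    return kappa - kappa[:n_baseline].mean()
```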
Under conditions of whisking against a smooth surface, such as in the present study, the quasi-static framework of Birdwell et al. (2007) applies. Δκ, measured at the base of a whisker in the horizontal plane, is proportional to the component of bending moment in that plane. We used Δκ as a proxy for bending moment. Bending moment (M), axial force (F_ax) and lateral force (F_lat) at the whisker base were calculated, during periods of whisker-pole contact, using the method of Pammer et al. (2013), using published data on the areal moment of inertia of mouse whiskers (Pammer et al., 2013), along with whisker-pole contact location (see Figure 1-figure supplement 2 for details). Pole location, in the horizontal plane, in each frame, was identified as the peak of a 2D convolution between the video image and a circular pole template. To localise whisker-pole contact, the whisker tracker was used to fit the distal segment of the whisker close to the pole, seeded by extrapolation from the whisker tracking solution for the proximal whisker segment, described above. Whisker-pole contact location was defined as the point where this distal curve segment was closest to the detected pole centre. Pole and contact locations were verified by visual inspection.
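A sketch of the template-matching step is shown below, assuming a dark pole silhouette on a bright background; the function name, the zero-mean template, and the contrast flip are assumptions of this sketch, not details from the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def locate_pole(frame, pole_radius_px):
    """Locate the pole centre as the peak of a 2D cross-correlation between
    the video frame and a circular pole template."""
    yy, xx = np.mgrid[-pole_radius_px:pole_radius_px + 1,
                      -pole_radius_px:pole_radius_px + 1]
    template = (xx ** 2 + yy ** 2 <= pole_radius_px ** 2).astype(float)
    template -= template.mean()                 # zero-mean: ignore brightness offsets
    signal = frame.max() - frame.astype(float)  # flip contrast: dark pole becomes bright
    corr = fftconvolve(signal, template[::-1, ::-1], mode="same")  # cross-correlation
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    return ix, iy                               # pole centre in pixel coordinates
```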
As expressed by Newton's second law of rotational motion, the moment (or torque) on a rigid body rotating in a plane is proportional to the body's angular acceleration. During free whisking, a whisker behaves approximately as a rigid body and, for the whiskers considered in this study, their motion is predominantly in the horizontal plane (Bermejo et al., 2002; Knutsen et al., 2008). Thus, to assess whether such a moment is encoded by PWNs, we measured angular whisker acceleration during free whisking as a proxy. Acceleration was calculated from the whisker angle time series after smoothing with a Savitzky-Golay filter (polynomial order 5; frame size 31 ms).
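This smoothing-plus-differentiation step can be done in a single pass with SciPy's Savitzky-Golay filter, as sketched below; only the filter settings (polynomial order 5, 31-sample frame at 1 kHz) come from the text, and the wrapper itself is illustrative.

```python
from scipy.signal import savgol_filter

def angular_acceleration(theta_deg, fs=1000.0):
    """Angular acceleration (deg/s^2) from a whisker-angle time series,
    estimated as the second derivative of a Savitzky-Golay fit
    (order 5, 31-sample frame = 31 ms at 1000 frames/s)."""
    return savgol_filter(theta_deg, window_length=31, polyorder=5,
                         deriv=2, delta=1.0 / fs)
```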
Push angle, the change in angle as a whisker pushes against an object, was measured during touch epochs. For each touch episode, we determined the value of the angle in the frame before touch onset and subtracted this from the whisker angles during the touch.
Passive whisker deflection
To determine how whiskers move/bend in response to passive deflection under anaesthesia, a mouse was anesthetized (isoflurane 2%) and placed in the head-fixation apparatus. Individual whiskers (C2 and C5, trimmed to 5 mm) were mechanically deflected using a piezoelectric actuator as previously described (Bale et al., 2013). All other whiskers were trimmed to the level of the fur. Each whisker, in turn, was inserted into a snugly fitting plastic tube attached to the actuator, such that the whisker entered the tube 2 mm from the face. Two stimuli were generated via a real-time processor (TDT, RX8): (1) a 10 Hz trapezoidal wave (duration 3 s, amplitude 8°); (2) Gaussian white noise (duration 3 s, smoothed by convolution with a decaying exponential: time constant 10 ms; amplitude SD 2.1°). During the stimulation, the whiskers were imaged as detailed above (1000 frames/s, 0.2 ms exposure time).
Electrophysiological data analysis
Spike sorting
Single units (N=20) were isolated from the extracellular recordings as previously described, by thresholding and clustering in the space of 3-5 principal components using a mixture model (Bale and Petersen, 2009). A putative unit was only accepted if (1) its inter-spike interval histogram exhibited a clear absolute refractory period and (2) its waveform shape was consistent between the anaesthetised and awake phases of the recording.
Responses to whisking without touch
To test whether a unit responded to whisking itself, we extracted non-touch episodes as detailed above and computed time series of whisking amplitude and phase by band-pass filtering the whisker angle time series (6-30 Hz) and computing the Hilbert transform (Kleinfeld and Deschênes, 2011). Amplitudes were discretised (30 equi-populated bins) and the spiking data used to compute amplitude tuning functions. Phases for bins where the amplitude exceeded a given threshold were discretised (8 equi-populated bins) and used to construct phase tuning functions. To determine whether a unit was significantly amplitude-tuned, we fitted a regression line to its amplitude tuning curve and tested whether the slope was statistically significantly different to 0 (p=0.0025, Bonferroni-corrected). To determine whether a unit was significantly phase-tuned, we computed the maximum value of its phase tuning curve and compared this to the distribution of maxima of chance tuning functions. Chance tuning functions were obtained by randomly shifting the recorded spike sequences by 3000-8000 ms and recomputing tuning functions (500 times). A unit was considered phase-tuned if its tuning function maximum (computed using an amplitude threshold of 2°) exceeded the 95th percentile of the shuffled distribution.
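A sketch of this amplitude/phase decomposition is given below. The Butterworth filter and its order are assumptions (the text specifies only the 6-30 Hz band), the function name is illustrative, and the binning and shuffle statistics are omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def whisking_amplitude_phase(theta_deg, fs=1000.0, band=(6.0, 30.0)):
    """Instantaneous whisking amplitude and phase from a whisker-angle series:
    band-pass filter in the whisking band, then take the Hilbert transform.
    The envelope of the analytic signal gives amplitude; its angle gives phase."""
    b, a = butter(2, band, btype="bandpass", fs=fs)  # order 2 is an assumption
    filtered = filtfilt(b, a, theta_deg)             # zero-phase filtering
    analytic = hilbert(filtered)
    amplitude = np.abs(analytic)
    phase = np.angle(analytic)                       # radians in (-pi, pi]
    return amplitude, phase
```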
Acceleration tuning curves were quantified, for each unit, as follows. First, an acceleration tuning curve was estimated (as above). Units typically responded to both positive and negative accelerations, but with unequal weighting between them. To capture this, we fitted a regression model of the form r_i = m_0 + m_1·a_i² + m_2·D_i·a_i² to the tuning curve. Here, for each bin i of the tuning curve, r_i was the firing rate and a_i was the acceleration; m_0, m_1 and m_2 were regression coefficients; the term D_i (D_i = 1 if a_i < 0, D_i = 0 otherwise) allowed for asymmetric responses to negative and positive acceleration. Based on its best-fitting regression coefficients (p=0.05), units were classified as: having 'preference for negative acceleration', if m_2 was significantly >0; having 'preference for positive acceleration', if m_2 was significantly <0; as having 'no preferred direction' if m_1 was significantly >0 and m_2 was not significantly different from 0; and as 'not acceleration sensitive' if neither m_1 nor m_2 was significantly different from 0.
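The model above can be fitted by ordinary least squares, as in the sketch below; the quadratic design matrix mirrors the model form given in the text (itself reconstructed from the surrounding description), the coefficient significance tests used for classification are omitted, and the function name is illustrative.

```python
import numpy as np

def fit_acceleration_tuning(acc, rate):
    """Least-squares fit of r_i = m0 + m1*a_i^2 + m2*D_i*a_i^2,
    with D_i = 1 when a_i < 0 (asymmetry term)."""
    acc = np.asarray(acc, dtype=float)
    rate = np.asarray(rate, dtype=float)
    D = (acc < 0).astype(float)
    X = np.column_stack([np.ones_like(acc), acc ** 2, D * acc ** 2])
    coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
    return coef  # [m0, m1, m2]
```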
Generalised Linear Model (GLM)
To investigate how well PWNs encode a given sensory variable (e.g., whisker angle, curvature), we fitted single unit activity to a GLM (Nelder and Wedderburn, 1972; Truccolo et al., 2005; Paninski et al., 2007), using methods similar to Bale et al. (2013). For each unit, a 'stimulus' time series (x) (whisker angle or whisker curvature change) and a simultaneously recorded spike time series (n) were discretized into 1 ms bins: x_t and n_t denote respectively the stimulus value and spike count (0 or 1) in bin t.
GLMs express how the expected spike count of a unit depends both on the recent stimulus history and on the unit's recent spiking history. The standard functional form of the model we used was:

y_t = f(k · x_t + h · n_t + b)

Here n_t, the output in bin t, was a Bernoulli (spike or no-spike) random variable. The probability of a spike in bin t, y_t, depended on three terms: (1) the dot product between the stimulus history vector x_t = (x_{t−Lk+1}, ..., x_t) and a 'stimulus filter' k (length L_k = 5); (2) the dot product between the spike history vector n_t = (n_{t−Lh+1}, ..., n_t) and a 'spike history filter' h (length L_h = 2); (3) a constant bias b, which sets the spontaneous firing rate. f(.) was the logistic function f(z) = (1 + e^(−z))^(−1). The preferred direction of the GLM is determined by the sign of the stimulus filter: positive (negative) k coefficients tend to make positive (negative) stimuli trigger spikes. Since we found that GLM performance was just as good with L_k = 1 as L_k = 5 (Figure 2-figure supplement 1C), we used results from the L_k = 1 case to define selectivity to curvature change direction: positive k implies selectivity for positive curvature change; negative k, selectivity for negative curvature change. When a whisker pushed against an object during protraction, curvature increased; when it pushed against an object during retraction, it decreased. To consider whether units might encode multiple sensory variables (e.g., both whisker angle and whisker curvature change), we used a GLM with multiple stimulus history terms, one for each sensory variable:

y_t = f(k_1 · x_{1,t} + k_2 · x_{2,t} + h · n_t + b)

Here the indices 1, 2 label the sensory variables.
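For concreteness, the sketch below draws spikes from the single-variable model form above. The filter alignment convention and the use of strictly preceding bins for the spike history are implementation choices of this sketch, not specifications from the paper.

```python
import numpy as np

def simulate_glm(x, k, h, b, rng=None):
    """Draw a Bernoulli spike train from the logistic GLM
    y_t = f(k . x_hist + h . n_hist + b), f(z) = 1 / (1 + exp(-z)).

    x: stimulus in 1-ms bins; k: stimulus filter (most recent bin last);
    h: spike-history filter over the preceding len(h) bins; b: bias."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    Lk, Lh = len(k), len(h)
    n = np.zeros(len(x), dtype=int)
    for t in range(len(x)):
        xh = x[max(0, t - Lk + 1): t + 1]   # stimulus history up to bin t
        z = b + np.dot(k[-len(xh):], xh)
        nh = n[max(0, t - Lh): t]           # spikes in the preceding bins
        if len(nh):
            z += np.dot(h[-len(nh):], nh)
        n[t] = int(rng.random() < 1.0 / (1.0 + np.exp(-z)))
    return n
```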
Training and testing of the GLM were done using a cross-validation procedure. For each unit, half of the trials were assigned randomly to a training set and half to a testing set. The training set was used to fit the parameters (k, h and b), while the testing set was used to quantify the similarity between the spike train of the recorded unit and that predicted by the GLM. GLM fitting was achieved by finding the parameter values (k, h and b) which minimized a cost function consisting of the sum of the negative log-likelihood and a regularizing term α‖k‖². For all units, model prediction performance on the test set was robust to variation of α over several orders of magnitude: α was therefore set to a standard value of 0.01. To quantify the performance of the model, the sensory time series of the testing set was used as input to the best-fitting GLM to generate a 'predicted' spike train in response. Both real and predicted spike trains were then smoothed by convolution with a 100 ms box-car filter and the similarity between them quantified by the Pearson correlation coefficient (PCC). For each unit, the entire training/testing procedure was repeated for 10 random choices of training/testing set and the final prediction accuracy defined as the median of the 10 resulting PCC values. Data from these 10 samples were also used to test whether an individual unit exhibited statistically significant prediction performance for different sensory features. To test whether the results were robust to the smoothing time-scale, the above procedure was repeated for a range of box-car smoothing filters (1, 5, 10, 20, 50, 70 ms). To test whether a given 'actual' PCC was statistically significant, we tested the null hypothesis that it could be explained by random firing at the same time-averaged rate as that of the recorded unit. To this end, the recorded spike sequences were randomly shifted by 3000-8000 ms and the training/testing procedure above applied to this surrogate data. This was repeated 10 times and the resulting chance PCCs compared to the actual PCC using a signed-rank test, p=0.0025 (Bonferroni-corrected). This analysis was used to classify units as 'curvature-sensitive'.
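The evaluation step can be sketched as below. Note that for the chance level the authors refit the GLM on each time-shifted spike train, whereas this simplified sketch (with illustrative function names) only recomputes the PCC on shifted spikes.

```python
import numpy as np

def prediction_pcc(spikes_true, spikes_pred, win_ms=100):
    """Single-trial prediction accuracy: Pearson correlation between recorded
    and predicted spike trains (1-ms bins), each smoothed with a boxcar filter."""
    box = np.ones(win_ms) / win_ms
    a = np.convolve(spikes_true, box, mode="same")
    b = np.convolve(spikes_pred, box, mode="same")
    return np.corrcoef(a, b)[0, 1]

def chance_pcc(spikes_true, spikes_pred, rng, n_shuffles=10):
    """Chance PCCs from spike trains randomly time-shifted by 3000-8000 ms
    (simplification: no refitting of the GLM on the shifted data)."""
    shifts = rng.integers(3000, 8001, size=n_shuffles)
    return np.array([prediction_pcc(np.roll(spikes_true, s), spikes_pred)
                     for s in shifts])
```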
Quadratic GLM
To test whether the units might exhibit nonlinear dependence on the stimulus parameters, we adapted the GLM defined above (Equation 1) to include quadratic stimulus variables (Rajan et al., 2013). This was important to assess whisker angular acceleration during free whisking, since a subset of units exhibited U-shaped acceleration tuning functions (Figure 3B). Given a stimulus time series $x_t$, the quadratic stimulus history vector was $[x_{t-L_k+1}, \ldots, x_t, x^2_{t-L_k+1}, \ldots, x^2_t]$. Fitting methods were otherwise identical to those detailed above.
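The construction of the quadratic stimulus history vector amounts to the following short sketch (ours, assuming the window lies fully inside the series):

```python
import numpy as np

def quadratic_stimulus_history(x, t, Lk):
    """Stimulus history window augmented with squared terms:
    [x_{t-Lk+1}, ..., x_t, x^2_{t-Lk+1}, ..., x^2_t].
    Assumes t >= Lk - 1 so the window is fully inside the series."""
    window = np.asarray(x[t - Lk + 1: t + 1], dtype=float)
    return np.concatenate([window, window ** 2])
```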
Effect of angle-curvature correlations on apparent neuronal stimulus encoding in the passive stimulation protocol
If, in a given recording, sensory variable X correlates with sensory variable Y, a neuron responsive purely to X will tend to appear tuned to Y. To investigate whether such an effect might produce apparent sensitivity to whisker angle in the passive stimulation paradigm, we simulated the response of curvature-tuned neurons to the whisker curvature change time series measured during passive white noise stimulation. To minimize free parameters, constrained GLMs (4 free parameters) were used, sensitive either to instantaneous curvature ($\vec{k} = [g]$) or to its first order derivative ($\vec{k} = g[-1\ 1]$), where $g$ was a signed gain parameter. Parameters ($\vec{h}$, $b$, $g$) were adjusted to produce two spike trains (one for training, the other for testing) with a realistic white-noise-induced firing rate (~50 spikes/s; Bale et al., 2013). We then attempted to predict the simulated, curvature-evoked (training) spike train by fitting GLMs (length 5 stimulus filter, 8 free parameters) using as input either angle or curvature change. Cross-validated model accuracy was computed as the PCC between the predicted spike train and the testing spike train (both smoothed by convolution with a 5 ms boxcar).
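For reference, the two constrained filters reduce to the following trivial sketch (ours; g is the signed gain from the text):

```python
import numpy as np

def constrained_filters(g):
    """The two constrained stimulus filters used in the simulation above:
    sensitivity to instantaneous curvature (k = [g]) or to its first-order
    derivative (k = g[-1 1], a discrete d/dt over adjacent bins)."""
    k_instantaneous = np.array([g])
    k_derivative = g * np.array([-1.0, 1.0])
    return k_instantaneous, k_derivative
```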
Effect of single-trial approach on GLM prediction performance
The objective of encoding models, such as GLMs, is to obtain an accurate description of the mapping between a stimulus and the neuronal spike trains it evokes. Since the random component of a neuron's response is inherently unpredictable, the best any model can do is to predict the probability of the spike train. To enable this, encoding models have generally (with few exceptions; Park et al., 2014) been applied to a 'repeated-trials' paradigm, where a stimulus sequence (e.g., frozen white noise) is repeated on multiple 'trials' (Arabzadeh et al., 2005; Lottem and Azouz, 2011; Bale et al., 2013; Petersen et al., 2008; Pillow et al., 2008). Model accuracy can then be quantified, largely free of contamination from random response variability, by comparing (using PCC or otherwise) the trial-averaged response of the model to the trial-averaged response of the neuron.
In contrast, in the present study of awake, actively whisking mice, the precise stimulus (time series of whisker angle/curvature) was inevitably different on every pole presentation: there were no precisely repeated trials to average over. Our standard model performance metric (PCC) was computed by comparing the response on a single long, concatenated 'trial' with the corresponding GLM predicted response. Such a PCC is downwards biased by random response variability.
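To illustrate this bias concretely, the following toy sketch (ours, not part of the study's analysis) contrasts a single-trial PCC with a repeated-trials PCC for any stochastic spike generator, such as the simulate_glm sketch above:

```python
import numpy as np
from scipy.stats import pearsonr

def bias_demo(glm_sample, stimulus, n_trials=100):
    """Contrast single-trial and repeated-trials PCCs. `glm_sample` is any
    callable returning one simulated spike train for `stimulus` per call."""
    trials = np.array([glm_sample(stimulus) for _ in range(n_trials)])
    half = n_trials // 2
    # single-trial metric: one noisy trial against another noisy trial
    single_trial_pcc = pearsonr(trials[0], trials[1])[0]
    # repeated-trials metric: PSTH of one half vs PSTH of the other half;
    # averaging suppresses the random response variability
    psth_a, psth_b = trials[:half].mean(axis=0), trials[half:].mean(axis=0)
    repeated_trials_pcc = pearsonr(psth_a, psth_b)[0]
    return single_trial_pcc, repeated_trials_pcc
```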
To gauge the approximate magnitude of this downward bias, we used a simulation approach. By simulating the response of model neurons, we could deliver identical, repeated trials and thereby compare model prediction performance by a metric based on trial-averaging with that based on the single-trial approach. To this end, for each recorded unit, we used the best-fitting curvature change GLM to generate 100 trials of spike trains evoked by the curvature time series measured for that unit. Data from the first of these trials was used to fit the parameters of a minimal 'refitted GLM' (stimulus filter length 1, spike history filter length 2; bias; total 4 free parameters), and the single-trial performance quantified, using the approach of the main text (Figure 2-figure supplement 1B, left). Next, we used the refitted GLM to generate 100 repeated trials of spike trains evoked by the curvature time series. Repeated-trials performance was then quantified as the PCC between PSTHs obtained by trial-averaging (Figure 2-figure supplement 1B, right). | 2016-10-26T03:31:20.546Z | 2015-08-10T00:00:00.000 | {
"year": 2016,
"sha1": "70ffe41ae70551c84b9d0e20d745f5cd279b2879",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.10696",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f8334927c9a41687d1f9bfeeaf25176c5f613305",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
73491435 | pes2o/s2orc | v3-fos-license | Influence of Musculoskeletal System Dysfunction Degree on Psychophysiological Indicators of Paralympic Athletes
The purpose of the work was to identify the influence of functional class and degree of damage to the extremities on psychophysiological indicators of Paralympians. The study involved 33 elite athletes with musculoskeletal system disorders of functional classes 6 (n = 15) and 10 (n = 18) in table tennis, aged 21-25 years old. Parameters characterizing the psychophysiological state and the typological features of the nervous system were analyzed with the help of computer programs for psychophysiological testing. We determined the latent time of simple and complex reactions in different testing modes. We applied single-factor multivariate analysis of variance: one-way ANOVA and the General Linear Model (Multivariate). The indicators of psychophysiological testing were used as dependent variables. The functional class of athletes was used as the independent variable. To study the influence of the damage degree of the upper or lower extremities on psychophysiological indicators, the extremity damage degree was applied as an independent variable. Athletes in Paralympic functional class 6 took significantly longer to reach the minimum signal exposure in feedback mode than athletes in functional class 10 (p < 0.05). When comparing psychophysiological indicators with Paralympians divided into groups more differentiated than functional classes (that is, according to the nature of the disease or the degree of limb lesions), significant differences were found in all psychophysiological indicators between the athletes of different groups. Lesions of the lower extremities had the greatest impact on psychophysiological indicators. The training of Paralympians in table tennis should take reaction rate indicators into account. In addition, when improving the functional classification of Paralympians in table tennis, a more differentiated approach should be taken to their capabilities, including psychophysiological indicators. During the training and functional classification of Paralympic athletes in table tennis, it is important to consider their functional class as well as the degree of damage to the upper and lower extremities and the level of psychophysiological functioning.
Paralympic athletes differ in the nature of their disease, the volume of affected muscles, and the functioning of the nervous system. These differences considerably influence the features of the training process and competitive activities. For the rational arrangement of the table tennis training process for Paralympians, their functional class, the nature of the disease, and the volume of affected muscles need to be considered.
In the literature, data have been reported concerning the interaction of muscles and the nervous system [31,32]. An important challenge is to determine whether Paralympians with different degrees of musculoskeletal system damage differ in psychophysiological indicators. To do this, elite Paralympians in various functional classes can be compared according to the classification in Paralympic table tennis. However, since Paralympians differ not only in functional class but also in the nature of the musculoskeletal system disorder and the volume of affected muscles, athletes should also be compared according to these characteristics.
The degree of musculoskeletal system disorder is determined by different scales [33,34]. Muscle strength [35], balance [36], general functionality [37], and risk of falls [38,39] are also considered. As such, Paralympians representing not only different functional classes but also different types of musculoskeletal system damage should be compared. Identifying the features of the psychophysiological functions of Paralympians specializing in table tennis who have different levels and types of musculoskeletal system damage will allow the creation of a more precise and individualized training process. These data may also be useful for improving the functional classification of Paralympians.
Among the main psychophysiological indicators are the reaction rate in various testing modes and the typological features of the nervous system. Based on the analyzed literature, our hypothesis was the following: psychophysiological indicators differ in Paralympians with different levels of musculoskeletal system damage. The purpose of the work was to identify the influence of the functional class and the degree of damage to the extremities on psychophysiological indicators of Paralympians.
Participants
The study involved 33 elite male athletes with musculoskeletal system disorders of functional classes 6 (n = 15) and 10 (n = 18) in table tennis, aged 21-25 years. The study was carried out in accordance with the principles of the Helsinki Declaration and approved by the Ethics Committee of the H.S. Skovoroda Kharkiv National Pedagogical University, Kharkiv, Ukraine.
Characteristics of Athletes
The minimum impairment criteria applied to athletes who competed in a standing position with cerebral palsy, amputations, and other lesions of the musculoskeletal system [30] (Appendix A).
Division of Athletes According to Upper and Lower Limb Activity Degree
In addition to the functional classification, we also divided the athletes according to the degree of upper and lower limb activity. This division distinguished athletes with a predominant upper limb impairment from those with a predominant lower limb impairment, because these impairments are associated with different parts of the central nervous system.
Instruments in the Study
We developed a special scale based on existing scales for assessing musculoskeletal system disorders that are applied in rehabilitation.
The scale used for assessing muscular strength was the Scale for Assessing Muscle Strength [35]. Patterns of weakness can help localize a lesion to a particular cortical or white matter region, spinal cord level, nerve root, peripheral nerve, or muscle. This scale involves testing the strength of each muscle group and recording it in a systematic fashion. The testing of each muscle group should be immediately paired with testing of its contralateral counterpart to enhance detection of any asymmetries. Muscle strength is often rated on a scale of 0 to 5.
The scale we used for balance assessment was the Berg Balance Scale (BBS) [36]. The BBS is a qualitative measure that assesses balance via the performance of functional activities, such as reaching, bending, transferring, and standing, and incorporates most components of postural control: sitting and transferring safely between chairs; standing with feet apart, feet together, in single-leg stance, and with feet in the tandem Romberg position with eyes open or closed; and reaching and stooping down to pick something off the floor. Each item is scored on a 5-point scale, ranging from 0 to 4, each grade with well-established criteria. Zero indicates the lowest level of function and 4 the highest level of function. The total score ranges from 0 to 56. The BBS is reliable (both inter- and intra-tester) and has concurrent and construct validity (validity coefficients = 0.78 and 0.74, respectively) [36].
The scale we used for functional capacity determination was the Functional Independence Measure (FIM) [37]. The FIM is a rating system of a patient's ability to perform self-care, sphincter control, mobility, locomotion, communication, social adjustment, and cognition tasks, each of which is rated on a scale between 1 and 7 points depending on the specific degree of assistance required for each task.
The scale we used for the risk of falls was the Fall Effect Scale [38]. The Falls Scale for the Older Person is an assessment tool designed to identify the older person's awareness of and practice of behaviors that could potentially protect against falling. People who do not use protective behaviors are potentially at risk of falling, in particular if they are in a group with risk factors for falls such as declining function. The Fall Effect Scale can be self-administered by the older person or administered through an interview, usually taking about 5 to 10 min to complete. It can also be mailed to the person prior to a home visit. Respondents are encouraged to provide a rating (Never, Sometimes, Often, or Always) for each statement and to avoid the "Does not apply" category unless absolutely necessary. This is why we tried to offer "Does not apply" only for those items where it was a possibility.
The Ashworth scale measures resistance during passive soft-tissue stretching and is used as a simple measure of spasticity [39]. The scale ranges from 0 to 4, where 0 denotes no increase in muscle tone; 1 denotes a slight increase in muscle tone, manifested by a catch and release or by minimal resistance at the end of the range of motion when the affected part(s) is (are) moved in flexion or extension; 1+ is a slight increase in muscle tone, manifested by a catch, followed by minimal resistance throughout the remainder (less than half) of the range of motion; 2 is a more marked increase in muscle tone through most of the range of motion, but the affected part(s) easily move(s); 3 is a considerable increase in muscle tone, with passive movement difficult; and 4 denotes affected part(s) rigid in flexion or extension.
We also used the Dynamic Gait Index (DGI) [40]. The DGI is used to assess gait, balance, and fall risk in elderly patients by evaluating usual steady-state walking and walking during more challenging tasks. The intended population is the elderly, stroke patients, and vestibular disorder patients who display poor balance and are at risk of falling. The gait index comprises 8 functional walking tests performed by the patient, each scored on a scale of 0 to 3. Scores of 19 or less are related to an increased incidence of falls. The highest score achievable is 24, which indicates a safe ambulator. Each item is scored with the lowest category that applies. Time for completion of the exam is 15 min.
As a result of our analysis of the existing assessment scales for musculoskeletal system functions applied in rehabilitation, a comprehensive scale was developed to assess the nature of the musculoskeletal system damage and the volume of muscle with impaired function (Appendix B). The group with musculoskeletal system disorders assessed as "1" included six athletes; "2" included nine athletes; "3" included three athletes; "5" included six athletes; "8" included three athletes; and "9" included six athletes. There were no athletes in our study with degrees of musculoskeletal system dysfunction assessed as 4, 6, 7, or 10 points. Paralympians were tested on the level of psychophysiological functioning. The obtained data were mathematically processed to identify the effect of the functional class of athletes and of the nature of the musculoskeletal system disorder (the volume of affected muscle) on psychophysiological functions in two ways: (1) the influence of the functional class of athletes (athletes playing standing; functional classes 10 and 6 were compared) (scale, Appendix A) and (2) the influence of the nature of the musculoskeletal system disorder and the volume of muscle with impaired function (scale, Appendix B).
We evaluated the physical abilities of the Paralympians on this scale during the standard medical examination of athletes before international competitions. We used the data of the 2016 Paralympics. This procedure is standard for all Paralympians.
Methods and Organization of Research
The experiment was conducted in March 2018. To determine the psychophysiological state of athletes during the first and last week of the experiment, psychophysiological indicators were recorded using the computer program Psychodiagnostics (H.S. Skovoroda Kharkiv National Pedagogical University, Kharkiv, Ukraine) [41][42][43]. The following parameters were recorded: (1) Time of a simple visual-motor reaction. Images appear on the monitor screen, and the subject should click the left mouse button as soon as he sees the image. The subject performs 30 attempts. The average reaction time (ms), the standard deviation (ms), and the number of errors are recorded. (2) Choice reaction time (Choice reaction 2-3). Images appear on the monitor screen. The subject must press the left mouse button as soon as he sees the image of a geometric figure and the right mouse button as soon as he sees the image of an animal. When other images appear, he does not need to click a mouse button. The subject performs 30 attempts. The average reaction time (ms), the standard deviation (ms), and the number of errors are recorded. (3) Time of a complex visual-motor reaction in the feedback mode. The subject must press the left mouse button as soon as he sees the image of a geometric figure and the right mouse button as soon as he sees the image of an animal. When other images appear, the subject does not need to click a mouse button. The faster the subject reacts, the faster the next image appears.
The average reaction time (ms), the standard deviation (ms), and the number of errors are recorded. In addition, the smallest time the image stays on the screen (the minimum signal exposure time, ms) is recorded, as is the time from the start of the test to the subject reaching the peak of the reaction in this test (time to reach the minimum signal exposure, s).
A complex of parameters of a compound visual-motor choice reaction (two of three elements) was measured in feedback mode; that is, as the reaction time changes, the time of signal delivery changes. The "short version" was used in the feedback mode: the exposure time varies automatically depending on the corresponding reactions of the subject. After a correct answer, the duration of the next signal is reduced by 20 ms; after a wrong one, the next signal is lengthened by the same amount. The range of the signal exposure change during the test is 20 to 900 ms, with a pause of 200 ms between exposures. A correct answer involves pressing the left (right) mouse button while a certain image is displayed, or during the pause after the current exposure. In this test, the time to reach the minimum signal exposure (the time from the start of the test to the subject reaching the peak of the reaction) and the minimum signal exposure time (the smallest time the image stays on the screen) reflect the functional mobility of the nervous processes (the ability to respond quickly to changing situations). The number of errors reflects the strength of the nervous processes; the lower the value, the higher the mobility and strength of the nervous system. The duration of the initial exposure is 900 ms; the magnitude of the change in signal duration after correct and erroneous reactions is 20 ms (if the subject reacts to the next signal faster than to the previous one, the time the next image stays on the screen is reduced by 20 ms); the pause between the presentation of signals is 200 ms; and the number of signals is 50. The following indicators are recorded: the average value of the latent period (ms), deviation (ms), number of errors, test run time (s), minimum exposure time (ms), and time to reach the minimum exposure (s). (4) A complex of parameters of a complex visual-motor reaction that involves selecting two of three elements in feedback mode; as the reaction time changes, the time of signal delivery changes. The "long-term variant" was used in the feedback mode, where the duration of exposure changes automatically depending on the corresponding reactions of the subject. After a correct answer, the duration of the next signal is reduced by 20 ms; after an incorrect response, the duration increases by 20 ms. The range of the signal exposure change during the test is 20 to 900 ms, with a pause of 200 ms between exposures. The correct answer is to press the left (right) mouse button when a certain image is displayed, or during the pause after the current exposure. In this test, the time to reach the minimum signal exposure and the minimum signal exposure time reflect the functional mobility of the nervous processes. The number of errors reflects the strength of the nerve processes; the lower the value, the higher the mobility and strength of the nervous system. In addition, the total time of the test reflects a combination of strength and mobility of the nervous system. The duration of the initial exposure is 900 ms, the magnitude of the change in signal duration after correct or erroneous reactions is 20 ms, the pause between the presentations of signals is 200 ms, and the number of signals is 120.
The following indicators are recorded: the average value of the latent period (ms), deviation (ms), number of errors, test run time (total test time, s), minimum exposure time (ms), and time to reach the minimum exposure (s).
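The feedback-mode logic above can be summarized in a short Python sketch (ours; the 80% response accuracy is a purely hypothetical stand-in for a subject):

```python
import random

def feedback_mode_exposure(n_signals=50, start_ms=900, step_ms=20,
                           floor_ms=20, ceil_ms=900, p_correct=0.8):
    """Adaptive exposure schedule: each correct answer shortens the next
    signal by 20 ms, each error lengthens it by 20 ms, within 20-900 ms."""
    exposure = start_ms
    history = []
    for _ in range(n_signals):
        correct = random.random() < p_correct  # hypothetical subject
        exposure += -step_ms if correct else step_ms
        exposure = min(max(exposure, floor_ms), ceil_ms)
        history.append(exposure)
    # the minimum exposure reached, and when it was first reached
    min_exposure = min(history)
    time_to_min = history.index(min_exposure) + 1  # in signal counts
    return min_exposure, time_to_min
```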
Statistical Analysis
The computer programs Microsoft Excel 2016 Data Analysis (Version 16, Microsoft, Las Vegas, NV, USA) and SPSS-17 (Version 17.0.3, International Business Machines, Armonk, NY, USA) were applied for statistical processing of the obtained data. For each indicator, we determined the arithmetic mean, the standard deviation (SD), and statistical significance according to Student's t-test. Analysis of variance was also performed. We determined the influence of the functional class of athletes on the reaction rate in various test modes. We also determined the effect of the degree of damage to the upper and lower extremities on the reaction rate in various test regimes. The degree of influence was considered reliable at a significance level of p < 0.05.
Each subject corresponds to a row in the Excel (Microsoft, Las Vegas, USA) Table S1. The columns of Table S1 show the data of each subject and the results of the tests.
SPSS-17 (Armonk, New York, USA) was used for statistical data processing. Since it is difficult to maintain long indicator names in the SPSS program, all indicators were abbreviated. An explanation of the abbreviations is presented in Appendix A.
We formulated two assumptions: (1) Psychophysiological indicators significantly differ in athletes of different functional classes, and (2) psychophysiological indicators vary in athletes with different degrees and patterns of lesions of the musculoskeletal system.
To verify these assumptions, we used the following statistical methods: (1) Analysis of the reliability of differences in the indicators of Paralympians in the two functional classes according to Student's t-test (file: Stst.1.sav; file: Interpretation of notation in the program SPSS.docx, Figures S1-S4; file: Stst_T-test_klass.spv). (2) Analysis of the influence of the Paralympic functional class on psychophysiological indicators (file: Stst.1.sav; file: Interpretation of notation in the program SPSS.docx; file: Stst_Gen_Mod_klass.spv). (3) Analysis of the reliability of differences in the indicators of groups of athletes with different levels and patterns of lesions of the musculoskeletal system. In this case, more than two independent samples were compared; therefore, analysis of variance (ANOVA) was used (file: Stst.1.sav; file: Interpretation of notation in the program SPSS.docx; file: Stst_ANOVA_Inc.spv). (4) Analysis of the impact of the degree and nature of damage to the musculoskeletal system of the Paralympians on psychophysiological indicators, performed in SPSS-17 (Armonk, New York, USA) (file: Stst.1.sav; file: Interpretation of notation in the program SPSS.docx; file: Stst_Gen_Mod_inc.spv).
We applied single-factor multivariate analysis of variance. The indicators of psychophysiological testing were used as dependent variables. The functional class of athletes was applied as the independent variable. To study the influence of the degree of damage to the upper or lower extremities on psychophysiological indicators, we used the point value of the extremity damage degree as an independent variable.
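For illustration only (the analysis itself was run in SPSS; the file name and column names below are hypothetical stand-ins for the abbreviated indicators), the corresponding one-way ANOVA step can be expressed in Python as:

```python
import pandas as pd
from scipy.stats import f_oneway

def anova_by_group(df, indicator, group_col):
    """One-way ANOVA of one psychophysiological indicator across groups
    defined by functional class or by the limb-damage score."""
    groups = [g[indicator].dropna() for _, g in df.groupby(group_col)]
    return f_oneway(*groups)  # returns (F statistic, p value)

# hypothetical usage with one row per subject, as in Table S1:
# df = pd.read_excel("Table_S1.xlsx")
# f, p = anova_by_group(df, "choice_reaction_deviation_ms", "functional_class")
```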
Results
The study confirmed a significant influence of the athletes' functional class on the stability of the reaction rate at p < 0.05 (indicator "Choice reaction 2-3", deviation, ms). In athletes in functional class 10, the stability of the reaction was significantly higher than in athletes of functional class 6 (Tables 1 and 2). We also detected a significant effect of the functional class on the time to reach the minimum signal exposure (p < 0.05) ("Choice reaction in feedback mode, time to reach minimum exposure, s"). In athletes in functional class 10, the time to reach the minimum signal exposure was significantly shorter than in athletes in functional class 6 (Tables 1 and 2). We determined the influence of the degree of musculoskeletal system disorder on psychophysiological indicators using multidimensional single-factor analysis of variance. Tables 3-5 present the psychophysiological indicators for the athletes in each group according to the developed scale. The obtained data show that as the dysfunction of the musculoskeletal system increases, the psychophysiological indicators tend to worsen. Some exceptions are the participants with dysfunction assessments of 3 and 8 points (Tables 3-5).
The latent period of a simple visual-motor reaction increases with increasing dysfunction of the musculoskeletal system, with the exception of participants with dysfunction assessments of 3 and 8 points (Table 3). The number of errors and the stability of the reaction in the "simple visual-motor reaction" test do not have a pronounced tendency to deteriorate with increasing dysfunction of the musculoskeletal system. In the "choice reaction of 2 out of 3 objects" test, the psychophysiological indicators tended to deteriorate with an increasing degree of dysfunction of the musculoskeletal system (Table 4), with the exception of participants with a dysfunction assessment of 8 points, which can be associated with individual congenital typological features. The choice reaction is key in table tennis [19]; therefore, identifying a tendency for the results of this test to deteriorate with increasing dysfunction of the musculoskeletal system is an important result for practical work and the implementation of an individual approach to the training of athletes. We observed the most pronounced tendency toward deterioration of psychophysiological indicators with increasing dysfunction of the musculoskeletal system in the tests of the choice reaction in feedback mode (Table 5). This test reflects the mobility of nerve processes [20], which is very important for success in table tennis. Dispersion analysis with the independent variable "musculoskeletal system damage degree" showed a significant effect of this indicator on all the studied psychophysiological functions of athletes (Tables 3-6). As damage to the locomotor apparatus increases, psychophysiological functions deteriorate at p < 0.05 and p < 0.001 (Tables 3-6). The lowest rates were observed in athletes with impaired movements of both lower extremities.
The obtained data show that with an increase in the volume of the affected muscles, the psychophysiological functions of athletes decrease. Since motor functions are regulated by the nervous system, an increase in the volume of affected muscles should influence the mobility of the nervous processes. Many diseases associated with disorders of the musculoskeletal system also include disorders of the nervous system. Therefore, in the training process, it is important to consider not only their functional class, but also the nature of the musculoskeletal system damage, the volume of the affected muscles, and the fact that damage to the lower extremities affects psychophysiological indicators more than the damage to the upper extremities or unilateral damage.
Discussion
In this study, we confirmed our hypothesis concerning the influence of the degree of musculoskeletal system disorder on the psychophysiological functioning of Paralympians in terms of the characteristics and degree of motor disorder of the upper and lower extremities. The hypothesis was partially confirmed regarding the influence of belonging to a certain functional class of Paralympians in table tennis. The first assumption, that psychophysiological indicators significantly differ in athletes of different functional classes, was partially confirmed. The second assumption, that psychophysiological indicators differ in athletes with different degrees and natures of musculoskeletal system damage, was fully confirmed.
The obtained data allow us to conclude that elite Paralympian athletes specializing in table tennis do not differ according to their functional class in most psychophysiological indicators. A significant impact of the functional class was revealed for only two indicators: "Choice reaction 2-3, deviation (ms)" and "Choice reaction in feedback mode, time to minimum signal exposure, s". In the test used to determine the choice of 2 out of 3 objects, it is necessary to press the left mouse button when a geometric shape appears on the screen and the right mouse button when an image from the animal world appears. We determined the average test run time for each participant based on 50 attempts. We also determined the number of errors and the standard deviation over the 50 attempts for each participant. Table 1 shows that the standard deviation was smaller for athletes in functional class 10. This means that the stability of the latent time of a compound reaction depends on the functional class of Paralympian table tennis athletes.
Another indicator that shows the influence of the athletes' functional class on the test result is "Choice reaction in the feedback mode, time to reach the minimum signal exposure, s". This test was performed in feedback mode: the faster the subject reacts to the signal, the faster the next signal appears. The faster the participant reaches their individual maximum when performing this test (i.e., the faster they reach the smallest signal exposure time), the higher the mobility of the nervous processes. The obtained data confirm that athletes with minimal musculoskeletal system damage reach their maximum reaction rate faster than athletes with more serious disorders. We showed that the state of the musculoskeletal system also affects the mobility of the nervous processes, because the nervous system regulates muscle activity; therefore, muscular system disorders change the working of the nervous system. In Paralympic athletes, some disorders initially affect both the muscular system and the nervous system (e.g., cerebral palsy). Therefore, identifying the effect of musculoskeletal system disorders on the working of the nervous system is important.
However, according to the obtained data, the other studied parameters are not influenced by the athletes' functional class (Table 2). This may be because athletes who are similar in their ability to hold a tennis racket and stand near the tennis table are combined in one functional class. However, these athletes may differ in the nature of the musculoskeletal system disorder and the volume of affected muscle. These factors are related to nervous system activity and should affect the psychophysiological indicators. Therefore, we analyzed the influence of the degree and nature of the musculoskeletal system disorder on the psychophysiological functions of Paralympians. The degree of musculoskeletal system disorder, in this case, was determined by a specially developed scale based on existing scales in rehabilitation: muscular strength [35], balance assessment [36], functional capacity determination [37], risk of falls [38], and others [39]. In our opinion, these scales provide a more differentiated assessment of musculoskeletal system functioning in comparison with the functional classification of Paralympian athletes in table tennis, since the functional classification is based mainly on the ability to play table tennis, without a detailed consideration of the nature of the impairment [30].
Therefore, we further aimed to identify the influence of the nature of the musculoskeletal system disorder using a special scale that we developed on the basis of the scales existing in rehabilitation.
We revealed that belonging to a certain functional class of Paralympians only affects the stability of the reaction rate and the time to reach the minimum signal exposure in the test of the choice reaction rate in feedback mode, at p < 0.05. In this test, the shorter the participant's reaction time to the signal, the faster the signals are delivered: the faster the subject reacts, the faster the next image appears. The faster an athlete reaches their minimum signal exposure time, the higher the mobility of their nervous system. This means that in the central nervous system, switching from some nerve centers to others occurs faster. Dispersion analysis showed that athletes in functional class 10 respond faster compared with athletes in functional class 6. In addition, athletes of functional class 10 have a higher stability of the reaction rate to visual stimuli. However, no significant effect of the athletes' functional class was revealed on reaction time, number of errors, or stability in a simple reaction to a visual signal. In addition, there was no significant effect of the athletes' functional class on the reaction time or the number of errors in the choice reaction of two objects out of three. The same was true for the test of the choice reaction in feedback mode: the reaction time, the number of errors, the stability of the reactions, and the minimum signal exposure time do not depend on the functional class of athletes. The obtained data are partially consistent with studies by Van Biesen et al. [20] and Santos et al. [24]. Our studies showed that only a small part of psychophysiological functioning depends on the functional class of table tennis Paralympians. However, when functional disorders were expressed not through the functional class of Paralympians but through conditional scores of damage to the upper and lower extremities, a significant effect of the disorder degree on all studied psychophysiological indicators was revealed (p < 0.001 and p < 0.05 with respect to damage of the upper and lower extremities). This means that the speed of reaction to a visual signal, the number of errors during the reaction speed test, and the mobility of nervous processes in table tennis Paralympians depend on the degree of damage to the upper and lower extremities, but practically do not depend on the functional class of Paralympians in table tennis. The worst psychophysiological results were observed in athletes with disabilities in both lower extremities. Unilateral damage to the extremities and congenital underdevelopment of the extremities had a lesser effect on the psychophysiological functions [44][45][46].
The obtained data are new in the study of the psychophysiological functioning of Paralympic athletes. We revealed that the degree of damage to the upper and lower extremities has a greater influence on the psychophysiological functions of Paralympic table tennis athletes than the functional classification does. Our results support the need to consider the characteristics of upper and lower extremity damage in the functional classification of Paralympians. In addition, the functional classification of Paralympic table tennis athletes should consider the level of psychophysiological functioning.
These provisions are important for competitions in Paralympic sport, in particular table tennis. The obtained data are important for structuring the training process of Paralympians. The results showing the influence of the degree of damage to the upper and lower extremities on psychophysiological functioning indicate the need for an individual approach to table tennis athlete training. The results show that when training Paralympic athletes in table tennis, it is important to consider not only their functional class but also the degree of damage to the upper and lower extremities and the level of psychophysiological functioning. Our data extend the concept of an individual approach to sports with these provisions [47][48][49].
The obtained data contribute to the study of the relationship between motor and psychological functioning, showing that disorders of the motor apparatus are interrelated with the deterioration of the nervous system. The malfunction of the lower extremities has a more pronounced effect on the working of the nervous system compared with disorders of the upper extremities and unilateral damage to the musculoskeletal system.
The results also confirm the integrity of the body functioning and the connection between consciousness and motor actions [50]. Restriction of motor actions affects the functioning of the nervous system and consciousness. In turn, disorders of the nervous system in the form of cerebral palsy affect psychophysiological functioning (reaction rate and the mobility of the nervous system) and the locomotor apparatus. Damage of the lower extremities is associated with a more expressed decrease in psychophysiological functioning compared with upper extremities disorders, unilateral damage to the extremities, and congenital anomaly of the extremities.
Conclusions
Belonging to a certain functional class of athletes influences the stability of the reaction rate and the time to reach the minimum signal exposure in the speed test for a choice reaction with feedback. Athletes in functional class 10 are reliably faster in the minimum signal exposure test compared with athletes in functional class 6.
There was no significant effect of the athletes' functional class on the reaction time, the number of errors, or stability in a simple reaction to a visual signal. In addition, there was no significant effect of functional class on the reaction time or the number of errors in the choice reaction of two objects out of three. The same is true for the test of the choice reaction with feedback: the reaction time, the number of errors, the stability of the reactions, and the signal exposure time do not depend reliably on the functional class of athletes.
The speed of reaction to a visual signal, the number of errors during the test for reaction speed, and the mobility of nervous processes in Paralympian table tennis athletes depend on the degree of damage to the upper and lower extremities. The worst results in psychophysiological indicators were found in athletes with disabilities in both lower extremities. Unilateral damage to the extremities and congenital underdevelopment of the extremities had less effect on the psychophysiological functions.
The finding that the degree of damage to the upper and lower extremities influences the psychophysiological functions of table tennis Paralympians more strongly than the functional class does indicates that the functional classification of these athletes should take into account the features of the damage to the upper and lower extremities. In addition, the functional classification of Paralympian table tennis athletes should consider their level of psychophysiological functioning.
When training Paralympic athletes in table tennis, it is important to consider their functional class as well as the degree of damage to the upper and lower extremities and the level of psychophysiological functioning.
Limitations
The results of this study apply only to athletes who specialize in table tennis. The subjects in this study were Paralympians who compete in international competitions: the World Cup and the Paralympic Games. The results do not apply to beginner athletes with disorders of the musculoskeletal system, or to athletes without disorders of the musculoskeletal system. The results also do not apply to other Paralympic sports. The study of the characteristics of the reaction rate in various testing modes for athletes with disorders of the musculoskeletal system in other sports requires additional research.
Acknowledgments:
The study was conducted in accordance with research work funded by the state budget of the Ministry of Education and Science of Ukraine for 2017-2018. "Theoretical and methodological foundations of the application of information, biomedical and pedagogical technologies for the realization of individual physical, intellectual and spiritual potential and the formation of a healthy lifestyle" (state registration number 0117U000650).
Conflicts of Interest:
The authors declare no conflict of interest.
Table A1. Minimum damage for athletes who compete in a standing position with cerebral palsy, amputations, and other lesions of the musculoskeletal system [30].
Class Damage
Class 10 (minimal disabilities standing classes)
• single stiff ankle
• amputation of forefoot through all metatarsals (minimal 1/3 of foot amputated)
• hip (sub)luxation
• moderate to mild reduction of ROM in the major joints
• polio: loss of 10 points in muscle strength in one lower extremity distributed over the whole leg
• polio: 10 points of loss over two legs is not considered to meet the minimal disability
or Very mild impairment of playing arm
• finger amputation/dysmelia with functional grip (more than 4 phalanges lost; thumb not taken into consideration)
• stiff wrist with functional grip
• weakness of the hand or a joint of the arm
• single BE with a stump length not longer than 2/3 of forearm (the forearm = the length of the ulna)
• brachial plexus lesion with some residual functions
• dysmelia or similar disabilities not longer than 2/3 of the forearm
or Moderate impairment of the trunk
• stiffness (ankylosing spondylitis)
• extreme curvatures of the back (kyphosis, scoliosis, kyphoscoliosis, hyperlordosis)
• fusion
• muscular dystonia with effects on the spine
or Any disability with comparable functional profile
Nanism is recognized as a disability
• athletes with Nanism start in class 10, but as a result of other impairments they may be considered for a lower class, e.g., normally a player with a single BK amputation is class 9, but Nanism plus BK amputation is class 8
• body length: male: 140 cm and less; female: 137 cm and less
Class 6
• arthrogryposis of playing arm and leg(s) or both arms and leg(s)
• muscular dystrophy of limbs and trunk or other neuromuscular disability of comparable impairment profile
• incomplete spinal cord injury of comparable profile
• a player with the handle of the racket in his or her mouth
• any disability with comparable functional profile | 2019-03-11T17:22:42.245Z | 2019-02-26T00:00:00.000 | {
"year": 2019,
"sha1": "c8f23a1e5dc52009dd5d6027115203538dd0eec8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4663/7/3/55/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c8f23a1e5dc52009dd5d6027115203538dd0eec8",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
6794775 | pes2o/s2orc | v3-fos-license | Development of a robust pH-sensitive polyelectrolyte ionomer complex for anticancer nanocarriers
A polyelectrolyte ionomer complex (PIC) composed of cationic and anionic polymers was developed for nanomedical applications. Here, a poly(ethylene glycol)-poly(lactic acid)-poly(ethylene imine) triblock copolymer (PEG-PLA-PEI) and a poly(aspartic acid) (P[Asp]) homopolymer were synthesized. These polyelectrolytes formed stable aggregates through electrostatic interactions between the cationic PEI and the anionic P(Asp) blocks. In particular, the addition of a hydrophobic PLA and a hydrophilic PEG block to the triblock copolyelectrolyte provided colloidal aggregation stability by forming a tight hydrophobic core and steric hindrance on the surface of the PIC, respectively. The PIC showed different particle sizes and zeta potentials depending on the ratio of cationic PEI to anionic P(Asp) blocks (C/A ratio). The doxorubicin (dox)-loaded PIC, prepared with a C/A ratio of 8, demonstrated pH-dependent behavior through the deprotonation/protonation of the polyelectrolyte blocks. The drug release and the cytotoxicity of the dox-loaded PIC (C/A ratio: 8) increased under acidic conditions compared with physiological pH, due to destabilization of the electrostatic core. In vivo animal imaging revealed that the prepared PIC accumulated at the targeted tumor site for 24 hours. Therefore, the prepared pH-sensitive PIC could have considerable potential as a nanomedicinal platform for anticancer therapy.
Introduction
For decades, various types of drug delivery systems, including polymeric micelles, carbon nanotubes, liposomes, polymer-surfactant nanoparticles, conjugated prodrugs, and nanogels, have been developed for anticancer chemotherapy to achieve increased bioavailability of drugs, minimize side effects, control drug release into specific tissues, and enhance drug activity. [1][2][3][4][5] Among these nanosized carrier systems, polyelectrolyte ionomer complexes (PICs) have been extensively investigated for current and potential future applications in drug and gene therapy. 6,7 The nanosized PIC can be spontaneously formed in aqueous solution from double hydrophilic block copolymers containing ionic and nonionic blocks, upon electrostatic interaction between the ionic blocks and oppositely charged molecules such as genes, polyions, proteins, or surfactants. [8][9][10] The electrostatically neutralized ionic blocks lead to the formation of a hydrophobic core in aqueous solution, which can incorporate various pharmaceutical drugs through hydrophobic interactions and hydrogen bonding. In addition, hydrophilic and nonionic blocks such as poly(ethylene glycol) (PEG) can provide aqueous stability via steric hindrance on the surface of the particle and extended circulation times by avoiding rapid renal clearance and reticuloendothelial system uptake. [11][12][13] Among the various types of polyions that have been developed for drug or gene delivery, poly(ethylene imine) (PEI) has been intensively studied, since PEI, with the highest cationic charge density, has improved endosomal escape ability, which is directly related to the efficacy of drug or gene therapy. 14,15 The PEG-PEI block copolymers exhibited improved solubility even under charge-neutralized conditions. However, the PIC system formed from this type of double hydrophilic block copolymer and oppositely charged molecules can be dissociated under in vivo conditions by other counterions, and showed lower cell transfection of the therapeutic agents due to the lack of a self-assembling aggregation force. 6,8 The incorporation of a hydrophobic moiety into the PIC core could overcome these drawbacks, inducing the formation of a tight core and stabilization of the nanoparticles.
In the present study, we developed a novel PIC system based on a cationic poly(ethylene glycol)-poly(lactic acid)-poly(ethylene imine) triblock copolymer (PEG-PLA-PEI) and anionic poly(aspartic acid) (P[Asp]). The hydrophobic PLA block in the triblock polycation was able to provide increased colloidal stability by localizing to the middle layer of the PIC, enhancing cell interactions and tissue permeability of the delivery platform. 6,10,16,17 PICs complexed from PEG-PLA-PEI and P(Asp) at various ratios of cationic PEI to anionic P(Asp) blocks (C/A ratios) show pH sensitivity through the protonation and deprotonation of the carboxyl groups in P(Asp) and the amine groups in the PEI blocks. pH-sensitive nanosystems could be used as a cancer reversal strategy by exploiting their favorable properties, such as improved stability at physiological pH, reduced toxicity, and the controlled release of therapeutic agents at extracellular tumor pH (pH_ex ≈ 6.5-7.2) or endosomal pH (pH_en ≤ 6.5). 18,19 Here, a PIC based on PEG-PLA-PEI and P(Asp) was evaluated for its pH-sensitive anticancer nanomedicinal potential using doxorubicin (dox) as an anticancer model drug. 20 The MW of P(Asp) was ~4,000 Da (degree of polymerization = 35).
Synthesis of PEG-PLA-PEI triblock copolymers
The synthetic scheme for the polyelectrolyte triblock copolymer, PEG-PLA-PEI, prepared by multistep synthesis is described in Figure 1A. First, the PLA-PEG diblock copolymer was synthesized by the ring-opening polymerization of L-lactide, initiated by the hydroxyl group of PEG in the presence of Sn(Oct)2 as a catalyst, as described in previous studies. 20,21 The MW of the prepared PLA-PEG diblock was ~11,000 Da, determined from the 1H NMR spectra obtained using a 300 MHz Gemini 2000 NMR instrument (Varian Medical Systems, Palo Alto, CA, USA). For the synthesis of the PEG-PLA-PEI copolymer, the terminal hydroxyl group of PLA-PEG (0.18 mmol) was carboxylated with succinic anhydride (0.36 mmol), DMAP (0.18 mmol), TEA (0.18 mmol), and pyridine (0.18 mmol) in DCM (30 mL) at room temperature for 1 day. After the reaction, carboxylated PLA-PEG was obtained following reprecipitation from excess diethyl ether. In order to conjugate the PLA-PEG to PEI, the carboxylated PLA-PEG (0.16 mmol) was activated using NHS (0.2 mmol) and DCC (0.2 mmol) in DCM at room temperature for 1 day. After carrying out the reaction, PEG-PLA-PEI was synthesized by a coupling reaction of PEI in DMF and MeOH (1:1) with the activated PLA-PEG, using simple DCC and NHS chemistry.
Acid-base titration
The titration plots of PLA-PEG, NaCl, P(Asp), PEG-PLA-PEI, and PEI were obtained using the potentiometric titration method. The block copolymers (or NaCl as a control) dissolved in deionized water (2 mg/mL) were adjusted to pH 11 with 1 N NaOH. These solutions were titrated by the stepwise addition of 0.1 N HCl to obtain the pH titration profile. 24
Preparation of PIC based on PEG-PLA-PEI/P(Asp)
PICs composed of PEG-PLA-PEI and P(Asp) were prepared by mixing a 0.1% (wt/vol) aqueous solution of PEG-PLA-PEI with a solution of P(Asp) at various C/A ratios (the ratio between the nitrogen atoms of the cationic polymer and the carboxyl groups of P[Asp]), followed by vortexing for 10 seconds and incubation for 20 minutes at room temperature.
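As a back-of-envelope illustration of the C/A ratio arithmetic (our own sketch: the helper name and the ethylenimine repeat-unit mass of ~43 g/mol are assumptions; the block molecular weights are those reported for this system):

```python
def pasp_mass_for_ca_ratio(triblock_mass_mg, ca_ratio,
                           mw_triblock=21000.0, mw_pei_block=10000.0,
                           mw_ethylenimine=43.07,
                           mw_pasp=4000.0, asp_units=35):
    """Mass of P(Asp) (mg) to mix with a given mass of PEG-PLA-PEI to reach
    a target C/A ratio (PEI nitrogens : P(Asp) carboxyls)."""
    mmol_triblock = triblock_mass_mg / mw_triblock
    mmol_amine = mmol_triblock * (mw_pei_block / mw_ethylenimine)
    mmol_cooh = mmol_amine / ca_ratio
    return (mmol_cooh / asp_units) * mw_pasp

# e.g., roughly 1.6 mg P(Asp) per 10 mg copolymer for a C/A ratio of 8:
# pasp_mass_for_ca_ratio(10.0, 8)
```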
Particle size and zeta potential measurement using dynamic light scattering
The effective hydrodynamic diameters (Deff) and zeta potentials of the nanocomplex solution (0.1 mg/mL) were measured by photon correlation spectroscopy using a Zetasizer Nano-ZS (Malvern Instruments, Malvern, UK) equipped with the multiangle sizing option BI-MAS (Brookhaven Instrument Corp, NY, USA). The software provided by the manufacturer was used to calculate the Deff and zeta potential values. The average Deff and zeta potential values were calculated from three measurements performed on each sample (n=3).
CAC analysis
The critical association concentration (CAC) was determined by fluorescence measurement using a Scinco FS-2 fluorescence spectrometer (SCINCO Co. Ltd., Seoul, South Korea) as described in previous studies. 16 This spectrofluorometer is equipped with polarizers for the excitation (334 nm) and emission (372 nm) light beams. Pyrene was used as a fluorescent probe. Various concentrations of nanocomplex sample (from 10⁻⁴ g/mL to 10⁻⁸ g/mL) were prepared at different ratios using water, mixed with pyrene (at a concentration of 6.0×10⁻⁷ M), and stirred overnight at room temperature. CAC values were determined by plotting the ratios of I1 (intensity of the first peak) to I3 (intensity of the third peak) of the emission spectra profiles against the log10 values of the nanocomplex concentration.
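For concreteness, the line-intersection reading of such a plot could be sketched as follows (our own simplified illustration; real analyses choose the breakpoint more carefully):

```python
import numpy as np

def estimate_cac(concentrations_g_per_ml, i1_over_i3):
    """Estimate the CAC from a pyrene I1/I3 plot: fit one line to the
    high-ratio plateau and one to the descending arm, then intersect them."""
    logc = np.log10(np.asarray(concentrations_g_per_ml, dtype=float))
    ratio = np.asarray(i1_over_i3, dtype=float)
    order = np.argsort(logc)
    logc, ratio = logc[order], ratio[order]
    # crude breakpoint: steepest single-step drop in the ratio
    k = int(np.argmin(np.diff(ratio)))
    k = min(max(k, 1), len(logc) - 2)  # keep >= 2 points on each side
    plateau = np.polyfit(logc[:k + 1], ratio[:k + 1], 1)
    arm = np.polyfit(logc[k:], ratio[k:], 1)
    log_cac = (arm[1] - plateau[1]) / (plateau[0] - arm[0])
    return 10.0 ** log_cac  # g/mL
```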
Morphology of PIC
In order to observe the morphology of PIC, a dilute PIC solution (0.1 mg/mL) of the samples was placed onto a glass slide and dried in vacuo. The morphology of PIC was imaged using field emission scanning electron microscopy (FE-SEM, SIGMA, Carl Zeiss Meditec AG, Jena, Germany).
Preparation and characterization of dox-loaded PIC
The dox-loaded PIC system was prepared using the round-bottom flask method. 20,21 Before loading dox into the complex system, dox·HCl was stirred with TEA (2 mol) in DCM overnight to obtain the dox base. PEG-PLA-PEI/P(Asp) (10 mg) and dox (1 mg) were dissolved in 10 mL of a 50:50 EtOH/DCM solution and transferred into round bottom flasks. For the preparation of the dox-loaded PIC system, the organic phase was removed using a model N-1000 rotary evaporator (EYELA, Tokyo, Japan) to form a thin film in each round bottom flask. Rehydration of the film with a borate buffer solution produced the dox-loaded PIC system, and the pH value of the micelle solution was adjusted with phosphate-buffered saline (PBS) and citric acid buffer. The concentration of dox in the micelles was determined with a UV-1200 spectrophotometer (Labentech, Incheon, South Korea) at λ=481 nm. The drug loading capacity and efficiency were calculated using the following equations:

Drug loading capacity (%) = (weight of dox in the PIC / weight of the dox-loaded PIC) × 100

Drug loading efficiency (%) = (weight of dox in the PIC / weight of dox initially added) × 100
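A minimal numeric check of these definitions (ours; the 0.8 mg entrapped mass in the usage comment is hypothetical):

```python
def loading_metrics(drug_in_pic_mg, pic_mass_mg, drug_fed_mg):
    """Drug loading capacity and efficiency, per the equations above.

    drug_in_pic_mg: measured dox mass entrapped in the PIC (UV, 481 nm)
    pic_mass_mg:    total mass of the dox-loaded PIC
    drug_fed_mg:    dox mass initially added to the preparation
    """
    capacity = 100.0 * drug_in_pic_mg / pic_mass_mg
    efficiency = 100.0 * drug_in_pic_mg / drug_fed_mg
    return capacity, efficiency

# e.g., with the feed used here (10 mg polymer + 1 mg dox), if 0.8 mg dox
# were entrapped: loading_metrics(0.8, 10.8, 1.0) -> (~7.4 %, 80.0 %)
```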
pH-dependent drug release from micelles
For the drug release test, the dox-loaded PIC solution (10%) was transferred into Spectra/Por dialysis membrane tubing (Spectrum Laboratories, Rancho Domingues, CA, USA; MWCO 3500 Da), immersed in a vial containing 10 mL PBS (pH 9.0-4.0), and incubated in a shaking water bath at 37°C and 100 rpm. The released amount of dox from the complex system was measured at predetermined times using UV-vis spectrometry (Genesys 10 UV) at λ=481 nm. After measurement of dox release at specific times, the medium in the vial was replaced with fresh PBS to prevent drug saturation.
Cell viability
MCF-7 cells in growth medium were seeded at a density of 1×10⁴ cells per well of a 96-well plate 24 hours prior to the cytotoxicity test. The dox-loaded complex system in RPMI 1640 medium at pH 6.0 and 7.4, adjusted with 0.1 N HCl, was prepared immediately before use. The medium was removed from the 96-well plate, the preparation was added at different dox concentrations (1-10,000 ng/mL), and the plate was incubated for 40 hours. Chemosensitivity was assessed using the CCK assay. Fresh medium (90 μL, according to the pH conditions) containing 10 μL CCK solution was added to each well, and the plate was incubated for an additional 3 hours. The absorbance of each well was read on a Flexstation 3 microplate reader (Molecular Devices, Sunnyvale, CA, USA) at a wavelength of 450 nm.
In vivo fluorescence imaging
In vivo studies were performed using 4- to 6-week-old female nude mice (BALB/c, nu/nu mice; Institute of Medical Science, Tokyo, Japan). The mice were maintained under the guidelines of an approved protocol from the Institutional Animal Care and Use Committee (IACUC) of Chung-Ang University, Korea. All experiments were performed in compliance with the relevant laws and institutional guidelines. For near-infrared fluorescence real-time tumor imaging, Cy5.5 mono-NHS ester was reacted with the amines of the ethylene imine units of PEG-PLA-PEI in DMSO/water for 1 day. The unconjugated Cy5.5 mono-NHS ester was removed by dialysis against water, and the Cy5.5-labeled PEG-PLA-PEI was lyophilized by freeze-drying. For the in vivo animal experiments, KB tumor cells were introduced into female nude mice via subcutaneous injection of 1×10⁶ cells suspended in PBS (pH 7.4). When the tumor volume reached 150 mm³, the PIC based on Cy5.5-labeled PEG-PLA-PEI/P(Asp) (C/A ratio: 8) was injected intravenously into tumor-bearing nude mice through the tail vein. A 12-bit CCD camera (Image Station 4000 MM; Kodak, New Haven, CT, USA) was used to take live fluorescence images of the mice. The optical images of the Cy5.5-labeled PIC in the mouse model were taken 1, 3, 6, 9, and 24 hours following injection.
Results and discussion

Synthesis of PEG-PLA-PEI
The synthesis was performed in the following steps: 1) synthesis of PEG-PLA by ring-opening polymerization; 2) activation of the PLA end of the block copolymer; and 3) conjugation of PEI to the PEG-PLA block copolymer through formation of an amide bond (Figure 1A). The synthesized PEG-PLA block copolymer was analyzed by ¹H NMR and gel permeation chromatography (GPC). In the ¹H NMR spectra of the block copolymer dissolved in CDCl₃, the characteristic chemical shifts corresponding to both PLA (1.5 and 5.17 ppm) and PEG (3.64 ppm) were observed, and no other peaks were detected (data not shown). The MWs of the PLA blocks were calculated from the integral values of the characteristic peaks of PEG and PLA using the known MW of PEG (5 kDa), and the Mn of the PLA blocks was verified by GPC. The MW of PLA was calculated to be 6,000 Da, and the GPC curve for PEG-PLA showed a single sharp peak with a polydispersity of 1.15-1.30. These results confirm that the PEG reacted successfully with L-lactide and that no homopolymerization of L-lactide occurred during the reaction.
The structure of the triblock copolymer was confirmed by the ¹H NMR spectra (Figure 1B). In the ¹H NMR spectra of the block copolymer dissolved in D₂O, the peak at 3.6 ppm was assigned to the protons of the PEG block, and peaks b and c at 1.2 and 5.4 ppm were attributed to the PLA block: CH (δ=5.4) and CH₃ (δ=1.2). Peak d at 2.5-3.2 ppm was assigned to the protons of PEI. The MWs of PLA and PEI in the triblock copolymer can be obtained from the integral ratio of peak a (OCH₂CH₂, δ=3.6) to peak b (COCH(CH₃)O-, δ=1.2) and peak d (N(CH₂CH₂NH₂)CH₂CH₂NH₂, δ=2.5-3.2). The MW of the PEI block, calculated from the integral values, was 10,000 Da, and the MW of the final triblock copolymer was ~21,000 Da.
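The integral-ratio arithmetic described above can be made concrete with a short Python sketch. Only the PEG MW (5 kDa) and the repeat-unit masses and proton counts are taken as known; the integral values are hypothetical, chosen so that the computed block sizes land near the reported ones.

# Minimal sketch: block MWs from 1H NMR integral ratios (hypothetical integrals).
MW_PEG = 5000.0
M_EO, H_EO = 44.0, 4   # -OCH2CH2- unit: 44 Da, 4 protons (peak a)
M_LA, H_LA = 72.0, 3   # lactic acid repeat unit: 72 Da, 3 CH3 protons (peak b)
M_EI, H_EI = 43.0, 4   # ethylene imine unit: 43 Da, 4 CH2 protons (peak d)

n_EO = MW_PEG / M_EO                      # ~114 EO units in the PEG block
I_a, I_b, I_d = 1.00, 0.55, 2.05          # assumed normalized integrals

integral_per_proton = I_a / (n_EO * H_EO) # integral contributed by a single 1H
mw_PLA = (I_b / integral_per_proton) / H_LA * M_LA
mw_PEI = (I_d / integral_per_proton) / H_EI * M_EI

print(f"PLA block ~{mw_PLA:,.0f} Da, PEI block ~{mw_PEI:,.0f} Da")
print(f"triblock total ~{MW_PEG + mw_PLA + mw_PEI:,.0f} Da")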
Titration of the synthesized PEG-PLA-PEI
The pKa values of the components of the complex system were measured by potentiometric titration, and the acid-base titration profiles of the molecules are plotted in Figure 1C. Compared with NaCl and PEG-PLA, which carry no charged groups, the other polyelectrolytes revealed buffering zones that depended on their structures. The apparent pKa of P(Asp) was ~6.2, with a very narrow buffering zone. PEI, by contrast, showed a broad buffering zone owing to the cooperation between its high-density amine groups. Interestingly, grafting PEI onto PEG-PLA markedly increased the buffering capacity, although it remained slightly lower than that of PEI alone. This demonstrates that PEI was successfully conjugated to PEG-PLA. The pKb value of PEG-PLA-PEI was similar to that of PEI; however, the titration curve showed a slight change as a result of the hydrophobicity of PLA. 25 PEG-PLA-PEI therefore behaves electronically more like PEI than like PEG-PLA, and can be expected to facilitate osmotic swelling and rupture of endosomes through the proton sponge effect. In addition, drug-loaded PIC could release its drug payload into the cytosol. 15,26-28

Characterization of PIC

PEG-PLA-PEI itself did not form aggregates, because the hydrophilicity of PEI and PEG outweighs the hydrophobicity of PLA, and it was soluble in aqueous solution. However, the complexes formed between cationic PEI and anionic P(Asp) by electrostatic interaction possessed a hydrophobic compartment created by charge neutralization, which provided a driving force for micelle-like aggregation. PIC is formed with hydrophobic cores comprising PLA and the PEI/P(Asp) complexes, and hydrophilic coronas of PEG and unreacted PEI. 29,30 The particle sizes of PIC at different C/A ratios, measured using dynamic light scattering (DLS), are presented in Figure 2A. PIC particles of different sizes were formed by mixing PEG-PLA-PEI and P(Asp). PIC at a 1:1 C/A ratio formed huge, unstable particles ~940 nm in size. As the amount of added P(Asp) increased, the particle sizes of PIC decreased drastically, and the PIC with a C/A ratio of 8 formed ~130 nm particles. In contrast, a decrease in P(Asp) led to a slight decrease in the particle sizes of PIC, which may be due to repulsive forces arising from the negative charge. In addition, PIC formation in aqueous solution was examined using a fluorescence technique with pyrene as a probe. For PIC at C/A ratios of 2, 4, and 8, the CAC values were 2.3, 6.5, and 10.3 μg/mL, respectively (Table 1). These results suggest that with a low amount of P(Asp), the charged PEI and P(Asp) blocks could not provide enough hydrophobicity to form a hydrophobic core, whereas with a high amount of P(Asp), both the neutralized PEI blocks and the P(Asp) blocks could participate in forming the hydrophobic core. The increase in CAC values despite the decrease in particle sizes may reflect the increased hydrophilicity of the charged PEI blocks at high C/A ratios. Nevertheless, when compared with other types of amphiphiles (critical micelle concentration = 5-1,000 μg/mL) or low-MW surfactants (eg, sodium dodecyl sulfate [SDS] = 2.0 mg/mL), PICs have very low CAC values. 31,32 This indicates that the micelle-like structures of PIC are highly stable under in vivo conditions, since the sudden dilution upon injection can destabilize drug-loaded micelles at concentrations below their critical micelle concentration.
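How a CAC value of this kind is typically extracted can be sketched as follows; the intensity ratios below are hypothetical, and the crossover of two linear fits on a semi-log plot stands in for whatever fitting procedure the study actually used.

# Minimal sketch (hypothetical data): CAC from the pyrene I338/I333 excitation
# ratio, taken as the crossover of two linear fits against log10(concentration).
import numpy as np

log_c = np.log10([0.1, 0.5, 1, 5, 10, 50, 100])               # ug/mL, assumed
ratio = np.array([0.60, 0.61, 0.62, 0.80, 1.05, 1.30, 1.38])  # assumed values

flat = np.polyfit(log_c[:3], ratio[:3], 1)  # baseline below aggregation onset
rise = np.polyfit(log_c[3:], ratio[3:], 1)  # slope above aggregation onset

log_cac = (flat[1] - rise[1]) / (rise[0] - flat[0])  # intersection of the fits
print(f"CAC ~ {10 ** log_cac:.1f} ug/mL")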
The surface charge of particles is an important determinant of particle stability and of the electrostatic interaction with cells. Thus, the zeta potentials of PIC at different C/A ratios were measured (Figure 2B). As the C/A ratio increased, the zeta potential of PIC moved from initially negative values, approximately -15 mV at a C/A ratio of 0.125, to positive values, reaching +4.5 mV at a C/A ratio of 8. Interestingly, PIC at a C/A ratio of 2 showed a neutral charge, compared with a negative charge at a C/A ratio of 1 (-8 mV). This may be due to the PLA blocks present in PEG-PLA-PEI. 33-35 The zeta potential of micelles prepared with PEG (5 kDa)-PLA (6 kDa) was -12.5±3.4 mV. A positive zeta potential of the complexes is more favorable for the uptake of nanoparticles into cells, since a positive surface charge allows an electrostatic interaction between the negatively charged cellular membranes and the positively charged complexes.
Based on these properties, the PIC system with a C/A ratio of 8, which combines a small particle size, a positive zeta potential and a comparatively low CAC value, was selected as the drug delivery platform for further studies.
pH sensitivity of PIC
The pH sensitivity of a drug delivery system (DDS) is a significant property for targeting the extracellular pH of cancers and for triggering drug release from the DDS at lower-than-physiological pH, such as that in endosomes or lysosomes. 36,37 In the present study, PIC with a C/A ratio of 8 was examined with respect to particle size and zeta potential as a function of pH (Figure 3). At pH 8-9, the particle sizes were ~160-130 nm, slightly larger than that of PIC at pH 7.4 (126 nm). The slight increase in particle size may result from a decrease in hydrophobicity through deprotonation of the P(Asp) blocks and protonation of the PEI blocks. The zeta potential at pH 9 was negative, reflecting the predominant deprotonation of P(Asp) and PEI. At pH 8, the zeta potential became positive, and the particle sizes of PIC increased owing to increased protonation of the PEI blocks, which could then associate with the negatively charged P(Asp) blocks. Decreasing the pH further (pH 7-4) into acidic conditions, relative to physiological pH, increased the particle sizes and the positive zeta potentials of PIC. This suggests that the increased particle sizes and positive zeta potentials may result from a loosening of the particle structure as the electrostatic interactions weaken between the protonated PEI blocks (positive) and the protonated P(Asp) blocks (neutral).
Development of a pH-dependent nanomedicine
Based on the results shown earlier, a pH-dependent anticancer nanomedicine was developed using dox as a model drug. Dox was incorporated into the PIC (C/A ratio: 8) using the round-bottom flask method described above, and the unloaded drug was removed by filtration through a 0.45 μm filter. The loading capacity of PIC for dox was 8.3%, and the particle size was ~130 nm with a narrow size distribution in the DLS measurement (Figure 4).
The particle size and size distribution of dox-loaded PIC were smaller and narrower than those of empty PIC, as a result of the formation of a more compact core in PIC upon addition of the hydrophobic drug, dox. 21 The morphology under FE-SEM investigation revealed regular, discrete spherical particles with smooth surfaces, and the particle sizes were similar to the results obtained by the DLS technique. The sizes of dox-loaded PIC did not change for more than a week (data not shown).
The release behavior of drug-loaded PIC with 8.3% drug loading was studied at different pH values (Figure 5A). The dox release from PIC at all pH values showed a burst effect in the very initial stages, which may result from release of drug located at the surface of PIC. 38,39 During the first 4 hours, only 20% of the loaded drug was released from PIC at physiological pH and pH 9.0, whereas 40% was released at acidic pH. Over 24 hours, the amount and rate of dox release from PIC at pH 7.4 and pH 9.0 were much lower than those at pH 4.0 and pH 6.0. This result demonstrates that the hydrophobic drug was tightly incorporated into the hydrophobic core and was released by simple diffusion at physiological pH. At acidic pH, however, PIC begins to disintegrate as the core composed of PLA blocks and neutralized PEI/P(Asp) complexes is destabilized by protonation of the charged molecules, resulting in rapid drug release from the loosened core. These results are consistent with the cell viability study shown in Figure 5B. The pH-sensitive cytotoxicity of dox-loaded PIC against MCF-7 cells was compared at pH 6.0 and pH 7.4. Under acidic conditions, the anticancer activity of PIC against MCF-7 cells was enhanced compared with that at pH 7.4. The IC50 values of the complex system were 129.7 ng/mL and 239.7 ng/mL at pH 6.0 and pH 7.4, respectively. However, the cellular uptake of dox at the different pH values appeared similar (data not shown). These results suggest that acidic pH triggers drug release from PIC and that the increased dox concentration near the MCF-7 cells enhances the anticancer toxicity. 19 These drug release profiles and cell viability data support the notion that PIC may be useful for targeting the cancer-microenvironment-associated pH and may be considered for triggering drug release at endosomal/lysosomal pH.
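IC50 values such as those quoted above are typically obtained by fitting a sigmoidal dose-response curve to the viability data. A minimal Python sketch of such a fit follows, with hypothetical viability readings; the four-parameter logistic model is a standard choice, not necessarily the one used in this study.

# Minimal sketch (hypothetical data): IC50 from CCK viability readings by
# fitting a four-parameter logistic dose-response curve.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, top, bottom, ic50, hill):
    """Viability (%) as a function of drug concentration c (ng/mL)."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

conc = np.array([1, 10, 100, 1000, 10000], dtype=float)   # ng/mL
viability = np.array([98, 85, 55, 25, 10], dtype=float)   # % of control, assumed

popt, _ = curve_fit(four_pl, conc, viability, p0=[100.0, 5.0, 150.0, 1.0],
                    maxfev=10000)
print(f"fitted IC50 ~ {popt[2]:.0f} ng/mL")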
The tumor-targeting ability of PIC was evaluated in tumor-bearing nude mice by high-resolution fluorescence imaging using Cy5.5-labeled PIC (Figure 6), which revealed that PIC gradually accumulated at the tumor site over 24 hours.
From these overall results, we were able to hypothesize a concept for the nanosized pH-sensitive PIC (Figure 7). The cationic PEG-PLA-PEI and anionic P(Asp) can be complexed by electrostatic interactions, resulting in the formation of a stable hydrophobic core composed of PLA and the neutralized blocks, and a hydrophilic corona of PEG and the charged polyelectrolyte blocks at physiological pH. The hydrophobic core can provide a pool for hydrophobic and charged drugs. When the environmental pH changes to acidic conditions, the deprotonated polyelectrolytes become protonated, leading to destabilization of the hydrophobic core formed by the electrostatic interactions (concept box in Figure 7). In vivo, PIC can circulate in normal blood vessels without recognition by the immune system owing to its nanosize and surface charge properties 40 and the decrease in opsonization afforded by PEG. At the tumor site, the leaky vasculature allows increased PIC accumulation via the enhanced permeability and retention effect. 41 The extracellular pH of the tumor (pH <7.0) can trigger drug release by destabilizing PIC through protonation of the PEI and P(Asp) blocks. Furthermore, PIC can enter tumor cells by endocytosis and accelerate drug release at endosomal pH (pH <6.0) owing to drastic destabilization. In addition, PIC with mobile cations may evade exocytosis and rupture the endosome by the proton sponge effect, which can increase cytotoxicity.
Conclusion
Here, a newly designed PIC formed from positively charged PEG-PLA-PEI and negatively charged P(Asp) was successfully prepared. PIC self-assembled spontaneously in aqueous solution at various C/A ratios through electrostatic interactions. When the C/A ratio was increased, the CAC and zeta potential values increased, while the particle size decreased to ~150 nm. This self-aggregated system at a C/A ratio of 8 showed pH-dependent particle properties with respect to size and zeta potential. The PIC nanomedicine prepared using dox as a model anticancer drug showed pH-dependent drug release and cytotoxicity. Overall, the ~150 nm PIC nanomedicine was stable, with a low level of drug release at physiological pH, such as in the systemic circulation, and showed rapid disintegration and drug release at acidic pH, such as extracellular tumor pH or endosomal pH. Animal imaging following intravenous administration of PIC revealed accumulation by passive targeting at the tumor site. In vivo tumor inhibition studies using these systems are in progress to demonstrate enhanced antitumor activity in comparison with other nanosystems. In conclusion, the PIC nanomedicine may have considerable potential as a novel class of DDS for anticancer therapy.
"year": 2016,
"sha1": "b2b8fc399145b334c15a75cc1cd492dd1127b3be",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2147/ijn.s99271",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "34cd56c1fee453de69b9d3c2136d265c39df018e",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Sin taxes and their effect on consumption, revenue generation and health improvement: a systematic literature review in Latin America
Abstract

Sin or public health taxes are excise taxes imposed on the consumption of goods that are potentially harmful to health [sugar-sweetened beverages (SSBs), tobacco, alcohol, among others], aiming to reduce consumption, raise additional revenue and/or improve population health. This paper assesses the extent to which sin taxes (a) can reduce consumption of potentially harmful goods, (b) raise revenue for national health systems and (c) contribute to population health in Latin America. A systematic literature review was conducted on peer-reviewed and grey literature; endpoints included: impact of raising sin taxes on consumption, ability to raise revenue for health and the possibility of population health improvements. Risk of bias for each study was assessed. The synthesis of the literature on sin tax implementation showed improvements in all three endpoints across the study countries. Following the introduction of sin taxes or by simulating their potential impact, nearly all studies explicitly reported that consumption of potentially harmful goods (mainly SSBs and tobacco) declined; revenue was found to have increased in almost all countries, suggesting that there may be additional scope for further tax increases. Simulated improvements in population health have also been shown, by demonstrating a relationship between sin tax increases and reductions in the prevalence of diabetes, stroke, heart attacks and associated deaths. However, sin tax effects on health would be better quantified over the long term. Data quality and availability challenges did place some limitations on sin tax impact assessment. Sin taxes can be effective in reducing consumption of potentially harmful goods, improve population health and generate additional revenue. Promoting further research on this topic should be a priority.
Background
Sin taxes, or public health taxes, are defined by the World Health Organization as excise taxes targeting goods that can be detrimental to the health of the population (WHO, 2004). These goods include tobacco products, alcohol and sugar-sweetened beverages (SSBs), which are drinks with added sugar, such as soft drinks, tea, flavoured coffee, juice and sports drinks. The harmful impact of these goods is well known and is evidenced by research (Cnossen, 2005); for instance, tobacco consumption is linked to an increased risk of developing cardiovascular disease (CVD), respiratory disease, cancer and other non-communicable and chronic diseases (U.S. Department of Health and Human Services, 2014), while elevated SSB consumption is generally associated with an increased risk of developing CVD, metabolic disease and obesity (Malik et al., 2013; Arsenault et al., 2017).
Published evidence has demonstrated the effect of sin taxes on consumer behaviour, health outcomes and revenue generation for health systems (Wright et al., 2017). Although sin tax application and outcomes differ between low- and middle-income countries (LMICs) and high-income countries (HICs), evidence has shown that the application of these taxes can have a significant effect on consumption patterns and the well-being of the population, while being financially sustainable (Goodchild et al., 2016).
The inverse relationship between increases in sin taxes and consumption is also well established for the consumption of SSBs (Colchero et al., 2017). Research related to health and behaviour connected to SSB intake has been conducted in HICs (Claro et al., 2012), reporting that consumption of SSBs instead of zero-calorie beverages can lead to excess weight and obesity. This has raised concerns over SSB consumption in LMIC settings, where research is more limited.
From an economic standpoint, excise duties are a form of indirect taxation, in that they are levied on goods or services rather than on firms or personal incomes. This gives them greater capacity to shape consumer behaviour. Sin taxes can be applied in two different ways: per unit (a fixed amount for each unit of a good or service sold, such as dollars per kilogram) or ad valorem (set as a percentage of the value of the good, as with a value-added tax (VAT)). With the former, the tax is a fixed amount per unit; with the latter, it is a fixed percentage of the price. A numerical sketch of the two designs is given below.
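A small Python sketch makes the distinction concrete; all prices and rates below are hypothetical.

# Minimal sketch contrasting the two excise designs (hypothetical values,
# full pass-through of the tax to the consumer price assumed).
def per_unit_price(producer_price: float, tax_per_unit: float) -> float:
    """Specific excise: a fixed amount added to every unit sold."""
    return producer_price + tax_per_unit

def ad_valorem_price(producer_price: float, rate: float) -> float:
    """Ad valorem excise: a fixed percentage of the unit's value."""
    return producer_price * (1.0 + rate)

for p in (0.50, 1.00, 2.00):  # hypothetical producer prices per litre (US$)
    print(f"base {p:.2f} -> specific +0.25: {per_unit_price(p, 0.25):.2f}, "
          f"ad valorem 20%: {ad_valorem_price(p, 0.20):.2f}")

Running the sketch shows that a specific tax raises the price of cheap products by a larger proportion, whereas an ad valorem tax scales with the product's value; this is one reason uniform specific excises tend to compress price ranges, a point returned to in the revenue section below.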
Sin taxes represent one way of raising revenue and, through that, creating fiscal space (FS). The revenue-generating capacity of sin taxes can help countries increase expenditure by creating additional FS (Heller, 2006), which, in turn, allows countries to direct financial resources to public spending without depressing other items of expenditure or destabilizing budget equilibria. An analytical framework of the possible policies that can be adopted for the creation of FS in the health sector has been established (Heller, 2006; PAHO, 2015); this includes, first, the promotion of conducive macroeconomic conditions; second, a reprioritization of health expenditure; third, the improvement of efficiency in existing health expenditure; fourth, increasing the efficiency of tax collection; fifth, recourse to external aid (grants, loans); and, sixth, the creation of new tax revenues through a greater tax burden (PAHO, 2015). Latin American taxation on goods such as tobacco, alcohol and sugar, which are potentially harmful to general health, is considerably lower than the average in Organisation for Economic Co-operation and Development (OECD) countries (PAHO, 2015) and, as such, represents a valid policy choice for Latin American countries, since such taxes can simultaneously generate revenue and influence consumer behaviour and, by implication, population health.
Latin America is considered an area with relatively high levels of consumption of products which can prove harmful to public health (tobacco, alcohol, saturated fat). Twenty per cent of people under 20 years of age are overweight or obese in the region (Cominato et al., 2018), while this percentage exceeds 50% among Mexican and Peruvian adults (Kain et al., 2014; Batis et al., 2016; Colchero et al., 2017). Furthermore, an overall high prevalence of tobacco consumption is recorded in the region: only Ecuador, Peru, Bolivia and Paraguay report consumption of <500 cigarettes per capita per annum, while in all other Latin American countries tobacco consumption ranges between 500 and 1,500 cigarettes per capita per annum (Muller, 2008). Given the significant consumption of potentially harmful goods, the associated negative impact on health in Latin America, and the opportunities outlined in the FS framework (PAHO, 2015), the purpose of this paper is to assess the impact of sin tax implementation in the Latin American region. A systematic literature review is conducted for this purpose. While the impact of sin taxes has been investigated at country level in some Latin American countries (Mejia et al., 2008; Claro et al., 2012; Curti et al., 2015; Batis et al., 2016) and in countries outside the Latin American region (White and Ross, 2015), including HICs (Wright et al., 2017), comparative evidence on this type of taxation at regional level, and specifically in the Latin American context, where there may be economic and cultural similarities amongst the countries in the region, is missing. While the effect of sin taxes in HICs is well established (Wright et al., 2017), it is unclear whether these findings translate to Latin America, where there are differences in policy priorities, policy processes and fiscal commitments. No study analyses and pulls together the available evidence on the impact of sin tax introduction in Latin America, a continent dominated by middle-income countries, where public investment in health is in most cases low as a proportion of gross domestic product (GDP) and where increases in spending are required in order to comply with universal health coverage pledges. The paper, therefore, contributes to the discussion of whether sin taxes have any effect on tax revenue and on the consumption of potentially harmful products, whether they have an impact on health and, broadly speaking, whether they contribute to healthcare financing.
Approach and endpoints
A systematic literature review (SLR) has been conducted to investigate the impact of sin taxes in the Latin American region. The geographical scope of the study included the South American continent and the Spanish-speaking countries of continental Central America, and excluded the Caribbean region. Three endpoints were considered: first, a consumption endpoint, examining whether the application of excise taxes has had any effect on the demand for goods (i.e. SSBs, unhealthy food, tobacco, alcohol); second, a revenue endpoint, which aimed to determine whether sin taxes can generate additional financial resources or FS for countries, and what priorities are defined for subsequent spending; and, third, a health impact endpoint, whose objective was to determine the role of sin taxes in changing the prevalence of diseases related to the consumption of harmful goods (i.e. CVD, diabetes, respiratory system disease, cancer and other non-communicable and chronic diseases, cardiometabolic problems, obesity or being overweight).

KEY MESSAGES

• This is the first systematic literature review assessing the effect of sin taxes on consumption, fiscal space generation and their impact on population health in Latin America.
• Reduction in harmful goods consumption (81% of studies), positive effects on revenue generation (71%) and on health outcomes (82%) are key outcomes.
• There is still room for further tax increases where sin taxes have been adopted.
• Further research is needed to improve data collection for a more comprehensive analysis of the impact of sin taxes.
Search strategy and eligibility criteria
The SLR was performed according to the Cochrane Handbook for Systematic Reviews of Interventions (Higgins et al., 2019). Both peer-reviewed and grey literature sources were searched. The following databases were searched for relevant peer-reviewed literature: PubMed, ProQuest, Web of Science, Cumulative Index to Nursing and Allied Health Literature (CINAHL) and EconLit. The goal of the grey literature search was to identify publications from intergovernmental organizations that were relevant to and/or offered information and insights on our endpoints. The search strategy included all terms for sin taxes used in Latin American countries, the range of different goods on which such taxes are usually applied, and all countries within the Latin American region. The intervention related to the application of sin taxes on harmful goods such as tobacco or high energy density foods, with different outcomes, all subsequently classified under one of the three defined endpoints. Relevant publications in English and Spanish were included. The exclusion steps were (1) exclusion of duplicates (as soon as they were identified through the screening process), (2) exclusion of unrelated titles, (3) exclusion of unrelated abstracts and (4) exclusion through full-text analysis.
Exclusion criteria were: studies on non-Latin American countries; previous systematic literature reviews or meta-analyses; books or book chapters; dissertations and theses; presentation abstracts; studies not related to any of the considered endpoints; and studies lacking any assessment of relevant taxes. The study period ranged from 1 January 2000 to 31 December 2018.
Data extraction
In accordance with Cochrane guidance (Higgins et al., 2019), a template to organize the identified information was implemented. An initial template, drawn up in Excel, included all the studies that remained after a first screening of duplicates and titles. This template included information on the main characteristics of each study (title, author(s) and country or location), data on the purpose of the study and the tax of interest, the number of participants, participant characteristics, the investigated endpoint(s), the findings of the evaluation and a brief statement of the study's conclusion. This step was key in supporting the further screening process through abstract analysis and, in the final stage, through evaluation of the full texts.
Risk of bias assessment criteria
A risk of bias assessment was performed during the full-text evaluation, according to the ROBINS-I tool (Sterne et al., 2016) developed by Cochrane and the BMJ, with the goal of rating the quality of the studies. The domains included in the risk of bias assessment related to confounding, selection of participants into the study, deviations from intended interventions, missing data, measurement of outcomes and selection of the reported results.

Intervention: Tax implementation and simulation on 'harmful' goods (alcohol, sugar, salt, junk food (i.e. calorie-dense foods) and/or tobacco products).

Comparison: No direct comparator for this study. However, studies may identify the differences in health outcomes, consumption and revenue generation before and after the sin tax was introduced on tobacco, salt, sugar and/or alcohol, which would provide further information on the effect of sin taxes on the outcomes of interest.

Outcome: To investigate how sin taxes on sugar, salt, tobacco, alcohol and calorie-dense foods can affect health outcomes, consumption and revenue generation across Latin American countries.

Study design: Peer-reviewed and grey literature were eligible for inclusion in the review, provided they fit the research criteria and outcomes of interest.
Data synthesis
Findings are grouped under the three endpoints: (1) effect on consumption, (2) effect on revenue generation and (3) impact on health.
Study characteristics
The PRISMA flowchart (Figure 1) shows the number of studies included in our review and how they were arrived at. In the initial stage of the systematic review, 1,321 studies were found across all databases. Following the screening process and application of the exclusion criteria, 34 studies were included in the review.
Of the 34 included studies, 27 addressed the consumption endpoint, 6 the health endpoint and 10 the revenue generation endpoint; 9 studies addressed multiple endpoints. There were no randomized controlled trials (RCTs) amongst the included studies.
With regards to the intervention, 13 studies focused on SSBs and high energy density foods. These included the excise tax on SSBs (1 peso/L) and the 8% sales tax on foods implemented in Mexico in January 2014, and the SSB excise tax in Brazil. Twenty-three studies were related to the taxation of tobacco products. Countries involved in the analysis included Mexico, Argentina, Brazil, Uruguay, Ecuador, Peru, Colombia and Panama. Two studies analysed the intervention at a continental and multi-country level (Garcés et al., 2014; Goodchild et al., 2017). Studies on tobacco focused mainly on the change in demand for tobacco, the impact on price caused by tax implementation and the main features of the demographic and epidemiological context in which these policies operate. Alcohol was assessed in just one study, together with the analysis of tobacco demand in Ecuador (Chávez, 2016).
The SLR included mostly observational studies and, to a lesser extent, narrative reviews. Most of the included literature focused on studies analysing consumption, and the main goods of interest were, first, tobacco, its demand and the role of illicit trade and, second, SSBs and their impact on all three endpoints. Studies displayed significant variety in the populations included, data sources and methods used to evaluate the specific tax of interest. Country differences in taxation systems, sin tax structure and levels of stakeholder involvement added complexity to our analysis. The dominance of observational studies and the absence of other study designs (e.g. RCTs) is a result of the type of question addressed and the requirement for wide population cohorts that represent national trends, and must not be criticized as a source of low-quality evidence (Pindyck et al., 2018). Table 2 outlines the characteristics of the included studies (endpoint, publication outlet, national setting, population, data sources, indicator of interest).
Effect on consumption
SSBs and unhealthy foods

The effect of sin taxes on consumption of SSBs was addressed by 12 studies. Nine of these related to the implementation of SSB taxes in Mexico, two focused on taxation of high-sugar content beverages in Chile and one investigated the potential relationship between SSB prices and levels of consumption in Brazil.
The literature focused on Mexico due to the high levels of SSB consumption there. Before tax implementation, Mexico had the highest soft drink consumption worldwide (163 litres per capita in 2011) (Colchero et al., 2016b). In January 2014, the Mexican government introduced a tax of 1 Mexican peso per litre on all sugary non-alcoholic beverages, i.e. sodas, flavoured waters, sweetened dairy drinks, teas and energy drinks with added sugars, but excluded drinks consisting of 100% juice and beverages with artificial sweeteners (Claro et al., 2012). This caused an 11% price increase for carbonated SSBs and approximately a 10% price increase for non-carbonated SSBs, compared with prices in 2013 (Colchero et al., 2016a). At the same time, Mexico introduced an 8% ad valorem tax on non-essential highly energy-dense foods (with at least 275 calories per 100 g) (Colchero et al., 2016b).
Six studies analysed the changes in consumption caused by the implementation of the SSB tax (1 peso/L) in Mexico. The common aim of these studies was to understand how consumer behaviour would change following the tax introduction. This was achieved by investigating different data sources, notably Nielsen's Mexico Consumer Panel services (henceforth Nielsen Panel), which collects data on households' monthly purchases and covers 63% of the Mexican population, the Mexican National Health and Nutrition Survey based on questionnaire responses, and manufacturing sector data, particularly the 'Economic Behaviour of the Industries in the Country' (EMIM) database. All six studies highlighted that the introduction of the specific SSB tax increased the price of SSB products by approximately 10% in 2014 compared with 2013. Results from one study (Colchero et al., 2017) showed a decrease in SSB purchases of 5.5% in 2014 and 9.7% in 2015 (average reduction of -7.6% in 2014-15) compared with the 2012-13 period. Another study (Colchero et al., 2016b) based on the same source found a change in SSB purchases of -6% in 2014 compared with 2012-13. The reduction was higher in low socioeconomic status (SES) groups, relative to medium and high SES groups (-9.1% vs -5.5% vs -5.6%, respectively). Another study (Ng et al., 2019), based on the Nielsen Panel, divided the study population into four groups encompassing all possible consumers of taxed and untaxed beverages: (1) those who had higher (H) purchases of taxed (T) beverages and lower (L) purchases of untaxed (U) beverages (HTLU, whose consumption choices were considered unhealthier); (2) those who had higher (H) purchases of taxed (T) and higher (H) purchases of untaxed (U) beverages (HTHU, whose consumption choices were also considered unhealthier); (3) those who had lower (L) purchases of taxed (T) and lower (L) purchases of untaxed (U) beverages (LTLU, whose consumption choices were considered healthier); and (4) those who had lower (L) purchases of taxed (T) beverages and higher (H) purchases of untaxed (U) beverages (LTHU, whose consumption choices were also considered healthier). The study compared the pre-tax behaviour of these groups with their consumption levels after the SSB tax implementation. Among other findings, the results showed that, following the SSB tax implementation, the HTLU and HTHU groups (both considered 'unhealthy' in their consumption choices) reduced their consumption of taxed beverages in both absolute and relative terms and, at the same time, increased their consumption of untaxed beverages. The greatest effect of this consumption shift from taxed to untaxed beverages was observed in the lowest socioeconomic group. A further study (Colchero et al., 2016a), using an alternative data source, namely manufacturing industry data (EMIM), analysed the changes in SSB and plain water sales in 2014 and 2015 (using the pre-tax period, 2007-13, as a counterfactual). Results suggested a decrease in SSB per capita sales of 7.3% and an increase of 5.2% in plain water per capita sales in the 2014-15 period compared with the counterfactual, reporting an association of the tax implementation with the changes in per capita sales. Overall, the results of the studies assessing SSB tax implementation in Mexico reported a decrease in the consumption of taxed SSBs, and that the tax mildly shifted purchases towards untaxed beverages or other products.
Some studies (Colchero et al., 2016b, 2017; Wright et al., 2017) pointed out that the effects of tax implementation may be more substantial in the long term than in the short term. This is because human habit formation is gradual, and changing behaviour in light of increased taxation may take time (Colchero et al., 2017; Wright et al., 2017). Additionally, following tax implementation consumers may switch to cheaper untaxed beverages, and this pattern could be better seen over the longer term (Colchero et al., 2016b). The results (measures, intervention and counterfactual) included in the above studies were adjusted for different indicators, mainly seasonality of beverage consumption and socioeconomic factors. Without such adjustments, the results would have been biased by temporary factors.
Ortega-Avila et al. (2018) examined how the implementation of the tax was perceived by a cohort of adolescents. This qualitative study explored awareness and perception of the introduction of the SSB tax within a cohort of Mexican adolescents, reporting that most of them were unaware of this policy and that they perceived the 1 peso/L increase as not high enough to shift their preferences and SSB consumption patterns. For those interviewed, the main alternatives to costly SSB products would be homemade drinks. The study underlined that the impact of the tax could be misperceived by some segments of the population and that this would represent a limitation in changing citizens' attitudes towards these products. Another study (Álvarez-Sánchez et al., 2016) focused on the awareness of Mexicans of the SSB tax introduction. Based on questionnaire survey data from >6,000 adults, the study found that awareness and decrease in consumption were directly proportional, i.e. people who were aware of the tax introduction were more inclined to decrease their SSB intake.
Three studies (Batis et al., 2016; Taillie et al., 2017; Hernández et al., 2019) focused on the 8% ad valorem tax on non-essential energy-dense foods in Mexico. One study (Batis et al., 2016) analysed the difference in the volume of taxed and untaxed packaged food purchases between observed data in 2014 and their respective counterfactuals (based on national data from 1994, 1996, 1998, 2000, 2002, 2004 and 2005). The study showed that, in 2014, purchased taxed food amounted to 467 g per capita per year, compared with the 492 g predicted by the counterfactual, with the mean volume of taxed food purchases decreasing by 5.1%. At the same time, no significant variation was found between observed and counterfactual volumes of untaxed food purchases. A difference in consumption between SES groups was detected as well.
For the low SES group, there was a decrease of 10.2%, while for the medium SES group the decrease stood at 5.8%. Interestingly, no change in consumption was found in the high SES group. However, the study pointed out that it was difficult to infer causality between the tax implementation and the consumption changes due to database limitations in terms of population representativeness (data mainly concentrated in urban areas), and the 2-year counterfactual could be considered limited in evidencing changes in consumption patterns. Results from the second study (Hernández et al., 2019) pointed in the same direction, recording a decrease of 5.3% in taxed food purchases in 2014-16 compared with 2008-12. At the same time, untaxed food consumption increased by 2.8% during the same period. The last study focused on the 8% ad valorem tax in Mexico (Taillie et al., 2017) and was based on the Nielsen Panel. It analysed how different types of households (low/high income) and consumers (with healthy/unhealthy behaviours or diets) reacted to this tax, by implementing a pre-post study design (2012, prior to tax implementation, to 2015, post-tax implementation). The study reported that the total volume of taxed products purchased declined by 4% in 2014 and by 14.2% in 2015, while untaxed purchases were higher in 2014 (+2.8%) but declined in 2015 (-4.9%). The household subgroup analysis reported that, in the post-tax period (2014-15) compared with the pre-tax period (2012-13), consumption in the low-income household group decreased by 1.3% and in the high-income household group (i.e. those purchasing a lot of both taxed and untaxed products) by 1.2%; consumers whose consumption patterns were considered 'unhealthy' (i.e. consuming more taxed products and fewer untaxed products) decreased their total consumption by 4.9%, while consumers whose consumption patterns were considered 'healthy' (i.e. consuming more untaxed products and fewer taxed products) registered no differences in the post-tax period. Overall, the study reported a greater decrease in the second year after implementation than in the first. The authors argue that this could be caused by many factors, most likely a gradual shift in consumer habits or awareness campaigns on the harmful health impact of these products. The larger gap between healthy and unhealthy households in consumption patterns might be explained by the fact that healthy consumers are already less inclined to buy harmful foods compared with those used to buying them. The study confirmed a trend of reduction in the consumption of energy-dense ultra-processed foods after tax implementation in Mexico.
Two studies (Caro et al., 2018; Nakamura et al., 2018) analysed the impact of the 'Impuesto Adicional a las Bebidas Analcohólicas' (IABA) on SSBs in Chile, which was implemented in October 2014. Specifically, in 2014, the tax rate was increased from 13% to 18% on beverages with high levels of sugar (H-SSBs), defined as beverages with >6.25 g of sugar per 100 mL. Conversely, the tax was decreased for beverages containing <6.25 g of sugar per 100 mL. Both studies showed a decrease in H-SSB consumption in the post-increase period compared with the pre-increase period. Caro et al. (2018) reported a monthly per capita decrease in H-SSB purchases of 3.4% by volume and 4% by calories, while the volume of L-SSBs increased by 10.7%, based on a post-increase period from November 2014 to December 2015 and a pre-increase period, as counterfactual, from 2013 to October 2014. Nakamura et al. (2018) also reported an H-SSB monthly purchase decrease of 21.6%, comparing the post-increase period (November 2014 to December 2015) with a pre-increase period starting in 2011. However, both studies agreed that the small increase in the SSB tax did not impact the population significantly, and that, based on the small cohort observed and the short post-tax period, it was not possible to assess the causal effect of the tax.
In addition to the research focusing on Mexico and Chile, another study (Claro et al., 2012) evaluated price and income elasticity related to SSBs in Brazil. Although not, strictly speaking, a taxation study, it simulated the effects on consumption of a 1% increase in price and a 1% increase in income and analysed SSB taxation practices in Brazil; the study reported that a 1% increase in price would cause a 0.85% reduction in SSB consumption. Additionally, changes in family income would influence SSB consumption: a 1% increase in family income would correspond to a 0.41% increase in SSB consumption. Overall, poor households in Brazil would be more than twice as likely as wealthy households to change their consumption patterns if price and income changed. The study, however, underlined that these estimates were based only on home food and beverage consumption, accounting for approximately 76% of total household expenditure, leaving almost a quarter of purchasing patterns unaccounted for.
Tobacco
Fifteen studies evaluated various aspects of tobacco use, i.e. the effect of tax implementation on consumer behaviour, the role of illicit tobacco product consumption, how price and income elasticity were shaped in each country and how elasticity could potentially change, or was found to change, following tax implementation. Mexico was included in four studies; the country underwent a tobacco-related reform process which commenced after the ratification of the Framework Convention on Tobacco Control (FCTC) in 2004 and lasted for nearly a decade. Mexico is considered to be a country with a heavy burden of tobacco-related ill-health, reporting a smoking rate of 14.5% among Mexican adults (WHO, 2015). Three of the identified studies (Saenz-de-Miera et al., 2010; Guerrero-Lopez et al., 2013; Reynales-Shigematsu et al., 2015) focused on the effect of the new tax structure (updated to 2011) on tobacco consumption levels, through country-level surveys and self-reported cigarette prices. The research mainly underlined how smoking rates declined by 30% during 2002-15, how adolescent and adult groups reduced tobacco consumption in response to the specific excise tax introduction, and how the reform process uniformly affected all sociodemographic groups.
A narrative review on Argentina (Goodchild et al., 2016) reported that tobacco affordability rose by 100% between 1997 and 2007, whilst the country experienced sharp economic growth. The study offered significant insights on how the introduction of an excise tax on tobacco would significantly reduce smoking prevalence (it was assumed that a 10% price increase would reduce the prevalence by 3%). Another study (Ferrante et al., 2007) used a tobacco policy simulation model to evaluate how policies introduced in Argentina, relating to advertising, promotion and sponsorship bans, would have an effect on consumption. The study reported that these policies, regardless of the low level of taxes on cigarettes compared with HICs, produced a relative reduction in tobacco consumption in 2004 compared with 2001.
The literature also provides evidence on the extent of 'illicit consumption' of tobacco products and the effect on overall illicit smoking prevalence. Illicit consumption refers to consumption of tobacco products not legally purchased (e.g. counterfeit cigarettes). Three studies (Iglesias, 2016; Iglesias et al., 2017; Szklo et al., 2018), all from the Brazilian context, estimated how illicit cigarette consumption changed after the excise tax implementation in 2012, using national surveys (GATS-Brazil, Vigitel). The studies examined how the excise tax implementation affected the overall proportion of illicit cigarette use among the smoking population, or illicit smoking prevalence, looking at the general population or focusing on adults aged 18 years or older (see Table 2). All studies showed a reduction in smoking prevalence; at the same time, however, an increase in illicit consumption from 16.9% in 2008 to 32.3% in 2013 was observed, which continued to grow until 2016, when the estimated proportion of illicit consumption reached 42.8%. Curti et al. (2015) analysed whether a price increase in tobacco products would encourage smokers in Uruguay to switch to cheaper, illicit tobacco products. The study reported that a 10% price increase would raise by 4.6% the probability of consuming roll-your-own cigarettes over more expensive manufactured legal cigarettes, suggesting that narrowing the price gap between different tobacco products is relevant to successfully reducing overall consumption.
The last point of the tobacco consumption analysis relates to the price and income elasticity of demand: whether the demand for tobacco products is elastic or inelastic and whether tobacco products are normal and necessary goods. Data from five countries (Argentina, Colombia, Ecuador, Mexico, Peru) were identified and, based on the evidence provided, both price and income were found to shape household or individual behaviour. Specifically, across all five countries, demand for tobacco products was found to be inelastic (price elasticity of demand between 0 and -1, i.e. <1 in absolute value, indicating low responsiveness to price changes; e.g. a 10% increase in the final price of tobacco products would result in a decrease in consumption of <10%). This could occur for various reasons, mainly related to consumer information on the new price, the level of addiction or a lack of awareness of the risks related to tobacco products. In terms of the responsiveness of the demand for tobacco products to a change in income, captured by the income elasticity of demand, the evidence from all five countries showed that with an increase in income, tobacco consumption increased less than proportionally. The reported results confirmed that tobacco products are normal goods (income elasticity of demand >0, with consumers raising consumption levels as their purchasing power increases) (Pindyck et al., 2018); they were also found to be 'necessities' (income elasticity of demand >0 but <1) (Table 3). A numerical sketch of this elasticity arithmetic is given below.
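To make this elasticity arithmetic concrete, the following Python sketch applies hypothetical, but representative, elasticity values to a baseline consumption level; none of the numbers are drawn from Table 3.

# Minimal sketch of the elasticity arithmetic (all values hypothetical):
# inelastic demand (price elasticity between 0 and -1) and a normal,
# necessary good (income elasticity between 0 and 1).
def new_quantity(q0: float, elasticity: float, pct_change: float) -> float:
    """Approximate demand after a given % change in price (or income)."""
    return q0 * (1.0 + elasticity * pct_change / 100.0)

q0 = 100.0               # baseline consumption (packs per capita per year)
price_elasticity = -0.45 # assumed
income_elasticity = 0.60 # assumed

q_price = new_quantity(q0, price_elasticity, 10.0)    # +10% price
q_income = new_quantity(q0, income_elasticity, 10.0)  # +10% income
print(f"+10% price  -> {q_price:.1f} packs ({100 * (q_price / q0 - 1):+.1f}%)")
print(f"+10% income -> {q_income:.1f} packs ({100 * (q_income / q0 - 1):+.1f}%)")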
Alcohol
Only one study (Chávez, 2016) analysed alcohol consumption, estimating the price elasticities of demand for tobacco and alcohol. The study reported a stronger price response for tobacco (-0.87) than for alcohol (-0.44). The study also assessed elasticity with respect to total expenditure, based on the quantity and quality of the goods, finding that the elasticity of alcohol consumption with respect to total expenditure was 0.41 (compared with 0.5 for tobacco consumption), meaning that the quantity of alcohol consumed was relatively unresponsive, compared with tobacco, when total expenditure increased. If total expenditure declined, consumption of high-quality cigarettes and alcohol would also decline, the latter being more sensitive to expenditure changes.
Effect on revenue generation
Tobacco

Nearly all studies on revenue generation (9 out of 10) focused on revenues from tobacco taxation. Two studies approached this topic by considering multiple Latin American countries. One of them (Goodchild et al., 2017) examined the effect of tax increases on weighted average prices, revenue generation and volume. On average, a 50% tobacco tax increase across the Latin American region would raise weighted average tobacco product prices by 28%, generate US$7 million in additional revenue (+32%) and reduce the volume consumed by 7%; this trend would hold in nearly all Latin American countries. The other study that considered the entire region (Garcés et al., 2014) did not analyse a potential implementation but, rather, compared how Central American countries adapted to the FCTC directives. The analysis showed an overall gap that needed to be filled, due to the political and economic complexity of the area, and a lack of prioritization of research on tobacco-related legislation.
Six studies analysed the revenue effect of tobacco taxation at country level. Two of these (Iglesias, 2016; Iglesias et al., 2017) studied how the implementation of two alternative taxation systems (either ad valorem, or a mix of specific and ad valorem) that manufacturers of tobacco products in Brazil could choose from impacted fiscal revenue and, as a consequence, changed levels of illicit consumption. In the Brazilian tobacco tax reform, tobacco producers could choose between two regimes: a general regime, similar to the taxation system prevailing since 1999, where the ad valorem rate would be 45% of the consumer price; and a special regime with a mix of specific and ad valorem rates, the latter with a lower ad valorem rate that could not exceed 15%. Results were uniform in both studies: although revenue collection more than doubled in absolute terms over the observed period (2006-13), sin tax introduction led to an increase in the illicit market, both in absolute terms and proportionally to the legal market (illicit daily tobacco consumption increased from 16.6% in 2008 to 31.1% in 2013). Based on that, the studies concluded that it would be possible to increase revenue from taxation despite the growth of the illicit market. A simulation study (Jimenez-Ruiz et al., 2008) estimated that, other factors being constant, a 10% price increase for tobacco products would yield an increase in revenue of 15.7% in Mexico. Another study (Rodriguez-Iglesias et al., 2017) reported that, despite changes in real income and in the final prices of cigarettes, even a 100% price increase in a low-revenue scenario would be beneficial for revenues and sustainable for the market. A study sampling 15 countries (including Brazil, Mexico and Uruguay from Latin America) analysed the range of prices paid for cigarettes (Kostova et al., 2014) and suggested that a uniform high excise tax would be more likely to reduce the range of cigarette prices than a tiered tax structure (i.e. one where cheaper cigarettes are taxed at lower rates than more expensive cigarettes) in each of the study countries, all of which were LMICs. The level of excise tax is one of the main components of tobacco prices, and the price range of tobacco products can determine purchase levels. Bardach et al. (2016) adopted a microsimulation model to assess, among other things, the costs associated with smoking for a set of cardiovascular, pulmonary and oncological diseases and found that, with a 50% price increase for tobacco products, Peru would collect 3.14 billion Peruvian soles (equivalent to US$1.05 billion) in the 10 years following the price increase. Finally, a study (James et al., 2019) examined how a tax increase in Colombia could potentially impact revenue generation. The tax increase, legislated in December 2016, tripled the specific excise taxes and increased VAT by 3%, leading to a 70% relative price increase for tobacco products. Based on a simulation, and following the introduction of the new increases, the net annual gains in tax revenue were estimated at COP$1.26 billion (approximately US$364 million), compared with the pre-tax (2016) net annual gains, over a 20-year period. A simplified sketch of this type of revenue simulation follows.
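As a minimal sketch of the mechanics behind these simulations, the snippet below combines a tax increase, an assumed pass-through to prices and a price elasticity of demand; every parameter value is hypothetical rather than drawn from the studies.

# Minimal sketch of a tobacco tax-increase revenue simulation
# (all inputs hypothetical).
def simulate(tax0, price0, volume0, tax_increase, elasticity, pass_through=1.0):
    """Return (new_price, new_volume, excise revenue change in %)."""
    d_tax = tax0 * tax_increase
    price1 = price0 + pass_through * d_tax
    volume1 = volume0 * (1.0 + elasticity * (price1 - price0) / price0)
    rev0, rev1 = tax0 * volume0, (tax0 + d_tax) * volume1
    return price1, volume1, 100.0 * (rev1 - rev0) / rev0

p1, v1, d_rev = simulate(tax0=2.0, price0=5.0, volume0=1_000_000,
                         tax_increase=0.50, elasticity=-0.45)
print(f"price {p1:.2f}, volume {v1:,.0f}, excise revenue change {d_rev:+.1f}%")

With these illustrative inputs, the price rises by 20%, volume falls by about 9% and excise revenue rises by roughly a third, mirroring the direction, though not the exact magnitudes, of the estimates reported above.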
Sugar-sweetened beverages
The only study (Sánchez-Romero et al., 2016) addressing the effect of a nationwide SSB tax simulated how a potential reduction in SSB intake following a tax increase would, beyond generating revenue, impact direct diabetes healthcare costs in Mexico in terms of potential healthcare cost savings. The simulation was based on two different scenarios, notably a 10% and a 20% reduction in SSB consumption, also taking into account potential replacement through calorie compensation. The simulation results reported that a 10% reduction in SSB consumption would save 983 million international dollars over a period of 9 years, while a 20% reduction would lead to savings of 1.9 billion international dollars.
Effect on health

Sugar-sweetened beverages
The only study included for this endpoint analysed the sin tax impact on health in Mexico (Sánchez-Romero et al., 2016). The Mexican population suffers from high rates of diabetes, excess weight and obesity, and cardiometabolic problems, all of which are strongly associated with increased SSB intake (Sánchez-Romero et al., 2016). In order to quantify how excise taxes on SSBs could lead to changes in health outcomes, Sánchez-Romero et al. (2016) simulated the effects of two scenarios, a 10% and a 20% reduction in SSB consumption, both with 39% calorie compensation (i.e. 39% of the calories no longer obtained from SSBs being replaced through other foods or drinks), and their impact after 10 years. Results in both scenarios showed a significant reduction in the number of people affected by diabetes, suffering a stroke or a heart attack, and an overall reduction in deaths, particularly in the 35-49 age group.
Tobacco
The impact of tobacco taxation on health outcomes was addressed by five studies.
A study on Peru (Bardach et al., 2016) estimated that in 2015, 31% of all deaths (~16,833 out of 54,301) in the country were associated with tobacco consumption. The study calculated that a 25% price increase in tobacco through taxation could reduce the number of deaths by 6,695 over a period of 10 years; a 50% price increase would potentially avoid 13,391 deaths, while a 100% price increase would avoid 26,782 deaths over 10 years. A study on Argentina (Ferrante et al., 2007) developed a simulation model to assess how tax increases in tobacco retail prices would impact avoidable deaths. Two tax increase scenarios were adopted: one at 75% (compared with taxation at 68% in 2007, leading to an overall 28% price increase) and one at 85% (with a final price increase of 113%). With a 75% increase, 1,899 deaths per year would be avoided over a 20-year period (2004-24), and a further 2,911 deaths would be prevented in the 2024-34 period. With an 85% increase, 7,581 deaths per year would be avoided until 2034. In the context of Mexico, despite the ratification of the FCTC, the number of deaths associated with tobacco consumption increased from 47,800 to 56,800 over the 2002-13 period (Reynales-Shigematsu et al., 2015). Through the use of the SimSmoke model, it was estimated that the policies implemented in Mexico (taxation, health warnings, smoke-free air laws, advertising restrictions) would prevent 3,000 deaths in 2013 and contribute to an overall reduction in deaths of 10,800 over the 2002-13 period. Additionally, the model predicted that the current regulation would prevent 826,000 smoking-related deaths by 2053. Smoking ban regulation and a tobacco tax increase were tested by one study (Jan et al., 2014) for association with the risk of having an acute myocardial infarction (AMI) in Panama. The smoking ban was issued in May 2008, while the tax increase was implemented in November 2009. The study set two pre-tax periods (May 2008 to April 2009 and May 2009 to November 2009) and a post-tax period (December 2009 to December 2010) as periods of observation, and was based on hospital admission data. Results showed that the relative risk of having an AMI was similar in all three periods (first period: 0.982; second period: 1.049; third period: 0.985), underlining that these two policies had no short-term effect on CVD prevalence. A micro-simulation model set in Peru estimated that a 50% price increase in tobacco products would avoid nearly 14,000 deaths, 6,210 cardiovascular events and 5,361 new cancer cases over a period of 10 years (Bardach et al., 2016). Finally, evidence from Colombia (James et al., 2019), simulating whether the 2016 average price increase in cigarettes might result in additional life-years gained (LYG), found that over a period of 20 years the impact would be 191,000 additional LYG, of which 50% would come from the two lowest income quintiles and only 28% from the highest income quintile.

Risk of bias

Table 4 shows the low, medium, high and unclear risk of bias occurring in each domain and categorizes high risk of bias into subcategories. Each subcategory has a number that is included in the risk of bias table and represents the specific type of risk of bias. Owing to the nature of the included studies, the ROBINS-I tool, specifically designed to assess risk of bias in non-randomized studies, was adopted.
Twenty-eight out of 34 studies reported at least a medium/unclear or high risk of bias in at least one of the seven dimensions we considered (confounding; selection of participants; intervention classification; deviation from intended intervention; missing data; outcome measurement; and selection of reported results). Most of the medium/high risks of bias related to outcome measurement (13 studies reported high risk, while 5 reported medium/unclear risk), followed by missing data (10 studies reported high risk, 2 medium/unclear) and deviation from intended intervention (9 studies reported high risk, 2 medium/unclear). Conversely, only 2 studies reported a risk of confounding bias (1 high risk and 1 medium/unclear), and 3 reported intervention classification bias (0 high risk and 3 medium/unclear). Results showed a relevant presence of moderate or high risk of bias specifically in the missing data and outcome measurement domains. Missing data bias was primarily due to the lack of information on geographical coverage, the production chain (manufacturer or retailer data) and economic and social indicators. Bias in outcome measurement, due to self-reported data and underestimation of the intervention and/or comparators, was often linked to a vague composition of the data. A more detailed description of the risk of bias is available in Table 4 (and more detailed information is provided in Supplementary Appendix Table SA1).
Discussion
This SLR identified and assessed the impact of sin taxes on goods that are considered harmful from a public health perspective in Latin American countries from 2000 to 2018, by analysing the evidence along three endpoints: effect on consumption, effect on revenue and health impact; it is the first review to do so for the Latin American region. Twenty-three out of 27 studies examining consumption effects confirmed that the application of a sin tax was inversely related to consumption levels. The SSB tax in Mexico and its effect on consumption was analysed by seven studies, six of which confirmed the inverse relationship between tax introduction and consumption levels. Evidence from 10 studies analysing the revenue endpoint is aligned in supporting excise tax implementation or increase in the region to generate additional revenue in a sustainable manner, providing, among others, case studies focused on Argentina, Brazil, Colombia, Mexico and Peru. Finally, five out of six studies focusing on the likely impact on health showed, through a series of simulation models, that potential sin tax implementation or increase would avert thousands of deaths, particularly from CVD and cancer, as well as lead to hundreds of thousands of additional LYG in a relatively short timeframe. Table 5 provides a summary of the sin tax effect(s) or impact(s) and the extent of the effect(s) or impact(s) reported by each study. None of the studies reported a negative effect or impact on any of the three endpoints. Results and conclusions on the association between sin tax implementation or increase and decreases in the consumption of goods harmful to public health, improved population health conditions or new sources of revenue in Latin America are aligned and compatible with findings from the literature in other geographical areas. An earlier systematic review (Wright et al., 2017) with different criteria, analysing 102 studies, focused on how consumption levels and revenue generation could be affected by public health taxes. That review did not focus on a specific geographical area, and the vast majority of the studies included came from HICs. Nevertheless, it confirmed the effectiveness of sin taxes as a tool for reducing consumption of harmful goods, while revenue collection would depend on a variety of factors, e.g. the effectiveness of taxation in changing behaviour. Another recent systematic review (Redondo et al., 2018) analysed results from 17 studies examining how taxes could shape SSB consumption. Likewise, the inverse relationship between SSB consumption and taxation levels was confirmed. Our study reinforces all these findings, particularly with regards to the decrease in consumption, and additionally expands the research rationale by investigating the potential association between sin tax introduction and likely health outcomes.
[Table 4. Risk of bias assessment: green, yellow and red dots indicate, respectively, low, moderate/unclear and high risk of bias for each study in each domain; numbered sub-categories identify the specific type of high risk of bias. Source: the authors, from the literature.]

[Table 5. Summary of sin tax effect(s) or impact(s) per included study (funding status, taxed product, country, direction of effect, key finding). Rows of this table spilled into the extracted text here; the entries recoverable with confidence are condensed below, the remainder being too fragmented to reattribute reliably:
- Curti et al. (2015), tobacco excise tax, Uruguay, neither positive nor negative: a 10% tobacco price increase would raise the probability of consuming cheaper roll-your-own cigarettes over more expensive manufactured legal cigarettes by 4.6%, suggesting that narrowing the price gaps between tobacco products is relevant for successfully reducing overall consumption.
- Maldonado et al., tobacco, Colombia, positive: tobacco demand is sensitive to price and income, with a demand price elasticity of −0.78 and an income elasticity of 0.61, supporting higher tobacco taxation to reduce consumption.
- Martinez et al., tobacco tax, Argentina, positive: long-term income elasticity of 0.43 and own-price elasticity of −0.31 (short-term: 0.25 and −0.15), providing positive evidence for tobacco tax increases.
- Nakamura et al., SSBs, Chile, mildly positive: the monthly purchased volume of higher-taxed sugary soft drinks fell significantly, by 21.6%, mainly among higher socioeconomic groups and higher pre-tax purchasers; however, the non-randomized design precludes causal inference.
- Ng et al. (2019), SSBs, Mexico, positive: SSB consumption fell after tax implementation compared with the pre-tax period, particularly among high purchasers.
- Taillie (2017), SSBs and energy-dense ultra-processed foods, Mexico, positive: the total volume of taxed products purchased declined by 4% in 2014 and 14.2% in 2015, while untaxed purchases rose by 2.8% in 2014 and declined by 4.9% in 2015.
- One study on Brazil, neither positive nor negative: illicit cigarette use increased, both overall and across two socioeconomic groups of smokers who did not stop smoking, after a new cigarette excise tax; illicit consumption therefore needs to be carefully considered as a potential consequence of excise tax increases.
- Further recoverable fragments report, for Mexico, an average 6% decrease in taxed beverage purchases one year after the SSB tax (compared with the counterfactual, across all socioeconomic groups, with increases in untaxed beverage purchases), decreases in taxed beverage purchases of 5.5% in 2014 and 9.7% in 2015 versus the pre-tax period, a decrease of 5.4 g/week per capita in taxed food purchases after the energy-dense nutrient-poor-foods tax, a total cigarette price elasticity of −0.52 (a 10% price increase yielding a 5.2% decrease in average consumption), and limited awareness of the SSB tax among adolescents, whose misperception of the tax could limit attitude change. A revenue-endpoint remark encourages the implementation or increase of excise taxes on tobacco products in Latin America, as tobacco taxes in the region are overall far lower than the levels recommended by the FCTC. Source: the authors, from the literature.]
However, our study also portrayed a very complex context in which the policy-making process faced many obstacles to achieving the ideal tax reforms required for this purpose. Latin America consists primarily of middle- and upper-middle-income countries with significant consumption of sugar, alcohol and tobacco. Despite high rates of tobacco consumption, tobacco taxation is generally underutilized compared with taxation levels in HICs (Sandoval et al., 2016). Retrospective analyses of sin tax introduction and simulations confirmed that the current level of taxation in the region could be increased considerably and that this could lead to a sustainable generation of FS. In this sense, countries in the region could effectively pursue one or more of the ways proposed in the FS analytical framework, e.g. introduce or raise taxation levels whilst also trying to improve healthcare efficiency. The extent to which sin taxes can successfully fund health care depends on many factors, including the type of sin tax, the response of consumption to price increases (captured by the price elasticity of demand), income levels, the burden of disease, the extent to which relevant taxes are hypothecated (earmarked) and, interestingly, the broader political consensus among stakeholders on choices related to public expenditure (Clements and Gupta, 2012), which, in turn, shapes the political feasibility of introducing additional taxes. Lack of consensus has been showcased as an important factor in the Argentinian context, where the lobbying power of tobacco producers has diverted the government from adopting the measures included in the FCTC, despite the wide smoking prevalence in the country and the elevated burden of disease directly or indirectly attributable to tobacco (Mejia et al., 2008). Argentina, with one of the lowest tobacco prices in the world (Rodriguez-Iglesias et al., 2018), also experienced an increase in affordability over the last decade. Brazil is the third-largest producer of tobacco in the world (Gigliotti et al., 2014) and is also facing extensive tobacco lobbying. This can cause tensions among stakeholders and influence, or even shape, taxation policy.
Many of the included studies explicitly reported that even a strong tax increase on products classed as (potentially) harmful would lead to a rise in total tax revenue; it would therefore be an efficient way to raise revenue. However, it has also emerged that in some cases, particularly as concerns tobacco and alcohol, an increase in taxation would not automatically generate a given amount of revenue, since levels of consumption might differ from expectations or since the illicit market could grow and replace the legal market, at least in part. Consequently, there are broader considerations shaping the discussion around the introduction of sin taxes, in this case law enforcement to counter the effects of illicit trade. On the other hand, the long-term health consequences of continued consumption of tobacco, alcohol or sugary drinks can be considerable. Countries like Mexico face significant health challenges related to diabetes (the highest prevalence among OECD countries; Levy et al., 2018), obesity and CVD, some of which are attributable to high consumption of SSBs over long periods of time.
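The mechanics linking the price elasticity of demand to revenue can be made explicit with a stylized calculation; all values below are hypothetical (the elasticity is merely in the range of the tobacco elasticities reported by the included studies), and the sketch deliberately ignores illicit substitution.

```python
# Stylized link between price elasticity and tax revenue.
# With inelastic demand (|e| < 1), a price increase cuts consumption
# less than proportionally, so total tax revenue still rises -- unless
# part of demand leaks into the illicit market (not modelled here).

elasticity = -0.5        # hypothetical, close to reported tobacco values
price_increase = 0.10    # 10% retail price increase via tax
baseline_units = 1_000_000
tax_per_unit = 1.00      # currency units per pack, hypothetical

quantity_change = elasticity * price_increase          # -5% consumption
new_units = baseline_units * (1 + quantity_change)
new_tax = tax_per_unit * (1 + price_increase)          # simplification:
                                                       # tax drives price 1:1

print(f"Consumption: {baseline_units:,} -> {new_units:,.0f}")
print(f"Revenue: {baseline_units * tax_per_unit:,.0f} "
      f"-> {new_units * new_tax:,.0f}")
```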
Guidelines from inter-governmental organizations on sin tax implementation have been only partially followed by Latin American countries. The WHO FCTC (2003) and the MPOWER Report (2008) state that an increase in taxes on cigarettes, promotion of advertising bans, laws on smoke-free areas, health warnings, media campaigns and policies for cessation treatment, if applied in a systematic way, would significantly reduce tobacco consumption rates in adults (Paoletti et al., 2012; Levy et al., 2018). In particular, article 6 of the FCTC reports that increasing the tobacco price through excise taxation is the single most cost-effective measure for reducing the demand for tobacco and for contributing to improved smoking cessation (WHO, 2003). These international guidelines interface with a complex regional scenario, characterized by a particularly challenging epidemiological reality, significant levels of production of alcohol, tobacco and sugar, and, in some countries, a timid political consensus over guidelines such as those of the WHO. The consumption level of harmful goods, the related burden of disease and the difficulties of tax structure reform in Mexico provide a clear example of how consumer habits, state of health and state regulation can have a significant impact on population health outcomes and on the long-term sustainability of the health system. At the same time, even with a partial reform compared with what the FCTC recommended, evidence from Mexico showed how a wider approach that included taxation and an organized set of other measures could lead to a sensible improvement in all the endpoints considered.
The role of data collection and research related to sin taxes and their impact represents another relevant point that emerged from our study. Funded studies included in our systematic literature review received grants only from government organizations [e.g. the National Institutes of Health (USA), the Brazilian Ministry of Health], international organizations (e.g. the World Bank), non-Latin American non-governmental organizations (e.g. Bloomberg Philanthropies) or academic institutions (e.g. the University of South Carolina). Of course, in many countries general research may be conducted by manufacturing industries (Chávez, 2016; Iglesias, 2016) and, as such, could be a source of bias as it reflects corporate interests. Academic research leveraging country-level data appears to be limited, as the only relevant sources are national surveys, which often exclude rural areas or rely on self-reporting methods. This creates a high risk of bias for researchers and policymakers and has already been highlighted with regards to beverage industry statistics, which can be misleading and 'fail to account for population or economic growth' (Colchero et al., 2017). The same was observed for tobacco, where industry lobbying activities can strongly influence policy-making. Studies discuss how the tobacco industry contributed to the non-implementation of tobacco taxes in many parts of the world, despite the robust scientific evidence supporting their implementation (Jha and Chaloupka, 2000). The case of Argentina may represent the most telling example in Latin America of a relationship between the state and the tobacco industry that is weighted in favour of manufacturers' aims.
For these reasons, the implementation of sin taxes varies across settings based on the specific targeting of goods, the effective amount of tax, the choice between 'per unit' and 'ad valorem' taxes, and the use of any potential FS created, beyond the underlying broader rationale that justifies sin tax implementation. Research has emphasized that the specific country framework with regards to overall health state, socioeconomic composition, consumer habits and policy-making processes and orientations determines the most effective pathway for a successful sin tax implementation in the case of SSBs (Brownell et al., 2009; Claro et al., 2012). That said, a specific definition of which products are targeted is necessary to avoid side-effects in consumption, such as provoking a shift towards other similar harmful goods (i.e. goods of dubious quality that might not be captured by the tax reform, such as low-quality foods or beverages).
The definition of the appropriate amount of tax is another controversial decision. Research on cardiovascular risk in young adults (Duffey et al., 2010) concluded that only high rates of taxation would produce a significant change in consumption; this is consistent with the recommendations made by many of the studies included in this review.
The decision between a 'per unit' and an 'ad valorem' tax is usually at the forefront of the debate. A per-unit sin tax is easier and more flexible to implement from a government regulation perspective than an ad valorem tax; generally, LMICs are encouraged to implement per-unit taxes because of limitations in law enforcement or administrative capacity. The disadvantages of this type of tax relate to the frequency and timing of the revisions needed to ensure the tax remains effective. In fact, manufacturers can try to shift the burden of the tax onto some segments of the production process and, through that, reduce the price increase of the product to the consumer. At the same time, consumers can shift their consumption to lower-priced goods or to other similar products, keeping in mind, however, that for some products (e.g. tobacco and alcohol) substitution may be difficult. An ad valorem tax has its own advantages: it adjusts easily to inflation, and it is more visible and directly payable to the authorities.
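A minimal sketch of the mechanical difference between the two designs, under the simplifying assumption of full pass-through to the consumer and with hypothetical prices, is given below; it illustrates in particular how inflation erodes the share of a per-unit tax in the final price unless the tax is revised.

```python
# Per-unit vs ad valorem tax on a pack, assuming full pass-through.
# A per-unit tax erodes with inflation unless revised; an ad valorem
# tax automatically scales with the (inflated) producer price.

producer_price = 2.00      # hypothetical pre-tax price per pack
per_unit_tax = 1.00        # fixed amount per pack
ad_valorem_rate = 0.50     # 50% of producer price
inflation = 0.30           # cumulative inflation over several years

for label, p in (("today", producer_price),
                 ("after inflation", producer_price * (1 + inflation))):
    per_unit_price = p + per_unit_tax
    ad_valorem_price = p * (1 + ad_valorem_rate)
    print(f"{label}: per-unit -> {per_unit_price:.2f} "
          f"(tax share {per_unit_tax / per_unit_price:.0%}), "
          f"ad valorem -> {ad_valorem_price:.2f} "
          f"(tax share {(p * ad_valorem_rate) / ad_valorem_price:.0%})")
```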
Notwithstanding the discussion on the relative merits of per-unit and ad valorem taxes, the predicted revenue from a tax is uncertain and requires careful monitoring. An analysis of European Union countries demonstrated that per-unit taxes yield more than ad valorem taxes on retail cigarette prices (Delipalla and O'Donnell, 1998; Goodchild et al., 2017). However, the overall preference for one tax over the other depends on specific country features and the specific objectives of policy-makers. Other studies (Wright et al., 2017; Whitehead et al., 2018) suggest that specific taxes would be more effective in reducing the consumption of certain goods, since an ad valorem tax could potentially shift consumption to cheaper and lower-quality goods, or induce manufacturers to reduce prices in order to maintain consumption levels. A final, broader point of discussion relates to the implications of sin tax introduction for choice and to its regressive nature. Two studies (Brownell et al., 2009; Claro et al., 2012) claim that the benefits of sin taxes, especially on health outcomes, outweigh their costs in terms of restricted choice. With regards to their regressive nature, products subject to sin taxes, such as cigarettes, tend to cause greater harm to lower socioeconomic groups, and although the latter are impacted financially more heavily than higher socioeconomic groups, the incentive for behavioural change is also greater.
Study limitations
The results of this study reflect the economic, political and epidemiological features of Latin American countries; therefore, they may not be generalizable to other geographical regions. Furthermore, the analysis compared countries with significant differences in regulation, epidemiological frameworks and economic conditions, while many studies analysed policy changes in just a few countries. Additionally, most studies analysed sin tax impact over a relatively short space of time, and long-term evidence is lacking. Despite all the above, the broader results of this study are consistent with most of the recent academic literature and underline many potential benefits of sin tax implementation in middle-income countries.
Conclusion
This study has confirmed the role of sin taxes in the Latin American context as a valid policy option for reducing the consumption of harmful goods, generating additional revenue and potentially improving health outcomes. The majority of studies reported that the implementation of sin taxes in Latin America resulted in reductions in harmful goods consumption, increases in revenue generation and a positive, albeit simulated, effect on health outcomes. The results of the risk of bias assessment and the analysis of the included studies suggest that future work on this topic requires more accurate data collection processes that go beyond weak study designs susceptible to a high risk of bias. This would require an increase in efforts to promote research and to address stakeholder interests. Apart from improving data collection, a broader general effort to produce research on this topic is necessary; Latin American countries are gradually investing more in health and are aware of the costs associated with tobacco, alcohol and sugary beverages, but are still far from reaching HIC levels in terms of investment in health and tax interventions to mitigate the negative effects of these products.
Supplementary data
Supplementary data are available at Health Policy and Planning online.
Conflict of interest statement. None declared.
Ethical approval. No ethical approval was required for this study. | 2021-04-23T06:17:04.766Z | 2021-04-22T00:00:00.000 | {
"year": 2021,
"sha1": "518dd44da3793e2a69c8e484b7c39699a9ebc1b2",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/heapol/article-pdf/36/5/790/38463107/czaa168.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "891be9a926d098f385c9f08aac5f6933e1361b55",
"s2fieldsofstudy": [
"Medicine",
"Economics"
],
"extfieldsofstudy": [
"Medicine",
"Business"
]
} |
213518558 | pes2o/s2orc | v3-fos-license | Occupying the intersection: RuPaul’s celebration of meritocracy
RuPaul's Drag Race is an intersectional show on multiple levels: it broaches new forms of representation as well as new televisual culture. To us, the international mainstream success of the programme is a celebration of diversity and a clarion call for a new world that is not predominantly White or heteronormative. Drag Race makes you dream of a different kind of television landscape that seems to be on the verge of becoming a reality. In this intervention we want to expand on that: how is this show—a monument to trans and queer representation—twined with how television is changing? And, seemingly in contrast to this: might its new-found mainstream success also obscure how its politics of representation can be problematic?
This intervention comes in two parts. The first part uses Drag Race to identify how television has changed not just technologically, as a platform and in its business models, but also ideologically. The second part tackles the problematic sides of RuPaul, RuPaul's Drag Race and drag. Drag does not make everybody happy, nor are Ru's strong neoliberal views about making your way and owning your future entirely comfortable. A third issue is how RuPaul and the show handle diversity and altercations that involve body politics (Strings and Bui, 2014). While a social activist in her own way (Raymond, 1994), Ru has taken a long time to speak out on behalf of trans people, and when she finally did, it was not what people had expected her to say.
The trans moment of television
The art of drag satirises gender; RuPaul's Drag Race does television viewers the camp service of satirising television as a medium. For one thing, the show disproves lingering connotations of television as a medium that makes viewers passive. The sheer volume of viewers, contestants and additional television content made by its host illustrates how television has become a cross-media mode of storytelling in which professionals and television lovers challenge one another across different screens and platforms. This helps television transition away from its former 'feminine' inscription as passive (see Newman and Levine, 2012: 20). It is no longer a medium ruled by paternalist public service broadcasters, nor can it be identified with the non-offensive content of commercial stations seeking to maximise audience figures or with its key women-addressed genre of the soap opera (see Modleski, 1984; Newcomb, 1974).
Drag Race occupies different 'spaces', as Annette Hill (2017) puts it, one of which is a real-life political space. The election of Donald Trump has spurred RuPaul to give interviews to news media and speak at gay pride rallies on LGBTQI+ representation and rights in current-day America. Contestants are social activists too: Bob the Drag Queen is a Black Lives Matter advocate, Carmen Carrera and Gia Gunn are trans activists and Nina West is an LGBTQI+ youth charity founder. Occasionally, discussion of politics enters the television programme, both in serious forms, for example in the discussion of the nightclub shooting in Orlando in 2016 in the third episode of season 9, and in comedic forms, as in 'Trump the Rusical', the main challenge in the fourth episode of season 11.
Televisual space is Drag Race's most prominent and complex space: this is real-life entertainment cast and produced for television, which spoofs but also is reality television. It is a televisual space that Ru makes good use of as a celebrity. As Misha Kavka argues when writing about industry convergence shows, this is television crossed with consumer and leisure industries (here, drag as performance art) (Kavka, 2011: 77). It is 'celevision': the multiplication of screens allows television a multi- and cross-media presence, linked by the figure of the celebrity, providing individuals with seemingly effortless social mobility and providing us, the viewers, with forms of deep affective intimacy (Kavka, 2016: 297). No wonder that, for Ru to be a television celebrity success, she needs to spark controversy and be both lovable and hateable. This is exactly where television is able to produce economic value (Kavka, 2011: 87) and where, incidentally, one's heart may be broken by one's hero/heroine.
Drag Race shows how television has become 'post-television': it has moved towards a more personal experience across platforms and can no longer be identified as foremost a family medium. It no longer needs to practice suffocating heteronormativity. It encourages active viewerdom of many different guises and offers layered and wideranging affective links between media and ideology.
Drag and gender, race and reality TV: Occupying the intersection
As much as we love drag culture and commercial television, both have their dark sides. The art of drag has long been criticised from a feminist perspective as a sexist representation of traditional femininity by men with masculine privilege (Taylor and Rupp, 2004: 115). According to Rusty Barrett, 'Feminist scholars have traditionally argued that drag is inherently a misogynistic act, primarily because it represents a mockery of women or, at the very least, a highly stereotyped image of femininity and womanhood' (2017: 38). While neither of us agrees with this reading of drag culture (we both see drag as a challenge to hegemonic gender ideals), it has to be said that definitions of femininity in RuPaul's Drag Race are surprisingly rigid.
Drag comes in many guises. Common distinctions are between high and low camp, and between camp and fish queens. High camp is pure imitation, while low camp allows performers their own style and creativity. Ru is a low-camp queen of the glamour camp kind (Zervignon, 2002). The provocatively controversial term 'fish' is part of a slightly different distinction, where fish denotes real-life likeness (the queen representing a convincing woman) and camp the artier, politically provoking forms of drag, in which queens forgo the perfect female illusion in order to fit their act.
Drag Race has historically not encouraged camp drag. Competing queens who do not follow the hegemonic 'fish' ideal of drag have been admonished since the start of the show by Ru and the judges: this is not the kind of femininity we are looking for in Drag Race. Cisgender 'correct' representations of femininity are also 'a thing' in the workroom and 'Untucked' discussions among candidates. Fights between 'fish' and 'camp' queens have been a staple of high drama in the show since it started.
Similarly, highly problematic policing of the female body is also part and parcel of how transgender candidates in RuPaul's Drag Race have been treated over the years. Ru has likened transgender drag queens to athletes who use doping during sports events. Allowing them to compete would '[change] the concept of what [Drag Race] is doing' (Aitkenhead, 2018), implying that trans contestants would have an unfair advantage. Until recently, candidates were not allowed to be in transition. This meant candidates had to stop their transitioning to be able to compete, which, of course, did allow for their emotional coming out as trans women on the show. While this was amazing reality television, it was also a painful disciplining of the bodies and gender expression of trans candidates. Drag is meant to produce strong gender identities as performance, no matter what body is underneath the outfit. Recent seasons have had competitors who identify as trans women, such as Peppermint and Gia Gunn, as Ru seems to have altered the rules after the backlash against her controversial statements. What exactly the rules are now, we could not say.
It is interesting that Ru argued that trans women competing would be making use of an unfair advantage. Throughout her career RuPaul has preached meritocratic ideals and making good use of your assets (Charles, 1995). Ru does not believe in complaining about one's position or lack of means, nor has she ever believed that intersectional identities speak of oppression and structural inequality. If Ru was able to overcome issues of race and sexuality and find ways to satisfy mainstream audiences (read: White straight), everybody else can too. When Chi Chi DeVayne in season eight dared complain that her lack of resources excluded her from buying the expensive designer gowns she felt were expected by the judges, she was told that she simply needed 'to make it work'.
Ru's denial of structural inequality and her meritocratic convictions prohibit her from thinking like the intersectional hero she is for us. (Indeed, research into meritocratic convictions illustrates that those who hold them simply have no truck with intersectional understanding; see Cech and Blair-Loy, 2010; Crenshaw, 1991; Littler, 2017.) While Drag Race is the ultimate case for 'post-television' as a hopeful and exciting multiple transition, it is at the same time limited by the ambitions that created it. Discussing early seasons of Drag Race, Sabrina Strings and Long T. Bui (2014) point out that lighter-skinned queens were far more likely to win. In addition, Drag Race has encouraged queens to play on racial stereotypes, following the adage that this makes for strong (reality) television. What Skeggs and Wood (2012: 136) have called the pedagogical invitation of reality television (of which they are critical for its disciplining of lower-class culture) extends, in an unfortunate camp reversal, to what is ultimately racist stereotyping.
Likewise, in the early seasons Ru's allegiance to commercial television was a great joke. Drag Race looked like a parody of capitalist entertainment. A small group of sponsors would mostly make products available (vodka, make-up, a vacation): prizes you could not be sure anyone would really want to win. The camp tone and feel of the show allowed the prizes and commercials for the sponsors to be hilariously funny in their own right. The more successful the show has become, the less easy it is to read this as parody, which, in a sense, compromises how we watch it: the show has become what it promised to satirise for so long.
Conclusion
Ultimately, our issue with RuPaul and RuPaul's Drag Race is that we have decided to champion someone who polices femininity, has condoned forms of racist logic and has said that she does not believe in structural inequality. While for us RuPaul and RuPaul's Drag Race are intersectional politics come alive, we may be engaging in a form of self-congratulatory leftist politics that tries to appropriate minority culture. Even worse, we might be seen as denying the show and its creators their definitions of themselves in a flagrantly patronising neocolonialist move. While we might want to see Ru and the show in intersectional terms, we acknowledge that the purpose of the show is not intersectional at all: it is commercial television and it celebrates neo-liberal meritocratic ideology.
So there we are: we enjoy the media products created by the RuPaul conglomerate in a most unironic manner and are critical of RuPaul's Drag Race and our own viewer motives and judgements. It helps somewhat to recognise that even progressive media texts have their problematic aspects. It is a bit like Jade's tucking failure in season one: 'Interesting to see such a beautiful woman with such a big dick' (Edgar, 2011: 133). When we venture out of our self-congratulatory bubble (look at us being 'woke' viewers), we can see both beauty and awkwardness in the drag that commercial television likes. The RuPaul we know will not care either way: as long as our watching is paying her bills, she ain't paying these bitches no mind.
Authors' Note
Joke Hermes is also affiliated with Inholland University, Netherlands.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship and/or publication of this article. | 2019-11-28T12:36:21.691Z | 2019-11-22T00:00:00.000 | {
"year": 2019,
"sha1": "55c6d044086a693acc4bc97187cb16013f9e74ed",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1749602019875864",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "06a4dc2e9e523a7aafe31e92e799702c81c5e795",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
243893887 | pes2o/s2orc | v3-fos-license | Genetic Diversity of Hepatitis E Virus Type 3 in Switzerland—From Stable to Table
Simple Summary
The main hosts of hepatitis E virus (HEV) genotype 3 are porcine species. Transmission of the virus to humans, for example via undercooked meat, may cause acute or chronic hepatitis. To determine sources and routes of infection, comparing the viruses present in humans to the ones present in the main hosts is a helpful tool. However, it requires knowledge of the genetic diversity of the circulating viruses. Therefore, we tested Swiss pigs and wild boars for HEV and determined the virus subtype and part of its genome. In addition, we determined the HEV subtype present in 11 positive meat products. One pig liver from the slaughterhouses (0.3%) and seven livers from a carcass collection (13%), as well as seven wild boar livers (5.8%), were found to be HEV positive. The same virus subtypes were found in Swiss pigs, wild boars and meat products. Most of the viruses belonged to a Swiss-specific cluster within the subtype 3h. In addition, one pig liver and one wild boar liver were found positive for 3l, and two meat products from Germany for 3c. Our data indicate that Switzerland has its "own" HEV viruses that circulate independently from the rest of Europe.
Abstract
Hepatitis E caused by hepatitis E viruses of genotype 3 (HEV-3) is a major health concern in industrialized countries and, due to its zoonotic character, requires a "One Health" approach to unravel routes and sources of transmission. Knowing the viral diversity present in reservoir hosts, i.e., pigs but also wild boars, is an important prerequisite for molecular epidemiology. The aim of this study was to gain primary information on the diversity of HEV-3 subtypes present along the food chain in Switzerland, as well as the diversity within these subtypes. To this end, samples of domestic pigs from slaughterhouses and carcass collection points, as well as from hunted wild boars, were tested for HEV RNA and antibodies. HEV-positive meat products were provided by food testing labs. The HEV subtypes were determined using Sanger and next generation sequencing. The genetic analyses confirmed the predominance of a Swiss-specific cluster within subtype HEV-3h in pigs, meat products and wild boars. This cluster, which may result from local virus evolution due to the isolated Swiss pig industry, supports fast differentiation of domestic and imported infections with HEV.
Introduction
The family Hepeviridae includes two genera: the genus Piscihepevirus, which contains the single species Piscihepevirus A, also known as cutthroat trout virus of salmonids, and the genus Orthohepevirus, whose members infect mammals and birds and form the four species Orthohepevirus A, B, C and D (https://talk.ictvonline.org/taxonomy, accessed on 5 October 2021). The most important species for human health is Orthohepevirus A, which is divided into eight genotypes (HEV-1 to 8) [1][2][3][4]. The single-stranded, positive-sense RNA genome is around 7.2 kb long and contains three open reading frames (ORFs), of which the first and longest encodes the non-structural proteins, the second the capsid protein, and the third a small protein that seems to be multifunctional and is, for example, associated with the release of the quasi-enveloped form of the virus [2,5].
In Europe, HEV-3 is considered the most important cause of locally acquired hepatitis [6,7]. In contrast to HEV-1 and 2, which are restricted to humans, HEV-3 is transmitted zoonotically. The main reservoirs are porcine species, with the domestic pig playing the most important role, but wild boars are also known to represent reservoir hosts [8,9]. Other animals such as deer and rabbits may also be sources of infection [10,11]. Several studies have indicated that pigs become infected early in life, once maternal antibodies have sufficiently decreased. Three-month-old pigs seem to be the main virus shedders [12]. Virus is shed primarily via faeces for up to seven weeks, while viremia is usually more short-lived (1-3 weeks); longer persistence in the liver is, however, observed. Meat products containing liver are therefore considered a higher risk than muscle meat [13]. While naturally or experimentally infected pigs show no clinical signs and only histological changes in the liver [14][15][16], consumption of raw or undercooked meat, particularly liver, or direct contact with infected animals can lead to disease in humans [17,18]. The majority of infections have a subclinical or mild course. However, while no clear association with specific HEV-3 subtypes has been observed, certain risk factors such as age over 50 and male gender, as well as immunosuppression, increase the risk of acute or, even more feared, chronic hepatitis, which may ultimately end in cirrhosis [19]. In addition, extra-hepatic manifestations such as neuralgic amyotrophy are frequently observed [20].
HEV has been reported in humans in several European countries since 2018, including Switzerland [21]. The seroprevalence observed in Swiss blood donors (20.4%) is comparable to that in other European countries, as are the percentages of antibody-positive pigs (58.1%) and wild boars (12.5%) in studies from 2018 (humans) and 2014 (pigs and wild boars) [22,23]. Interestingly, in 2017 the full genome of HEV from a Swiss patient was determined and found to be of genotype 3, but with less than 88% nucleotide identity to published strains. It was therefore hypothesized to represent a potential "new" HEV-3 subtype [24]. In the same year, we found closely related sequences in a human patient, the associated meat product and a pig liver [25,26]. A recent publication has shown that this specific type of virus seems to be the most prevalent in Swiss patients [27]. However, the occurrence and diversity of HEV strains present in different potential sources of infection along the food chain in Switzerland is still unknown.
Molecular epidemiology has become an indispensable tool for determining routes and sources of infection in human and animal viral diseases. The determination and comparison of viral variants, sero- or genotypes, has proven vital, not only in the current coronavirus pandemic, but also, for example, in linking the new introduction of serotype 4 to emerging outbreaks of dengue virus in Indonesia, or in recognizing frozen berries as the source of a multistate outbreak of hepatitis A in Europe [28][29][30]. Furthermore, substantial sequence databases have helped trace chains of (re-)infection with bovine viral diarrhea virus and supported the eradication of this epizootic cattle pathogen in Switzerland and Scotland [31,32]. Along this line, the European Centre for Disease Prevention and Control (ECDC) has initiated the HEVnet network in order to share molecular and epidemiological data on HEV globally and to learn more about circulating HEV strains in Europe and the epidemiology of the virus [33]. However, the value and usefulness of such a platform, be it on the international or national level, depends on knowledge of the viral genetic diversity and on epidemiological data.
The aims of this study were therefore (i) to gain further evidence for the potential presence of a Swiss-specific HEV subtype and its prevalence in different hosts; (ii) to assess which other HEV subtypes are present; and (iii) to determine the extent of diversity within these subtypes. Therefore, we followed the food chain and sampled pigs of different ages, wild boars, and various meat products sold in Switzerland. We confirmed previous seroprevalence numbers for pigs and wild boars and found that the majority of viral genome sequences belonged to a genetic cluster of exclusively Swiss sequences within subtype 3h, formerly known as 3s(p). In contrast, common European subtypes such as 3c were only detected in imported meat products, indicating that HEV in reservoir species in Switzerland may circulate independently from the rest of Europe.
Livers from Pigs at Slaughter
Liver samples from pigs at the timepoint of slaughter, which is normally at around six months of age in Switzerland, were collected between May and June 2018 in the three major pig slaughterhouses in Switzerland. Around 50% of the total yearly number of slaughtered pigs are processed in the slaughterhouses in Zurich, Courtepin and Basel (personal communication R.S., March 2019) (a map indicating the location of the slaughterhouses is provided as Supplementary Figure S1). In Zurich, 74 animals were sampled, in Basel 58 and in Courtepin 60. Additionally, 105 confiscated liver samples were collected by slaughterhouse staff members in Courtepin for this work. Confiscated livers are assigned as not being fit for human consumption, e.g., due to macroscopic lesions such as parasitic infections of the liver. The livers were individually packed in plastic bags, sealed, and stored at 4 °C for a maximum of one day until being transported to our institute, where they were stored at −20 °C until further processing. Collection was performed over several weeks in order to sample multiple slaughtering batches.
Livers, Faeces, and Diaphragms from Pigs from Carcass Collection Points
Since pigs shed HEV mainly at around three to four months of age, pigs younger than slaughtering age are a more likely source of HEV. In addition, older animals may represent a reservoir for HEV. In Switzerland, animal carcasses below a weight of 200 kg must be disposed of at communal carcass collection points (CCP). In regions with a high density of pig farms, dead pigs of all ages, but most frequently young animals, can be found in these containments. Therefore, 54 animals of variable ages were sampled at two CCPs in the canton of Lucerne, namely in Hochdorf (n = 24) and Knutwil (n = 30), in March and August 2018 (Figure S1). The canton of Lucerne is the main pig breeding and fattening area in Switzerland [23]. As sample material, a piece of the liver, the diaphragm and faeces from the colon were collected (in this order) on site from each animal individually. Different knives were used for each sample material; they were disinfected for at least 15 min in 70% ethanol and rinsed with hot water between animals. The weight and age of the dead pigs were estimated. After transfer to our laboratory, the samples were labelled and stored at −20 °C before further processing.
Wild Boar Samples
From December 2017 to March 2019, a total of 75 liver samples were collected by hunting societies from the canton of Schaffhausen (SH) in 14 different hunting grounds. Another 46 liver samples originated from the canton of Ticino (TI) (Figure S1). These animals were shot in September 2018. The samples were individually packed in plastic bags, sealed, stored at 4 °C for a maximum of two days and then transported to our laboratory, where they were stored at −20 °C until usage.
In Switzerland, all wild boars meant for human consumption need to be tested for the zoonotic parasite Trichinella spiralis; therefore, muscle tissue samples, normally from the diaphragm, are sent by the hunters to one of the official testing laboratories. What is left after testing is stored at −20 °C for a couple of weeks and then discarded. We used these archived samples, i.e., the meat juice available after defrosting the diaphragm samples, to test for HEV antibodies. The samples from the cantons of Zurich (ZH) and Aargau (AG) were provided by the Institute of Parasitology of the University of Zurich, the samples from Schaffhausen (SH) by the Cantonal Veterinary Department Schaffhausen, and the samples from Basel-Landschaft (BL) and Solothurn (SO) by the Laboratory of Veterinary Diagnostics in Chur. In total, 141 diaphragm samples from SH, 87 from AG, 64 from ZH, 92 from BL and six from SO were received between November 2018 and June 2019. For some of the diaphragm samples (n = 55), no information regarding their origin was available.
Meat Products
The diagnostic laboratory of the Federal Food Safety and Veterinary Office (FSVO) in Berne provided a total of 21 food samples that were tested positive for HEV between March 2016 and November 2018 [34]. From the Cantonal Laboratory of the canton Ticino, six HEV-positive mortadella di fegato sausages initially tested in 2016 and 2017 were supplied to our lab for genetic analysis of the virus. A list of all food samples included in this study is provided in Supplementary Table S1.
RNA Extraction
The QIAgen Viral RNA Mini Kit (Qiagen, Hombrechtikon, Switzerland) was used to extract the RNA from liver, faeces, diaphragm, and chunky meat products such as coarse sausages. For highly processed meat products such as liver patés, Trizol LS (ThermoFisher, Reinach, Switzerland) was used for RNA extraction.
The viral RNA mini kit was used according to the manufacturer's instructions, with 140 µL input volume and 50 µL nuclease-free water for RNA elution. The following sample-type-specific preparation methods were used. From the frozen liver and diaphragm tissue, 30 mg were weighed into a 2 mL safe-lock Eppendorf tube. In the next step, 200 µL of nuclease-free water and a 5 mm steel bead (Qiagen, Switzerland) were added to the tube. Samples were then homogenized for 30 s at 25 Hz in the Tissue Lyzer II (Qiagen, Hombrechtikon, Switzerland). After a three-minute centrifugation step at 16,000× g, the supernatant was used in the QIAgen Viral RNA Mini Kit. For the RNA extraction from the faecal samples, 100 mg of faeces were weighed into a 2 mL Eppendorf tube and the 10-fold volume of phosphate-buffered saline (PBS) was added. Samples were then homogenized for 30 s at 25 Hz in the Tissue Lyzer II (Qiagen, Hombrechtikon, Switzerland). After a 5 min centrifugation step at 16,000× g, the sample was ready to be extracted. Salami-type sausages such as mortadelle were dissected manually to separate fat from meat/liver chunks. If these chunks were quite fresh, they were treated like raw tissue samples. If they were dry and hard (e.g., in salsiz sausages), 500 mg of meat was soaked in 500 µL of water in a 2 mL tube and pre-homogenized using the Tissue Lyzer without a bead for 1 min at 25 Hz. In the next step, a 5 mm steel bead was added, and the soaked material was homogenized for 1 min at 25 Hz. Of the resulting homogenate, 100 mg was transferred to a new tube and, after addition of 200 µL of water and another steel bead, homogenized again in the Tissue Lyzer for 1 min at 25 Hz. After the subsequent 3 min centrifugation step at 16,000× g, the supernatant was used in the QIAgen Viral RNA Mini Kit.
From the highly processed and fat-rich meat products such as liver patés, 200 mg were mixed with 700 µL of PBS in a 2 mL tube and a 5 mm steel bead was added. Homogenization was performed by running the Tissue Lyzer for 1 min at 25 Hz. After the samples were centrifuged for 3 min at 16,000× g, 250 µL of the supernatant were mixed with 750 µL Trizol LS and the RNA pelleted according to the manufacturer's recommendation. The air-dried pellets were resuspended in 100 µL nuclease-free water (ThermoFisher, Reinach, Switzerland).
Real-Time RT-PCR
Real-time reverse-transcription PCR (rt RT-PCR) was performed on a QuantStudio 7 or QuantStudio 3 Real-Time PCR System (Applied Biosystems, Waltham, MA, USA) following the protocol described by Garson et al. [35], which represents an adaptation of the PCR primers and probes originally described by Jothikumar et al. [36]. For the PCR reaction, the QuantiTect Probe RT-PCR Kit (Qiagen, Hombrechtikon, Switzerland) was used as recommended by the manufacturer. To control for successful RNA extraction, porcine 12S rRNA was measured by real-time RT-PCR using the same cycling conditions and reagents as for HEV, with the forward and reverse primers (p12S_F 5′-CCACCTAGAGGAGCCTGTTCTATAA-3′; p12S_R 5′-GGCGGTATATAGGCTGAATTGG-3′) at 0.4 µM and the probe (p12S_P 5′-FAM-CGATAAACCCCGATAGACCTTACCAACCC-TAMRA-3′) at 0.2 µM. The RNA was added in a 1:100 dilution to the reaction (1 µL in a 20 µL final reaction volume).
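For illustration, the stated final concentrations translate into pipetting volumes via C1·V1 = C2·V2; the sketch below assumes a 10 µM working stock, which is not specified in the text and should be adjusted to the stocks actually used.

```python
# Volumes needed to reach the stated final primer/probe concentrations
# in a 20 uL real-time RT-PCR reaction. The 10 uM stock concentration
# is an assumption for illustration only.

reaction_volume_ul = 20.0
stock_uM = 10.0  # assumed working-stock concentration

for name, final_uM in (("p12S_F", 0.4), ("p12S_R", 0.4), ("p12S_P", 0.2)):
    volume_ul = final_uM * reaction_volume_ul / stock_uM  # C1*V1 = C2*V2
    print(f"{name}: {volume_ul:.2f} uL of {stock_uM:.0f} uM stock "
          f"for {final_uM} uM final in {reaction_volume_ul:.0f} uL")
```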
ORF2 Typing Nested RT-PCR and Sanger Sequencing
To determine the HEV genotype and subtype of positive samples, a broadly reactive nested typing RT-PCR was performed [37]. The first step in this protocol is to convert the viral RNA into cDNA. This was carried out using the RevertAid H-minus First Strand cDNA Synthesis Kit (Thermo Fisher Scientific, Reinach, Switzerland). The resulting cDNA was directly used in the first, outer PCR reaction, followed by the second, inner PCR reaction. In both steps, the HotStarTaq DNA Polymerase (Qiagen, Hombrechtikon, Switzerland) was used following the recommendations of the manufacturer. Of the second PCR product, 5 µL were mixed with 1 µL loading dye and run on a 1.5% agarose gel. If there was a clear, single band, the rest of the PCR product (45 µL) was purified using the QIAquick PCR Purification Kit (Qiagen, Hombrechtikon, Switzerland). The kit was used according to the manufacturer's instructions, and DNA was eluted in 30 µL of the elution buffer included in the kit. If several bands were visible, the correct one was cut from the gel using a sterile scalpel blade, and DNA was extracted using the QIAquick Gel Extraction Kit (Qiagen, Hombrechtikon, Switzerland). The DNA concentration of the sample was determined with the NanoDrop system (Thermo Fisher Scientific, Reinach, Switzerland). The forward and reverse sequencing primers were used for bi-directional sequencing (Microsynth GmbH, Balgach, Switzerland). After removing the primers, the sequence was 493 nucleotides long and covered part of ORF2 of the HEV genome (positions 5962 to 6454 of reference genome NC_001434). To determine the HEV geno- and subtype, the sequences were submitted to the online HEVnet typing tool (https://www.rivm.nl/mpf/typingtool/hev/, accessed on 10 August 2021) and phylogenetically analyzed. All sequences were submitted to the HEVnet sequence repository as well as to GenBank (accession numbers MZ923532-MZ923556).
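A quick coordinate check of the typed fragment is shown below (a trivial sketch; the genome string is assumed to be loaded elsewhere, e.g., parsed from a FASTA file).

```python
# Sanity check of the sequenced ORF2 fragment coordinates on the
# HEV reference genome NC_001434 (positions are 1-based, inclusive).

start, end = 5962, 6454
length = end - start + 1
assert length == 493, length
print(f"ORF2 typing fragment: positions {start}-{end} -> {length} nt")

# With the reference genome available as a string `genome`, the
# fragment would be extracted as:
# fragment = genome[start - 1:end]   # convert to 0-based slicing
```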
Next Generation Sequencing
Samples that were successfully subtyped by the ORF2 typing nested RT-PCR were subsequently subjected to next generation sequencing (NGS) to gain more information on the genome, ideally the full-genome sequence. Sample preparation and sequencing were performed following a method previously developed in our laboratory [38]. In summary, an enrichment for encapsidated viral nucleic acids was performed, followed by sequence-independent single-primer amplification and paired-end short-read sequencing on an Illumina NextSeq 500 machine with 2 × 150 bp read length for the majority of the samples. An Illumina NovaSeq machine with 1 × 100 bp read length was used for one wild boar liver (WB74) and two meat products (BLV01185 and BLV01189). Quality control and screening against a database containing 61,620 complete viral genomes were performed as previously described [38]. Since the HEV genotype and subtype were already known from the ORF2 typing PCR-based sequences, the NGS reads were subsequently aligned to HEV-3 subtype-specific databases containing all officially assigned full reference genomes of the respective HEV-3 subtypes [39], using the SeqMan NGen software from the DNAstar Lasergene Genomic suite (DNASTAR, Madison, WI, USA). The SeqManPro software was used to visualize the aligned reads and to generate and download the contigs. The contigs were blasted (https://blast.ncbi.nlm.nih.gov/Blast.cgi, accessed on 20 October 2021) to find the most closely related publicly available HEV strain, and the reads were re-aligned and contigs generated against this reference alone, again using the SeqManPro software. Finally, the complete and almost complete (>95%) contigs were screened for ORFs using the Clone Manager 9 Professional Edition software (Sci Ed Software LLC, Westminster, CO, USA).
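Schematically, the reference-selection step boils down to keeping the subtype database that recruits the most reads; the sketch below shows only this selection logic, with hypothetical read counts rather than data from this study.

```python
# Schematic of the subtype confirmation step: after aligning NGS reads
# against subtype-specific reference databases, keep the subtype whose
# references recruit the most reads. Counts below are hypothetical.

mapped_reads = {"3h": 182_340, "3l": 1_210, "3c": 640}

best_subtype = max(mapped_reads, key=mapped_reads.get)
total = sum(mapped_reads.values())
print(f"Best-supported subtype: {best_subtype} "
      f"({mapped_reads[best_subtype] / total:.1%} of subtype-mapped reads)")
```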
Phylogenetic Analysis
Phylogenetic analysis of the ORF2 sequences (493 nucleotides (nt) in length) was performed using the MEGA X software [40]. After multiple sequence alignment with MUSCLE, a maximum likelihood (ML) tree with 1000 bootstrap replicates, based on the Tamura-Nei model, was constructed. Besides our 26 ORF2 sequences, the respective genome region of the recommended single representative of each HEV-3 subtype was included [39,41]. In addition, we added all HEV-3 reference genomes, as assigned by Nicot et al. [39], of the subtypes found in this study. While this was possible for 3h (n = 17) and 3l (n = 6), the total number of reference genomes assigned to 3c (n = 117) was too high for optimal visualization of the tree. Therefore, we selected the three reference genomes most closely related to each of our two 3c sequences for the final tree (n = 6).
For the four NGS-derived complete and almost complete genome sequences, the same single full-length representatives were included in the ML tree, but only the 17 references for 3h were added, as the NGS-derived genomes all belong to subtype 3h. After alignment by MUSCLE, the full-length genomes were shortened to match the 5' and 3' ends of the partial genomes, resulting in genome lengths varying between 6738 and 7138 nt. The shortened sequences were re-aligned and used for the ML tree as described above. For all sequences the GenBank accession numbers (MZ923532-MZ923556) are indicated in the phylogenetic trees.
Following the method described by Nicot et al. [39], pairwise genetic distances were calculated in MEGA X after MUSCLE alignment, including 532 (near) full-genome references encompassing all available genotype-3 sequences and a set of 29 non-HEV-3 references [41]. The subtype demarcation cut-off of 0.093 was applied to confirm subtype assignment and to compare genetic distances of the new full-genome sequences and different clusters within subtype 3h. For visualization by boxplots the NCSS 10 statistical software (NCSS, LLC, East Kaysville, UT, USA) was used.
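To illustrate how the demarcation cut-off is applied, the following sketch computes uncorrected pairwise p-distances between aligned sequences and flags comparisons below 0.093. It is a simplified stand-in for the MEGA X calculation described above, and the aligned fragments shown are invented placeholders.

```python
from itertools import combinations

CUTOFF = 0.093  # HEV-3 subtype demarcation cut-off [39]

def p_distance(a: str, b: str) -> float:
    """Uncorrected distance over alignment columns without gaps."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return sum(x != y for x, y in pairs) / len(pairs)

# Invented aligned fragments of equal length (as after MUSCLE alignment)
aligned = {"KW13": "ATGGCA-TTACG", "WB33": "ATGACA-TTACG", "ref3h": "ATGGCATTTACG"}
for n1, n2 in combinations(aligned, 2):
    d = p_distance(aligned[n1], aligned[n2])
    verdict = "same subtype" if d < CUTOFF else "distinct"
    print(f"{n1} vs {n2}: {d:.3f} ({verdict})")
```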
Antibody Detection
All samples originating from pigs and wild boars were tested for antibodies against HEV with the PrioCHECK HEV Ab porcine ELISA Kit (Thermo Fisher Scientific, Reinach, Switzerland). This indirect ELISA is suitable for porcine serum and meat juice samples. We used it for juices obtained after defrosting diaphragm samples ('diaphragm juice') and liver samples ('liver juice'). The ELISA was performed according to the manufacturer's instructions. Optical densities were read in an ELISA reader (Sunrise Tecan, Tecan Group Ltd., Männedorf, Switzerland) at 450 nm with the reference filter set at 620 nm, and the results were interpreted as described in the manual.
To statistically compare the antibody prevalence of wild boars from different cantons, the NCSS 10 statistical software was used (NCSS, LLC, East Kaysville, UT, USA). A contingency table provided evidence for the overall difference. Subsequent pairwise comparisons were carried out using Chi-square statistics. Two-sided p-values ≤ 0.05 were considered significant. Canton Solothurn was excluded from the Chi-square statistics due to the small sample size.
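A minimal SciPy sketch of this testing scheme is shown below; the seropositive/seronegative counts are placeholders rather than the study's data, which are derived from Table 1.

```python
from itertools import combinations
from scipy.stats import chi2_contingency

# Placeholder (positive, negative) counts per canton -- not the real data
counts = {"SH": (33, 85), "ZH": (14, 76), "AG": (4, 83),
          "BL": (5, 61), "TI": (3, 43)}

# Overall test on the full contingency table
chi2, p_all, dof, _ = chi2_contingency([list(c) for c in counts.values()])
print(f"overall: chi2 = {chi2:.2f}, p = {p_all:.4f}")

# Pairwise 2x2 comparisons; two-sided p <= 0.05 considered significant
for a, b in combinations(counts, 2):
    _, p, _, _ = chi2_contingency([list(counts[a]), list(counts[b])])
    print(f"{a} vs {b}: p = {p:.4f}" + (" *" if p <= 0.05 else ""))
```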
Prevalence of HEV RNA and Anti HEV-Antibodies in Domestic Pigs and Wild Boars
Of the pig livers collected at the timepoint of slaughter, only one out of the 192 tested samples meant for human consumption was HEV positive (Figure 1). Additionally, 105 confiscated livers not fit for consumption were tested, but none of them contained HEV RNA. Overall, 59.4% of the tested samples were antibody positive, ranging from 46.7% in Courtepin (n = 60) and 60.3% in Basel (n = 58) to 68.9% in Zurich (n = 74).
Of the 54 pigs sampled at carcass collection points (CCP), seven (13%) tested HEV positive in the liver (Table S2). From the seven positive animals the faecal and diaphragm samples were also tested for HEV. In all cases the faecal samples resulted in the lowest Ct values, followed by the livers (Figure 2). The diaphragm samples tested positive in only four out of the seven pigs and had the highest Ct values.
HEV RNA was detected in seven out of 121 tested liver samples from hunted wild boars in the cantons Schaffhausen (SH) (n = 75) and Ticino (TI) (n = 46), resulting in an overall RNA prevalence of 5.8% (Figure 1 and Table 1). However, all seven positive animals originated from SH; the RNA positivity was therefore 9.3% in this canton and 0% in Ticino.
Regional differences were also observed regarding seroprevalence, as summarized in Table 1. In total, 566 liver and diaphragm samples were examined for antibodies and the overall percentage of positive animals was 12.7%. The highest seroprevalence (28%) was observed in the animals hunted in 14 hunting grounds in SH where we had a collaboration with the hunting societies and received fresh livers for PCR analysis. The second highest percentage (16.3%) was observed when testing the diaphragm juice from the Trichinella control in SH (44 hunting grounds), followed by the canton Zurich (ZH) (15.6%). The seroprevalence was considerably lower in animals shot in TI (6.5%), Aargau (AG) (4.6%) and Basel-Landschaft (BL) (7.6%). The differences were statistically significant between SH (total) and AG (p = 0.0004), SH and BL (p = 0.0068), SH and TI (p = 0.0323), and ZH and AG (p = 0.0253). Information regarding the age of the animals was available for 68 of the 75 liver samples from Schaffhausen. The majority of the analyzed samples were from young boars (<1 year, n = 32), followed by juveniles (n = 19) and adults (>2 years, n = 17). Two of the seven RNA positive animals belonged to the group of young boars, three were juvenile and in two cases the age was unknown (Table S3). Of the 21 antibody positive animals, six were adults, nine juveniles and five young boars. Hence, the group of juvenile wild boars constituted the largest number not only of virus- but also of antibody-positive animals. Interestingly, five of the seven RNA positive animals were also antibody positive: three juveniles, one young boar and one of unknown age (Table S3).
HEV Subtyping by Sanger Sequencing
The subtyping PCR was successful for 15 out of 16 HEV positive samples from pigs and wild boars and for 11 out of 27 different meat products (Table 2, Table S1). According to the HEVnet online typing tool, all subtyped samples originating from Switzerland belonged either to the formerly proposed subtype 3s(p) or, in two cases, to the formerly proposed subtype 3o(p). According to the recently published demarcation cut-off for HEV-3 subtypes, 3s(p) and 3o(p) full-genome sequences are assigned to the established subtypes 3h and 3l, respectively [39]. The sequences of two meat products produced in Germany were clearly assigned to the subtype 3c. The sequences from liver and faeces of pig KW13 were identical, therefore only one was submitted to GenBank and used for phylogenetic analysis.
[Table 2 footnotes: 1 Highest number of quality-controlled NGS reads aligning to a HEV reference genome and the respective genome coverage of this reference. 2 Swiss cluster within subtype 3h according to phylogenetic analyses. 3 Ct values of meat products as provided by the laboratory of initial testing.]
NGS and Phylogenetic Analyses
To gain more information on the genome of the viruses and confirm the subtyping result gained by the partial ORF2 sequence, NGS was performed on all samples where ORF2 sequencing was successful and sufficient sample material was available. The total read numbers after QC ranged between 3 and 10 million reads per sample, while the number of reads aligning to a HEV reference genome was highly variable, as was the genome coverage (Table 2). The highest read numbers and best coverage were observed in the six wild boars, with an average of 75.2% genome coverage, one complete sequence and two sequences with >95% coverage. No HEV reads were detected in the sample from the single positive pig from the slaughterhouse. In contrast, with the exception of KW20, all samples from pigs from the CCP contained HEV reads. The average genome coverage was 64.9%, and one sample was nearly fully covered (KW13, 98%). In contrast, the meat products performed rather poorly in NGS. Of the seven samples included, four failed completely and the other two resulted in very low read counts and poor coverage of 1.4% and 23.4%. No clear correlation between Ct value and NGS performance was visible (Table 2). An overview of the coverage pattern is provided in supplementary Figure S2.
For two of the four sequences with >98% coverage, WB33 and WB40, all three ORFs described for HEV were fully covered and coded for the expected number of amino acids. In the case of WB36, 92 nt were missing at the 3' end, which leads to a truncated ORF2 product of only 657 instead of 660 amino acids. In contrast, the sequence of sample KW13 lacked 127 nt at the 5' end, resulting in an ORF1 product of only 1670 amino acids instead of 1703. The sequences of the three wild boars were nearly identical, with only a dozen mismatches between them over the whole genome.
The ML tree based on the 493 nt long ORF2 fragment confirmed the subtype allocation by the HEVnet typing tool. The majority of sequences from pigs, wild boars and meat products (22 out of 26) grouped within subtype 3h and formed a well-supported monophyletic cluster, henceforth named 3h_s, clearly distinct from the classic 3h sequences (Figure 3). Only two sequences within 3h_s were found identical: KW13F and BLV01126, from a pig and a salsiz sausage, respectively, while all other 3h_s sequences differed by at least one nt. The tree also showed that the cluster 3h_s may be separated into two branches. Both contain sequences of domestic pigs and meat products, but only one includes official 3h reference genomes, while the other contains most of the wild boar sequences, which clustered closely together. Two sequences, one wild boar and one pig, were confirmed to belong to subtype 3l, as proposed by the online typing tool. The assignment of sequences from two meat products from Germany to subtype 3c was also confirmed, and they were shown not to be closely related.
Figure 3. Phylogenetic tree of a 493 nt long fragment of the ORF2 including the 26 sequences resulting from this study (colored dots; accession numbers provided after sequence name) and 42 previously assigned HEV-3 reference genomes [39,41]. The evolutionary history was inferred by using the Maximum Likelihood method and Tamura-Nei model [42]. The tree with the highest log likelihood from 1000 bootstrap replicates is shown. The tree is drawn to scale, with branch lengths measured in the number of substitutions per site. Bootstrap values above 75% are shown. For the subtypes 3h and 3l all previously assigned genomes, for 3c a selection of six assigned genomes and for all other subtypes the single representative reference genomes as previously suggested [41] were included.
Another ML tree was calculated including the four complete and almost complete sequences (>98% coverage) originating from three wild boars and one pig from the CCP. While the three sequences from the wild boars group very closely together, the sequence from the pig is a little further away, but all sequences clearly belong to the cluster of Swiss sequences within 3h, 3h_s (Figure 4). As already seen in the phylogenetic tree of the partial ORF2 sequences, the 3h_s sequences form their own cluster containing two branches and are distinct from classic 3h sequences (3h_cl) and the former 3k(p) sequence (3h_k).
Figure 4. Phylogenetic tree of the four complete and almost complete genomes from this study. For the subtype 3h all official references, for all other subtypes the single recommended representatives [39,41] were included. Where necessary, genomes were shortened to identical starting and ending nucleotides (6738-7138 nt length). The evolutionary history was inferred by using the Maximum Likelihood method and Tamura-Nei model [42]. The tree with the highest log likelihood from 1000 bootstrap replicates is shown. The tree is drawn to scale, with branch lengths measured in the number of substitutions per site. Bootstrap values above 80% are shown. The distinct clusters within 3h, namely the classic 3h genomes and the formerly proposed subtypes 3s(p) and 3k(p), are encircled and named 3h_cl, 3h_s and 3h_k, respectively.
To quantify the genetic relatedness and confirm subtype assignment, pairwise distances were calculated including 532 reference sequences from all subtypes of HEV-3, single representatives of genotypes one to four, and our four (near) full genomes. Only members of subtype 3h showed genetic distance values below the cut-off of 0.093 when compared to the new sequences, confirming the assignment of all four genomes to this subtype (Figure 5, Table S4). Interestingly, KW13 was shown to be more closely related to the already published 3h_s reference genomes and to the classic 3h sequences than the wild boar representative, WB33 (Figure 5). The genetic distance values of WB33 are only just below the cut-off when compared to 3h_cl and above it when compared to 3h_k. The pairwise distance to 3l, the most closely related subtype after 3h, is nearly identical for WB33 and KW13 and clearly above the demarcation cut-off.
Figure 5. Boxplot visualization of the pairwise distance values comparing the KW13 (light grey) and WB33 (dark grey) full genomes to all full-length reference sequences of subtypes 3h and 3l. Separate distances were calculated for the three clusters within 3h that relate to the classic 3h genomes (3h_cl) and the formerly proposed subtypes 3s(p) (3h_s) and 3k(p) (3h_k) (Figure 4). The number of available references for each subtype/cluster is indicated below the names. The subtype demarcation cut-off of 0.093 is indicated by a horizontal line. The whiskers extend to 1.5x the interquartile range (IQR) from the box edge. Outliers are marked by circles. The notches are constructed using the formula: Median +/− 1.57 × (IQR)/√n.
Domestic Pigs
When screening 192 livers from the three largest pig slaughterhouses in Switzerland, only a single sample was virus RNA positive, resulting in a prevalence of 0.3% (Figure 1). This is somewhat lower but comparable in range to previous findings in Switzerland, where 1.3% (two out of 160 livers) were found positive [13]. Individual RNA prevalence values may vary substantially between different studies and countries, with a range of 1-89% [12]. However, values for pigs sampled at slaughter tend to be relatively low, e.g., 4% in France and 3% in the UK [43,44]. In contrast to the RNA prevalence, HEV-specific antibodies were found in 59.4% of the Swiss slaughterhouse samples. This finding confirms a previous observation of 58.1% seroprevalence in Swiss pigs at slaughtering age sampled in 2006 and 2011 [23] and underpins the finding that most pigs become infected relatively early in life and have already developed an immune response and cleared the virus at the slaughtering timepoint. An early slaughtering timepoint and late infection, e.g., due to prolonged protection by high levels of maternal antibodies, were shown to be risk factors for pigs being HEV positive at slaughter [45,46]. Seroprevalence of slaughter pigs in Europe ranges between 30-98% [12]. The prevalence in Swiss pigs is therefore in the mid-range and seems to be rather stable over time.
A much higher percentage, 13%, of the 54 tested animals from the CCP was found virus positive in the liver. The weight of four of the seven positive animals was between 20 and 25 kg; the remaining three animals were estimated to be 50, 60 and even 100 kg (Table S2). These results confirm the findings of others that the peak of viremia in pigs is around three to four months of age, but older, chronically infected animals may serve as virus reservoirs [46]. In all seven cases, the diaphragm was found least positive, with Ct values of over 35 in four cases and a negative result in the three others. Similar findings were previously made in Spain [47] and support the risk assessment of meat products that classifies products containing pig liver as higher risk compared to porcine muscular meat [13].
Wild Boars
In addition to domestic pigs, wild boars are known to be an important source of infection. Overall, we found 5.8% RNA-positive and 12.2% antibody-positive animals. A nearly identical value for the antibody prevalence (12.5%) was found in samples collected by hunters in 10 different cantons in 2008, but no study on virus prevalence had been carried out yet and no HEV sequencing data were available for Swiss wild boars [23]. However, our data may not be representative for all of Switzerland. While we received diaphragm juice samples from most of the cantons with high wild boar density [48], this sample material is not ideal for detecting circulating viruses, as seen in the CCP samples. In addition, the low number of antibody positive animals in many cantons indicated that the likelihood of finding HEV RNA positive samples among the 445 diaphragm juices was small. We therefore refrained from testing the diaphragm juice for viral RNA. However, we received 121 livers for RNA detection from two cantons, SH and TI, which are in the very north and south of Switzerland, respectively, and both have a high wild boar density. Interestingly, not a single liver of the 46 samples from TI was found HEV positive, in contrast to seven out of 75 in SH (Table 1). Significant differences were also observed in the percentage of antibody positive animals between different cantons, with higher numbers of around 15% in the east (SH and ZH) compared to more westerly cantons, where it was around 4%. However, more samples, also from the French-speaking areas, and more representative sample numbers would be necessary to confirm this trend. In Germany, spill-over of HEV from wild boars to deer species was observed [49]. While we did not specifically investigate deer samples, none of the 14 incidentally provided roe deer livers from hunting grounds where HEV positive wild boars were identified were RNA or antibody positive (data not shown). Our data indicate that HEV may circulate in the wild boar population primarily within relatively small-scale geographic patterns, and virus prevalence may vary from 0% to 26% (Table 1, Table S3, Figure S3). It was recently shown that HEV seroprevalence in wild boar populations is influenced not only by population density but also by environmental factors such as rainfall and proximity to marshland, and may fluctuate considerably over time [50]. It is therefore impossible to draw conclusions on the whole wild boar population from geographically and temporally limited HEV data.
Partial ORF2 Sequences
To determine the circulating HEV genotype and subtypes, we used a 493 nt long partial ORF2 PCR product that has been described before [37]. We chose this method since it is widely used by national HEV reference and research laboratories, as shown by an interlaboratory HEV typing comparison test, and hence facilitates international comparison of the sequences [51]. However, examples of recombination events within the HEV genome have shown that relying on the partial ORF2 sequence alone may not always provide correct classification [52][53][54]. Therefore, to assign novel sequences reliably to the 11 official subtypes, ideally the full genomes should be used [39]. However, the full-genome Swiss sequences analysed to date have not provided evidence for recombination, and the partial ORF2 sequences have always resulted in the same tree topology as the full-length sequences [25,26].
While nearly all positive samples from pigs and wild boars were successfully sequenced, only 40% of the meat products provided a sequence. It has been reported before that sequencing of meat products, particularly highly processed ones, is challenging, probably due to inhibitors and degradation of RNA [34,55]. While our methodology has previously been successfully used for full-genome sequencing of HEV from a mortadella pork sausage [25], further optimization of the RNA extraction method may be necessary for different product types. Importantly, the resulting RNA should be suitable not only for real-time RT-qPCR but specifically for sequencing, which, as our data show, does not always coincide.
Overall, 84.6% of the 26 sequences from this study were assigned to subtype 3s(p) (now part of 3h) by the HEVnet online typing tool, which was confirmed by an ML tree. The tree clearly shows that the Swiss sequences form a distinct cluster, representing the former 3s(p) subtype, here named 3h_s. Until now, HEV-3h_s has exclusively been reported in Switzerland, not only in pigs, wild boars, and meat products, but also in humans [27]. This finding is quite remarkable as, to our knowledge, there are no other country-specific subclusters of HEV-3 reported that are predominant "from stable to table". As stated previously, the unique situation in Switzerland is most likely attributed to a high degree of self-sufficiency regarding Swiss pork consumption and the fact that Switzerland is not part of the European Economic Area [27]. In England and Wales, for example, the sequences found in humans are more closely related to porcine sequences from mainland Europe rather than the UK, most likely due to a high degree of imported pork [56]. Furthermore, due to the high animal health status of pigs in Switzerland, the import of live animals is strictly limited. Additionally, Swiss pigs are only moved within Swiss borders, while pigs born in the European Union may be transported across borders, e.g., for fattening and/or slaughtering, which may contribute to the exchange of HEV subtypes.
In one wild boar and one pig, HEV-3l (former 3o(p)) was detected. This subtype seems to be relatively rare; only six annotated full-genome references are publicly available. Four of these originate from France (3× human, 1× porcine), the remaining two from Italy (2× porcine) [39,57]. Due to the limited number of 3l sequences, we cannot conclude on any geographical pattern within Switzerland. However, in contrast to 3h, where the Swiss sequences form their own cluster, this is not the case for 3l, which speaks against geographical clustering. Subtype 3l sequences have also occasionally been detected in human patients in Switzerland and, hence, are probably not rare in Swiss pigs [27]. Interestingly, the wild boar positive for HEV-3l was shot in a hunting ground that directly borders Germany (Figure S3). It would be interesting to know the HEV genotypes present in the wild boar population in Southern Germany and whether 3l or even 3h_s subtypes are present there.
We have not discovered any of the HEV-3 subtypes in Swiss pigs that are predominant in other European countries (i.e., 3c, 3e, 3f). However, we have analysed only a limited number of sequences. A broader screening of Swiss pig herds and wild boar, e.g., using faecal samples, would be necessary to also detect rarer HEV-3 subtypes.
Overall, sequences of pigs and meat products are evenly distributed over two branches within 3h_s, while most wild boar sequences seem to form their own small cluster within one of the branches. However, this may be attributed to the fact that all wild boars originated from the same geographical area and most of them even from the same hunting ground (Table S3). Still, we can assume that the same HEV subtypes circulate in pigs and wild boars. Interestingly, a different situation was observed for atypical porcine pestivirus, where Swiss-specific sequences circulate only in domestic pigs [58], while wild boars harbour other, more "European" sequences (personal communication, Matthias Schweizer, Institute of Virology and Immunology, Bern, October 2021).
Direct comparison of our sequences to previously published human sequences is limited due to the different sequencing approaches used [27]. However, 58.9% of the HEV-3 sequences of human origin (n = 95) belonged to subtypes also detected in Swiss pigs (54.7% to 3s(p) and 4.2% to 3o(p)). In addition, HEV-3f (7.4%) and 3a (2.1%) were found in humans but so far not in Swiss pigs or wild boars. Interestingly, subtype 3ra, originating from rabbits, was also found in human patients (3.2%). Another 28.4% of sequences could not be assigned due to the relatively short amplicon used for sequencing [27].
NGS Derived Sequences
In addition to the partial ORF2 sequences, we subjected the samples to NGS in order to gain longer sequences, which would be helpful for molecular tracing. As expected, this worked better for fresh sample material such as liver samples than for the meat products. Interestingly, the Ct value was not a reliable indicator of sequencing success. Nevertheless, in 30% of the samples the coverage was over 80%, which allows a more reliable characterization of the genome. Unfortunately, NGS of neither of the two samples containing 3l sequences resulted in high coverage. However, four 3h_s sequences were covered between 95.6% and 100% and could be compared to other (near) complete genomes by means of a phylogenetic tree and pairwise distance calculation. As seen with the partial ORF2 sequences, the Swiss 3h sequences formed a distinct cluster, branching into two subclusters. The pairwise distance calculation confirmed that the four new Swiss sequences are assigned to subtype 3h. However, the three wild boar sequences seem to be more distantly related not only to the previously described 3h_s reference genomes but also to the classic 3h sequences from France (Figure 5). It will be interesting to see whether the assignment to subtype 3h will hold true for future sequences of the Swiss cluster. Since the three clusters within subtype 3h are not only genetically distinct but also differ regarding geographical distribution and epidemiology, it may be helpful to use a specific nomenclature, e.g., including the names of the formerly proposed subtypes, for molecular epidemiological purposes.
Conclusions
We have confirmed the predominance of a Swiss-specific cluster within HEV subtype 3h in Swiss pigs and meat products and shown that pigs and wild boars share the same subtypes. The cluster 3h_s, which may result from local virus evolution due to the isolated Swiss pig industry, has so far only been reported in Switzerland. Hence, its determination enables the differentiation of domestic and imported infections with hepatitis E virus. Therefore, the assignment of HEV sequences into epidemiologically related clusters below the subtype level can be important. However, while the diversity within 3h_s was high, more data are necessary to assess the farm specificity of single sequences and the suitability of the partial ORF2 sequence for molecular tracing of HEV in Switzerland.
Supplementary Materials: The following Supplementary Materials are available online at https://www.mdpi.com/article/10.3390/ani11113177/s1, Figure S1: Map of Switzerland, Figure S2: NGS coverage pattern, Figure S3: Map of the hunting grounds in canton Schaffhausen; Table S1: List of all food samples, Table S2: Results and information of all samples from carcass collection points, Table S3: Results and information of wild boar livers from canton Schaffhausen, Table S4: Pairwise distance values.
Institutional Review Board Statement: Ethical review and approval were waived for this study, due to using post-mortem sample material from animals slaughtered or hunted for human consumption or material from carcass collection points.
Data Availability Statement: The sequences generated in this study are available on NCBI GenBank (https://www.ncbi.nlm.nih.gov/genbank/, accessed on 20 October 2021) under the accession numbers MZ923532-MZ923556. NGS raw data used to generate the four full-length sequences are deposited in the NCBI Sequence Read Archive (SRA) as BioProject PRJNA772545 (BioSamples SAMN22376915 to SAMN22376918). Supplementary material showing detailed results of this study is available online.
Acknowledgments: We thank Florence Nicot from CHU de Toulouse for providing the full-genome sequence collection and helpful support with the pairwise-distance calculations. Furthermore, we are very grateful to the hunting societies of the canton Schaffhausen for providing the majority of wild boar livers and to the cantonal veterinary officer of the canton Schaffhausen, Peter Uehlinger, and his team for storing and providing the diaphragm samples and supporting the study. We would also like to acknowledge Jon Paulin Zumthor and Felix Grimm from the Laboratory of Veterinary Diagnostics in Chur and the Institute of Parasitology of the University of Zurich, respectively, for providing diaphragm samples. We also highly appreciate the help of all staff from the slaughterhouses, and particularly the commitment of Clemens Bauer, Serafin Blumer and Marc Henzi from Zürich, Basel and Courtepin, respectively. | 2021-11-10T16:31:31.385Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "50a146f07ae6c5637b8b8331be79095c339d7fcc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/11/11/3177/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "60cd2616989aeddb5f586d498e51161e20aa970c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
219182330 | pes2o/s2orc | v3-fos-license | Embedding High-Level Knowledge into DQNs to Learn Faster and More Safely
Deep reinforcement learning has been successfully applied in many decision making scenarios. However, its slow training process and difficulty in explaining limit its application. In this paper, we attempt to address some of these problems by proposing a framework of Rule-interposing Learning (RIL) that embeds knowledge into deep reinforcement learning. In this framework, the rules dynamically affect the training progress and accelerate the learning. The embedded knowledge in the form of rules not only improves learning efficiency, but also prevents unnecessary or disastrous explorations at the early stage of training. Moreover, the modularity of the framework makes it straightforward to transfer high-level knowledge among similar tasks.
Introduction
Deep reinforcement learning (Mnih et al. 2013) has been successfully applied in many dynamic decision making scenarios. However, like deep learning, it suffers from problems such as being brittle and not easily explainable. The training time is also often very long and suffers from a "cold start", with the agent performing very badly at the beginning. Furthermore, for applications in robotics and critical decision support systems, the lack of a guarantee that the system won't do anything disastrous is also of concern.
There have been many related approaches. In (Zahavy, Zrihem, and Mannor 2016), the behavior of the neural network is visualized to increase transparency. Other approaches combine symbolic methods or high-level knowledge with deep reinforcement learning, such as Hierarchical Deep Reinforcement Learning (Kulkarni et al. 2016) and DSRL. Imitation learning approaches learn directly from humans. Related work can be found in (Zhang et al. 2019). We omit many other references due to the space limit.
Different from previous work, we propose a new framework named Rule-Interposing Learning (RIL) to embed human knowledge into deep reinforcement learning. We have implemented our framework and tried it on some well-known games such as Flappy Bird, Space War, Breakout, and Grid World. The results show that good heuristic rules work as accelerators that make the DQN learn faster, and safety rules work as guards that make the DQN learn more safely.
To be specific, in RIL, the model randomly gets a sample from the replay memory for training and calculates the predicted Q-value for every valid action. The agent selects a random action with probability ε; otherwise it selects the action with the maximal Q-value. Unlike the original DQN, before the execution of the selected action, RIL passes the action to the rule set. The rule set maintains a pool of legal actions for each rule in the knowledge base R. If the selected action violates the knowledge, RIL rejects the action and suggests a new one with probability P_t = p_0 · γ^t, where p_0 is a given initial probability, γ is the decay rate, and t is the timestamp. After the rejection, a random legal action is selected to be executed. We demonstrate RIL's performance under two rule-interposing schemes. Acceleration rules: rules with probability P_t = p_0 · γ^t where 0 < γ < 1. Given existing knowledge about the task, some explorations are unnecessary and can be pruned. As a consequence, under the instruction of these rules as a priori knowledge, a DQN learns faster. Safety rules: rules with probability P_t = p_0 · γ^t where p_0 = 1 and γ = 1. In this case, the rule is always on, overseeing the training process. Once a decision made by the DQN is considered dangerous by the safety rules, it is rejected and replaced by a safe one given by the knowledge base. Formally, for a given domain, the knowledge base R consists of rules of the form (η, δ), where η is a first-order logic proposition indicating some environmental condition and δ is a set of conditionally recommended actions, which is a subset of the action space. For convenience, the two parts of a given rule r ∈ R are written as functions in the rest of the paper, denoted respectively by η(r) and δ(r). The activation set of rule r at timestamp t is α(r, t) = δ(r) if η(r) holds at time t, and α(r, t) = ∅ otherwise. The activation set α(r, t) contains all actions suggested by rule r at time t, and it is obviously also a subset of the action space. The activation set of the entire knowledge base at time t is defined as the intersection of all non-empty activation sets of rules: α(R, t) = ∩ {α(r, t) : r ∈ R, α(r, t) ≠ ∅}. In particular, given a timestamp t, if α(r, t) = ∅ for each rule r ∈ R, none of the rules applies in the current situation. Therefore, the DQN should explore or select an action autonomously in this case. At each timestamp t, there might be multiple non-empty activation sets.
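A minimal Python sketch of this rule-interposing step is given below. The DQN is abstracted behind a q(state, action) callable, which is an assumed interface rather than part of the paper; rules are passed as (η, δ) pairs in which η is a predicate over the state and δ is a set of recommended actions.

```python
import random

class RuleInterposingAgent:
    """Sketch of RIL's action filter wrapped around a DQN policy."""

    def __init__(self, q, actions, rules, p0=1.0, gamma=0.999, eps=0.1):
        self.q = q              # assumed interface: q(state, action) -> float
        self.actions = actions  # full action space A
        self.rules = rules      # list of (eta, delta) pairs
        self.p0, self.gamma, self.eps = p0, gamma, eps

    def activation_set(self, state):
        """alpha(R, t): intersection of all non-empty rule activation sets."""
        active = [delta for eta, delta in self.rules if eta(state)]
        if not active:
            return None  # no rule applies -> the DQN acts autonomously
        legal = set(self.actions)
        for delta in active:
            legal &= set(delta)
        return legal or None

    def act(self, state, t):
        # epsilon-greedy proposal from the DQN
        if random.random() < self.eps:
            proposal = random.choice(self.actions)
        else:
            proposal = max(self.actions, key=lambda a: self.q(state, a))
        legal = self.activation_set(state)
        p_t = self.p0 * self.gamma ** t  # interposing probability P_t
        if legal and proposal not in legal and random.random() < p_t:
            return random.choice(list(legal))  # replace by a suggested action
        return proposal
```

Choosing 0 < γ < 1 yields the decaying acceleration scheme, while p_0 = 1 and γ = 1 turn the same filter into an always-on safety guard.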
Experiments
We implement our framework on several games, as shown in Figure 1. A plain DQN model is used for comparison. For the sake of fairness, we use the same hyper-parameter settings and the same neural network architecture for both RIL and DQN. The network consists of three convolution layers, one hidden layer and the output layer.
In Flappy Bird, we use a rule set to tell the bird not to fly too high or too low when it is flying across a pair of pipes. Formally, the knowledge base in Flappy Bird is R_fb = {r_1, r_2}, where η(r_1) = crossing(p_u, p_l) ∧ less(distance(bird, p_u), size(bird)) and δ(r_1) = {flap}, and η(r_2) = crossing(p_u, p_l) ∧ less(distance(bird, p_l), size(bird)) and δ(r_2) = {null}, where (p_u, p_l) is the pair of pipes that the bird is flying across.
In Breakout, we use the following strategy: if the ball is on the left-hand side of the paddle, then the paddle should move left, and similarly when it is on the right-hand side of the paddle. Formally, the knowledge base for Breakout is R_bo = {r_5, r_6}, where η(r_5) = on_left(ball, paddle) and δ(r_5) = {move_left}, and η(r_6) = on_right(ball, paddle) and δ(r_6) = {move_right}.
In Grid World, we use the knowledge base R_gw with a single safety rule r_7, which takes effect when the agent is in the neighborhood of a trap, where η(r_7) = near_trap ∧ trap_in(directions) and δ(r_7) = A − {move(dir) : dir ∈ directions}, where A is the set of all actions. The rule simply forbids the agent to move into a trap.
Figure 2: Comparison of results between RIL and DQN in four games. Because the reward per episode increases, we set a time limit on the training stage. The reward per episode demonstrates that, within the same training time, RIL achieves better performance with fewer training episodes. Besides, with the safety rule set deployed in Grid World, RIL prevents the agent from disastrous explorations and gains much better performance in the very early stage of training.
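As a sketch, the safety rule r_7 can be encoded as a state-dependent activation set compatible with the agent outlined above; the trap_directions field is an assumption about how the environment exposes adjacent traps.

```python
A = {"move_up", "move_down", "move_left", "move_right"}

def r7_activation(state):
    """alpha(r7): all actions except moves into an adjacent trap;
    an empty set means eta(r7) is false and the rule does not apply."""
    traps = set(state.get("trap_directions", ()))  # e.g. {"move_left"}
    if not traps:
        return set()
    return A - traps  # delta(r7) = A - {move(dir) : dir in directions}
```

Run as a safety rule (p_0 = 1, γ = 1), this filter stays active throughout training, which is what prevents the disastrous early explorations in Grid World.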
The criterion that we use to evaluate the agent's performance is the average reward the agent gains in training episodes. The performance is shown in Figure 2. The plot of the average reward of training episodes indicates an obvious improvement in learning efficiency and exploration safety.
Conclusion
In this demonstration, we briefly introduce the RIL framework for integrating high-level rules and deep Q-learning and present the corresponding experimental results that support our idea. We believe that RIL is general enough to be used with other deep learning algorithms.
"year": 2020,
"sha1": "eee61ed11925ca04d8d3a01e9006498cb1efb832",
"oa_license": null,
"oa_url": "https://ojs.aaai.org/index.php/AAAI/article/download/7091/6945",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "82dc394ddabe8aafedee7eb3fc801354dbf4e79b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
257167979 | pes2o/s2orc | v3-fos-license | Two-stage robust optimal operation of AC/DC distribution networks with power electronic transformers
Power electronic transformers (PET) are a new type of power electronic equipment with a multi-port flexible dispatch function, which can play the role of a power hub in a system composed of multiple AC-DC hybrid distribution grids for the interactive sharing of power across multiple regions. In this study, a two-stage robust optimization operation model of a hybrid AC-DC distribution network with PET is proposed based on the PET power transmission and transformation characteristics. The stochastic uncertainty of the distributed renewable energy output in the AC-DC grid is handled by a two-stage robust optimization method to determine the minimum total system operation cost under the worst case of distributed renewable energy output. Finally, a column-and-constraint generation algorithm is used to solve the two-stage robust optimization model in the min-max-min form, and the validity of the model is verified.
Introduction
With the development of power technology, new energy sources represented by wind and photovoltaic power are increasingly connected to power systems and generally belong to distributed generation (DG). The DC part of the AC/DC hybrid distribution network is naturally compatible with these DGs, and DC transmission can reduce the number of energy conversions and improve energy utilization, while the AC part is compatible with existing power equipment, saving costs. The AC-DC hybrid distribution grid also has advantages in terms of new energy consumption and peak shaving and valley filling, and it is a feasible solution for coping with future grid development.
A power electronic transformer (PET) consists of a power electronic converter and a conventional high-frequency transformer, which allows more flexible conversion of electrical energy through power electronics technology (Liu et al., 2017; Wang et al., 2017). Generally, power electronic transformers are classified into AC/AC- and AC/DC/AC-type PETs, depending on whether they contain a DC component. PET plays the same role as a traditional voltage source converter (VSC) in an AC/DC distribution network, connecting the AC and DC components of the distribution network. Compared with a traditional VSC, PET has some unique advantages, such as its ability to contain multiple AC-DC converter ports, connect multiple AC and DC subgrids simultaneously, control the power and voltage of each port, and simultaneously achieve power quality control, fault isolation, and energy interaction between ports. Therefore, PET can play the role of an energy hub in the AC-DC hybrid distribution network. Pu et al. (2018) provided an overview of the technology and framework for the optimal operation of PET-based hybrid AC-DC systems, and illustrated the advantages of PET-based AC-DC distribution networks over those based on other power conversion units, such as VSC-based AC-DC distribution networks. Yi and Wang (2021) proposed a day-ahead economic operation strategy for multi-port PET-based AC-DC distribution networks, reflecting the flexible regulation capability of PET, and established a PET energy flow model. Guo et al. (2019) applied multi-port PET to AC-DC hybrid distributed energy systems, fully consuming renewable energy and reducing system operation cost by using the power regulation function of PET. However, most of the above models do not fully consider the uncertainty of the renewable energy power output, whose random uncertainty significantly affects the power interaction of PET and the safe operation of an AC-DC distribution network with access to large-scale wind and PV power sources.
Owing to the integration of a large number of distributed renewable energy sources such as photovoltaic (PV) and wind power, the power supply of the grid has more uncertainty and volatility, posing new challenges for the optimal dispatching of the distribution network. The commonly used uncertainty optimization methods include stochastic and robust optimization.
The probability distribution of random variables must be specified in stochastic optimization, but the assumed probability distribution model may not be able to accurately portray the variation pattern of the actual uncertainty factors when they are complex. One study proposed a stochastic optimization model addressing the impact of new energy uncertainty on the operation results of AC-DC distribution networks containing power electronic transformers. Xu et al. (2021) combined stochastic optimization and conditional value-at-risk theory to propose a stochastic operation optimization method for active distribution networks containing smart soft switches considering risk management. Robust optimization does not require prior knowledge of the specific probabilistic prediction information of uncertain quantities; it uses uncertainty sets to model uncertainty and pursues the minimum total cost of system decision options under the worst-case scenario of the uncertain variables.
In this study, a two-stage robust optimal operation model of a hybrid AC-DC distribution network with PET is proposed. By connecting the AC-DC part of the distribution network and the super grid through PET, the utilization rate of distributed renewable energy is improved, and the safe and economic operation of the AC-DC distribution network is ensured. A two-stage robust optimization method is used to address the stochastic uncertainty of the renewable energy output and seek the minimum total system operation cost under the worst case scenario. Finally, a constrained column generation algorithm is used to solve the two-stage optimization model in the form of min-max-min.
Compared with the examples mentioned in the previous section, the two-stage robust optimization method used in this paper has the following advantages. First, compared with the traditional robust optimization method and the stochastic optimization method, the method used in this paper inherits the advantages of robust optimization, such as strong accuracy and a low out-of-bounds rate, and achieves control of the model's conservativeness by adding uncertainty adjustment parameters. Second, compared with other two-stage robust optimization methods, the method used in this paper sets both temporal and spatial uncertainty adjustment parameters, which control the number of worst-case periods taken in one cycle and the number of units taking their worst case at the same time, respectively, so that the conservativeness of the model can be controlled more flexibly and accurately to achieve better optimization results.
2 Two-stage robust operation model of AC/DC distribution network with power electronic transformers
2.1 Hybrid AC/DC distribution network structure with power electronic transformers
The hybrid AC-DC distribution network can be divided into three parts according to its composition: the AC distribution network, the DC distribution network, and the VSC. The model in this paper uses PET to replace the traditional VSC; the PET connects the DC, AC, and upper-level grids and plays the role of power conversion. The micro turbine (MT), AC load, and energy storage (ES) are connected in the AC part, and the photovoltaic (PV) units, wind turbines (WT), DC load, and other components are connected in the DC part. The AC and DC parts are connected to the upper-level grid through the PET. Figure 1 shows a schematic of the AC-DC hybrid distribution network. Compared with the traditional AC-DC distribution network, the network with PET can interact directly with the upper-level grid through the PET owing to its multi-port nature, avoiding the loss caused by interaction through the AC grid. Because power can be exchanged freely among the three ports, the flexibility of power dispatching in the distribution network is improved. Compared with the traditional VSC, the PET improves the response speed and network flexibility and reduces the number of power conversion links, which makes it more suitable for distribution networks with uncertain DG (Pu et al., 2018; Li et al., 2021).
Objective function
The optimization objective was to minimize the total operating cost during the dispatch cycle of the system. This entailed finding the operating solution with the lowest cost during the dispatch cycle by adjusting the power purchased from the upper grid, the generation of the micro turbines, and the power of the energy storage equipment. The objective function is
min f = C_M + C_MT + C_ES + C_WT + C_PV (1)
Among them,
C_M = Σ_{t=1..T} c^M_t · P^M_t
C_MT = Σ_{t=1..T} Σ_{i∈B_MT} [c^MT_1 · (P^MT_i,t)² + c^MT_2 · P^MT_i,t + c^MT_3]
C_ES = Σ_{t=1..T} Σ_{i∈B_ES} c^ES · η · (P^ch_i,t + P^dis_i,t)
C_WT = Σ_{t=1..T} Σ_{i∈B_WT} c^WT · (P̂^WT_i,t − P^WT_i,t)
C_PV = Σ_{t=1..T} Σ_{i∈B_PV} c^PV · (P̂^PV_i,t − P^PV_i,t)
where f is the operation cost of the distribution network; C_M and C_MT are the costs of electricity purchased from the upper grid and generated by micro turbines, respectively; C_ES is the cost of energy storage; and C_WT and C_PV are the costs of abandoned wind and light, respectively. T is the operating period; B_MT, B_ES, B_WT, and B_PV are the sets of micro turbine, energy storage, wind turbine, and photovoltaic nodes in the distribution network. c^M_t is the price of electricity purchased from the upper grid at time t; P^M_t is the power purchased from the upper grid by the distribution network; and c^MT_1, c^MT_2, c^MT_3 are the cost coefficients of micro turbine generation. P^MT_i,t is the output of the micro turbine at node i at time t; c^ES is the cost coefficient of energy storage charging and discharging; η is the charging and discharging efficiency; P^ch_i,t and P^dis_i,t are the charging and discharging power of the energy storage at node i at time t. c^WT and c^PV are the wind and light abandonment penalty coefficients, respectively; P̂^WT_i,t and P̂^PV_i,t are the predicted wind and PV outputs, and P^WT_i,t and P^PV_i,t are the consumed wind and PV power at node i at time t.
Constraints of DistFlow branch currents in AC-DC distribution networks
The DistFlow branch flow model was used for both the AC and DC parts of this model. Because parts of the model contain non-linear terms that are not favorable for solving the model with standard software, linearization and second-order cone relaxation were used in this study to transform the model into a convex problem (Lavaei and Low, 2012), which can then be solved by a commercial solver, making the solution easier and faster. First, a linearization transformation was performed through variable substitution: Ṽ_i,t = (V_i,t)² and Ĩ_ij,t = (I_ij,t)². The results of the second-order cone relaxation of the DistFlow power flow model of the AC/DC hybrid distribution network are as follows.
AC part:
Σ_{i∈π(j)} (P^AC_ij,t − r_ij·Ĩ^AC_ij,t) + P^AC_j,t = Σ_{k∈δ(j)} P^AC_jk,t
Σ_{i∈π(j)} (Q^AC_ij,t − x_ij·Ĩ^AC_ij,t) + Q^AC_j,t + b_j·Ṽ^AC_j,t = Σ_{k∈δ(j)} Q^AC_jk,t
Ṽ^AC_j,t = Ṽ^AC_i,t − 2·(r_ij·P^AC_ij,t + x_ij·Q^AC_ij,t) + (r_ij² + x_ij²)·Ĩ^AC_ij,t
‖(2P^AC_ij,t, 2Q^AC_ij,t, Ĩ^AC_ij,t − Ṽ^AC_i,t)ᵀ‖₂ ≤ Ĩ^AC_ij,t + Ṽ^AC_i,t
where δ(j) is the set of end nodes with j as the first node, π(j) is the set of first nodes with j as the end node, B_AC is the set of AC subnetwork nodes, and L_AC is the set of AC subnetwork branches. P^AC_ij,t and Q^AC_ij,t are the active and reactive power flowing from node i to node j in the AC subnetwork, respectively; r_ij, x_ij, and b_i are the resistance and reactance of branch ij and the shunt susceptance at node i, while P^AC_j,t, Q^AC_j,t, Ṽ^AC_j,t, and Ĩ^AC_ij,t are the active power injected into node j, the reactive power injected into node j, the squared voltage at node j, and the squared current flowing through branch ij in the AC subnetwork at time t, respectively. P^ACPET_in,t and P^ACPET_out,t are the active powers flowing into and out of the PET AC port.
DC part:
Σ_{i∈π(j)} (P^DC_ij,t − r_ij·Ĩ^DC_ij,t) + P^DC_j,t = Σ_{k∈δ(j)} P^DC_jk,t
Ṽ^DC_j,t = Ṽ^DC_i,t − 2·r_ij·P^DC_ij,t + r_ij²·Ĩ^DC_ij,t
‖(2P^DC_ij,t, Ĩ^DC_ij,t − Ṽ^DC_i,t)ᵀ‖₂ ≤ Ĩ^DC_ij,t + Ṽ^DC_i,t
where B_DC and L_DC are the sets of DC subnetwork nodes and branches, respectively, and P^DC_j,t is the active power injected into node j of the DC subnetwork at time t. P^DC_ij,t is the active power flowing from node i to node j on DC branch ij; Ĩ^DC_ij,t and Ṽ^DC_j,t are the squared current of branch ij and the squared voltage at node j in the DC subnetwork at time t.
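As an illustration of this relaxation, the following CVXPY sketch models a single branch; the impedance values and the fixed sending-end voltage are illustrative and not taken from any case study in this paper.

```python
import cvxpy as cp

r, x, v_i = 0.01, 0.02, 1.0          # branch impedance and squared V_i (illustrative)
P, Q = cp.Variable(), cp.Variable()  # branch active / reactive power flow
l = cp.Variable(nonneg=True)         # squared branch current (I~)
v_j = cp.Variable(nonneg=True)       # squared receiving-end voltage (V~)

constraints = [
    P == 0.3, Q == 0.1,  # fixed downstream demand for the example
    # voltage-drop equation after the squared-variable substitution
    v_j == v_i - 2 * (r * P + x * Q) + (r**2 + x**2) * l,
    # second-order cone relaxation of P^2 + Q^2 <= l * v_i
    cp.norm(cp.hstack([2 * P, 2 * Q, l - v_i])) <= l + v_i,
    v_j >= 0.95**2, v_j <= 1.05**2,  # voltage magnitude limits
]
prob = cp.Problem(cp.Minimize(r * l), constraints)  # minimize branch loss
prob.solve()
print(f"I^2 = {l.value:.5f}, V_j^2 = {v_j.value:.5f}")
```

At the optimum the cone constraint is tight, so the relaxation recovers the original non-convex power flow relation.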
Operational constraints of distributed power generation
(1) Upper and lower limit constraints of micro turbine output:
0 ≤ P^MT_i,t ≤ P^MTmax_i
0 ≤ Q^MT_i,t ≤ Q^MTmax_i
where P^MTmax_i and Q^MTmax_i are the maximum values of the active and reactive power of the micro turbine, respectively. Because the step size selected in this model was 1 h and the regulation speed of the micro turbine is fast at this time scale, the ramping constraints of the micro turbine were not considered.
(2) Wind turbine and photovoltaic output constraints:
0 ≤ P^WT_i,t ≤ P̂^WT_i,t
0 ≤ P^PV_i,t ≤ P̂^PV_i,t
where P̂^WT_i,t and P̂^PV_i,t are the predicted wind turbine and PV outputs, respectively.
(3) Operational constraints of energy storage:
0 ≤ P^ch_i,t ≤ U_i(t)·P^ch_max
0 ≤ P^dis_i,t ≤ (1 − U_i(t))·P^dis_max
E_i,t = E_i,t−1 + η·P^ch_i,t − P^dis_i,t/η
0 ≤ E_i,t ≤ E^max_i
where P^ch_i,t and P^dis_i,t denote the charging and discharging power at node i at time t; P^ch_max and P^dis_max denote the maximum charging and discharging power of the energy storage device, respectively. U_i(t) denotes the 0-1 variable for the charging and discharging states at node i at time t, where 1 is charging and 0 is discharging. E_i,t and E^max_i denote the existing and maximum energy stored in the energy storage device at node i at time t, respectively, and η denotes the charging and discharging efficiency of the energy storage device.
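A CVXPY sketch of these storage constraints is shown below; the parameter values are illustrative, the terminal-energy condition is an added assumption, and a MIP-capable solver is required because of the 0-1 indicator.

```python
import cvxpy as cp

T, eta = 24, 0.95
P_ch_max, P_dis_max, E_max, E0 = 0.5, 0.5, 2.0, 1.0  # illustrative values
c_es = 0.05                                          # storage cost coefficient

P_ch = cp.Variable(T, nonneg=True)
P_dis = cp.Variable(T, nonneg=True)
U = cp.Variable(T, boolean=True)   # 1 = charging, 0 = discharging
E = cp.Variable(T + 1)

cons = [E[0] == E0, E[T] == E0,    # assumed cyclic energy condition
        P_ch <= P_ch_max * U,
        P_dis <= P_dis_max * (1 - U),
        E[1:] == E[:-1] + eta * P_ch - P_dis / eta,
        E >= 0, E <= E_max]
cost = c_es * eta * cp.sum(P_ch + P_dis)   # mirrors the C_ES term above
prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve(solver=cp.GLPK_MI)              # any MIP-capable solver works
```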
Operational constraints of power electronic transformers
In this study, we considered an AC/DC/AC-type PET with a DC section, which can be connected to multiple AC/DC distribution networks simultaneously because of its multi-port feature and can realize the power interaction between each sub-network and the higher-level network through its AC/DC ports. Considering the limitations of the PET, the amount of power interaction at each port is constrained. Figure 2 shows a schematic diagram of the energy flow of the PET, where P^MPET_in,t and P^MPET_out,t are the active powers exchanged between the medium-voltage AC side port of the PET and the main network at moment t; P^ACPET_in,t and P^ACPET_out,t are the active powers exchanged between the low-voltage AC side of the PET and the AC distribution network at moment t; and P^DCPET_in,t and P^DCPET_out,t are the active powers exchanged between the low-voltage DC side of the PET and the DC distribution network at moment t (Zhang et al., 2017).
Letting the loss factor of the PET be k p and simplifying the PET to a single node (Li et al., 2019; Li et al., 2021), we obtain the port power balance. Capacity constraints are then imposed on the PET ports, and linearizing the non-linear terms in these constraints transforms them into rotated cone constraints, where S MPET max, S ACPET max, and P DCPET max are the power limits of the medium-voltage AC, low-voltage AC, and low-voltage DC ports of the PET, respectively.
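A sketch of the lossy port balance implied by the loss factor k_p, assuming losses are charged in proportion to the total input power (the paper's exact typeset form does not survive in this text):
$$\left(1-k_{p}\right)\left(P^{MPET}_{in,t}+P^{ACPET}_{in,t}+P^{DCPET}_{in,t}\right)=P^{MPET}_{out,t}+P^{ACPET}_{out,t}+P^{DCPET}_{out,t}$$
with, for example, the DC port bounded as $0\le P^{DCPET}_{in,t},\,P^{DCPET}_{out,t}\le P^{DCPET}_{\max}$ and the AC ports limited by their apparent-power ratings $S^{MPET}_{\max}$ and $S^{ACPET}_{\max}$.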
Uncertainty set of wind turbine and photovoltaic power output
Owing to the stochastic uncertainty of wind power and PV output, we used an uncertainty set to characterize the uncertainty of the wind and PV output: where u WT i,t and u PV i,t are the actual wind and PV power outputs, which are the uncertain variables; û WT i,t and û PV i,t are the predicted values of wind and PV power; and Δu WTmax i and Δu PVmax i are the maximum deviations allowed for wind and PV power, respectively.
To regulate the uncertainty of the model and thereby control its conservativeness, the temporal regulation parameters Γ T WT, Γ T PV and spatial regulation parameters Γ S WT, Γ S PV were introduced. They bound, respectively, the number of worst-case periods in one operating cycle and the number of wind turbines and photovoltaic units taking their worst case simultaneously. The specific expressions are as follows: where B WT i,t and B PV i,t are 0-1 variables indicating whether the ith wind power or PV unit takes its worst case at time t.
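A standard budget-of-uncertainty form consistent with these definitions (assuming, as is usual in robust dispatch, that the worst case is a downward deviation of renewable output) is
$$u^{WT}_{i,t}=\hat{u}^{WT}_{i,t}-B^{WT}_{i,t}\,\Delta u^{WT\max}_{i},\qquad u^{PV}_{i,t}=\hat{u}^{PV}_{i,t}-B^{PV}_{i,t}\,\Delta u^{PV\max}_{i}$$
$$\sum_{i}B^{WT}_{i,t}\le\Gamma^{S}_{WT}\;\;\forall t,\qquad \sum_{t}B^{WT}_{i,t}\le\Gamma^{T}_{WT}\;\;\forall i$$
and analogously for PV. Setting all Γ parameters to zero recovers the deterministic model, as verified numerically later.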
Two-stage robust optimization model
As mentioned above, the optimization objective of the proposed model was to minimize the cost of running one cycle, and the objective function can be expressed in the form of Eq. 1. Without considering the uncertainty of wind and PV output, the objective function can be written in a compact form. When the uncertain wind and PV output is considered, a two-stage robust optimization approach can be used to find the dispatch with the lowest cost of operating one cycle when the uncertain wind and PV output takes its worst realization within the preset uncertainty set. Mathematically, the outer layer is the first-stage minimization problem with x as the optimization variable, and the inner layer is the second-stage max-min problem with u and y as the optimization variables. The first-stage minimization problem is the objective of this study, that is, minimizing the cost of running one cycle, and Ω(x, u) represents the feasible domain of y for a given pair (x, u), where α, β, γ, δ, λ, and μ are the dual vectors corresponding to each constraint matrix.
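In generic compact notation (the paper's exact cost vectors do not survive in this text), the two-stage problem reads
$$\min_{x}\;c^{T}x+\max_{u\in U}\;\min_{y\in\Omega(x,u)}b^{T}y$$
where x collects the first-stage (here-and-now) decisions, u the uncertain wind and PV output drawn from the uncertainty set U, and y the second-stage (wait-and-see) operating variables.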
Model solving
To facilitate the solution, the above optimization model must be transformed into the standard two-stage robust optimization form, which is a min-max-min multilayer optimization problem and is difficult to solve using general methods. The commonly used methods for such problems are the Benders decomposition method and the column-and-constraint generation (CCG) algorithm; compared with Benders decomposition, the CCG algorithm typically requires shorter computation time and fewer iterations, so it was used to solve the two-stage robust optimization problem in this paper (Zeng and Zhao, 2013). The optimization problem was decomposed into a master problem and a subproblem: the master problem (min) provides a lower bound, whereas the subproblem (max-min) identifies a worst-case environment in the uncertainty set and provides an upper bound for the model. The model is then iterated so that the gap between the upper and lower bounds gradually decreases until the preset convergence condition is reached and the desired optimization result is obtained. The master problem, which provides the lower bound for the model, is
$$\min_{x,\,y_{l},\,\pi}\;\pi$$
$$\text{s.t.}\quad\pi\ge c^{T}y_{l},\quad Dy_{l}\ge d,\quad Ky_{l}=0,\quad Fx+Gy_{l}\ge h,\quad I_{u}y_{l}=u^{*}_{l},\quad\left\|My_{l}\right\|_{2}\le g^{T}y_{l},\qquad\forall l\le k$$
where k is the number of current iterations, l indexes the historical iterations, y l is the solution of the subproblem after l iterations, and u* l is the value of the uncertain variable u under the worst conditions obtained after the lth iteration.
The objective of the subproblem was to derive the worst-case scenario. With (x, u) given, the inner problem of the subproblem can be viewed as a deterministic problem, and its equations are transformed into dual form by the method mentioned above, thus converting the min problem into a max problem that merges with the outer maximization over u ∈ U and the dual variables α, β, γ, δ, ε, ϵ for an easy solution. The result obtained from the subproblem provides the upper bound for the whole model. The specific iteration process is shown in Figure 3.
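As a minimal sketch of the CCG iteration logic described above, assuming `solve_master` and `solve_subproblem` are user-supplied wrappers around the MILP/SOCP solver (hypothetical names, not from the paper):

```python
def ccg(solve_master, solve_subproblem, eps_c=0.01, max_iter=50):
    """Column-and-constraint generation loop in the style of Zeng and Zhao (2013).

    solve_master(worst_cases) -> (x, lower_bound): master problem over the
        first-stage variables plus one recourse copy y_l per recorded
        worst-case realization u*_l (in practice, worst_cases is often
        seeded with the forecast scenario so the first master is bounded).
    solve_subproblem(x) -> (u_star, total_cost): dualized second-stage
        max-min problem evaluated at the fixed first-stage decision x.
    """
    worst_cases = []                          # accumulated u*_l realizations
    lb, ub = float("-inf"), float("inf")
    for k in range(1, max_iter + 1):
        x, lb = solve_master(worst_cases)     # master -> lower bound
        u_star, cost = solve_subproblem(x)    # subproblem -> upper bound
        ub = min(ub, cost)
        if ub - lb <= eps_c:                  # convergence test (eps_c = 0.01 here)
            return x, lb, ub, k
        worst_cases.append(u_star)            # add new columns and constraints
    raise RuntimeError("CCG did not converge within max_iter iterations")
```

Each iteration adds one recourse copy (the "columns") and the associated constraints to the master, so the lower bound is monotonically non-decreasing while the upper bound is non-increasing.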
Test platform and model parameters setting
To verify the correctness and effectiveness of the proposed two-stage robust optimization method for hybrid AC-DC distribution networks with PET, the YALMIP toolbox together with the CPLEX and Gurobi solvers was used to solve the model. The hardware platform was an AMD Ryzen 7 4800H at 2.90 GHz with 16 GB RAM, the operating system was Windows 10, and the software environment was MATLAB R2017b. The structure of the test system used in this study is shown in Figure 4.
As shown in Figure 4, this study adopted a hybrid AC-DC distribution network model built by combining two improved IEEE33 node models, where the red lines indicate the AC part of the distribution network and the blue lines indicate the DC part. The PET connects the AC and DC parts as well as the main network, playing the role of an energy hub. The voltage of the AC part was 12.66 kV and that of the DC part was 15 kV. The node voltages in the distribution network were limited to V i ∈ [0.95, 1.05] pu. The maximum interaction power of the PET with the superior grid was S MPET max = 12,000 kVA, the maximum interaction power with the AC distribution network was S ACPET max = 12,000 kVA, and the maximum interaction power with the DC distribution network was P DCPET max = 1,000 kW. The loss coefficient of the PET was k p = 0.05, and the iterative convergence accuracy of the CCG algorithm was set to ε c = 0.01. The network was connected to energy storage devices (ES) and micro turbines (MT) as controllable distributed power supplies, and to WT and PV units as uncontrollable distributed power supplies; the specific distribution is shown in Figure 4. When the energy supply in the AC or DC sub-network is insufficient, the other sub-network or the higher-level grid can supply energy to it through the PET. When there is a surplus of new energy in the DC sub-network, it can likewise be transmitted to the AC sub-network through the PET, thus realizing peak shaving and valley filling in the distribution network and maximizing the economic benefit. The line parameters of the IEEE33 node system are detailed in Kashem et al. (2000). The specific parameters of some devices are listed in Tables 1-4.
Results of running the two-stage robust optimization model
This example sets the spatial and temporal uncertainty regulation parameters to Γ S WT = Γ S PV = 2 and Γ T WT = Γ T PV = 12. Figure 5 shows the prediction curves of the wind turbine and photovoltaic outputs. The peak load was generally concentrated in the midday and evening hours, and the trough occurred in the early-morning hours. PV generated the most power at noon and hardly any at night, so wind power, PV, and MT must be combined to provide the power required by the load, with the PET playing the role of energy router. Figure 6 and Figure 7 show the power curves of the PET AC and DC ports and the power purchased by the PET from the main grid, respectively. Figure 6 shows that during the hours of high PV generation around noon, energy mainly flowed from the DC to the AC port, and the power purchased from the main grid during this time decreased. This reduced the costs of power purchase and the abandonment penalty owing to the consumption of new energy, thus achieving both cost saving and new-energy accommodation. When there was no PV power at night and the DC subgrid was short of power, energy flowed from the AC to the DC port to ensure that the load was supplied. Comparing Figure 5 and Figure 8, the wind turbine generation was higher during the 5-10 h period, reducing the micro turbine generation at this time and thus the system generation cost. Additionally, the consumption of excess wind power reduced the cost of the wind abandonment penalty.
FIGURE 5
Output prediction curve of wind turbine and photovoltaic.
FIGURE 6
AC/DC port power of PET.
Figure 9 shows the energy storage power curves. When the overall new-energy generation of the system was too large, the surplus power could be transferred to the AC subnetwork through the PET and stored in the energy storage devices, reducing the cost of wind and solar curtailment. In addition, when the overall system power was insufficient, the storage could discharge to reduce power purchase and generation. When the system as a whole was short of power, or when the cost of purchasing power from the higher-level grid was lower than the cost of generating it, power could be purchased from the higher-level grid through the corresponding PET interaction port to meet the system's demand. Figure 10 shows the voltage curves of the DC part of the model. Owing to the limit on the number of figures, we did not depict the nodal voltage curves of the AC part, as they are similar to those of the DC part.
Comparison with deterministic and stochastic optimization models
The deterministic and stochastic optimization models were compared with the two-stage robust optimization model proposed in this study. By comparing the cost of operating the distribution network for one cycle under these conditions, the superiority of the proposed model was verified. Furthermore, the impact of the uncertainty parameters on the conservativeness of the model was analyzed by comparing the costs under different uncertainty parameters and numbers of iterations.
FIGURE 7
Purchased power to the main grid.
FIGURE 8
Power curve of micro turbine.
FIGURE 9
Energy storage curve.
FIGURE 10
Voltage curve of DC section.
The model used in this study can be set with different uncertainty adjustment parameters for different DGs; however, for ease of presentation, the same uncertainty adjustment parameters were used for every DG in this study.
To verify the control effect of the uncertainty regulation parameters on the conservativeness of the model, several comparison tests were designed, as shown in Table 5. The uncertain model was equivalent to the deterministic model when the uncertainty parameters were equal to 0. As the spatial and temporal uncertainty regulation parameters of the system increased, the number of units taking the worst case simultaneously and the total number of worst-case periods in one operation cycle also increased. The uncertainty considered by the system therefore increased, raising the cost of the model over one operation cycle, while the computation time and number of iterations decreased. This indicates that the more uncertainty the model considered, the worse the simulated operating conditions were, and the more conservative and costly the model became. Although the deterministic model had the lowest operating cost, it was not robust and thus could not cope with the uncertainty of new energy sources. The two-stage robust optimization model used in this study had a higher cost than the deterministic model, but it was robust because it considered the uncertainty of new energy. The larger the uncertainty parameters were, the more robust the model was and the better it could cope with uncertainty.
Compared with the stochastic optimization model commonly used in the literature mentioned previously, the cost of running one cycle of the stochastic optimization model lay between those of the two-stage robust and deterministic models. However, because the stochastic optimization model requires many scenarios to be considered in the calculation, its computation is slower, making its calculation time longer than that of the model used in this study. Additionally, the stochastic optimization model cannot guarantee the conservativeness of the results, and the results have a certain probability of violating operating limits, which is not conducive to the secure power supply of the distribution network. The model used in this study can tune the temporal and spatial uncertainty adjustment parameters according to the actual situation to control the number of DGs taking worst-case output simultaneously in one cycle, and thus the cost of running the model for one cycle. Therefore, the model used in this study offers higher controllability and robustness when dealing with practical problems.
Costs for different power supply configurations
The spatial uncertainty regulation parameters were set to Γ S WT = Γ S PV = 2 and the temporal uncertainty regulation parameters to Γ T WT = Γ T PV = 12. The cost is f = 1,518,673.24 when the power sources in the optimization model include MT, ES, and distributed new energy; this case is used as the control group for comparing the cost changes in the other cases. When the model contains only MT and distributed new energy, the cost is f = 1,759,756.56. The cost increases because, without ES, peak shaving and valley filling are impossible, which raises the wind and solar curtailment penalty; moreover, when the new-energy output decreases, there is no stored energy to discharge, so the system can only rely on MT generation and power purchased from the upper grid, which increases the cost. When the model contains only MT, the overall power output of the system is too small to maintain the power balance; as a result, the model cannot converge and no cost result can be obtained.
Conclusion
This study established an optimal operation model of a hybrid AC-DC distribution network with PET based on a two-stage robust optimization method, which accounts for the uncertainty of wind and PV generation. While ensuring the safety and reliability of the distribution network, the AC and DC parts of the distribution network and the upper-level grid are connected by the PET to improve the utilization rate of new energy and ensure the safe and economic operation of the distribution network. A comparison of the proposed model with deterministic and stochastic optimization models indicates that the model is more robust and can regulate the uncertainty of the system through the uncertainty parameters. However, the method used in this paper has the disadvantage of higher cost; as a next step, we will consider modeling the uncertainty of renewable energy output in an AC-DC distribution network containing PET using a data-driven approach with a large amount of historical renewable energy data.
Data availability statement
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
Author contributions
HD wrote the original draft. MW and ZT provided the supervision, review, and editing of the draft. All authors contributed to the article and approved the submitted version.
Funding
National Natural Science Foundation of China (61872230). | 2023-02-25T16:02:21.250Z | 2023-02-23T00:00:00.000 | {
"year": 2023,
"sha1": "559b0cc7228d2ace3f00367a96aee3e0b502a818",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fenrg.2023.1148734/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "06012fca692d2e03d06ce08066014db389abd940",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
119247235 | pes2o/s2orc | v3-fos-license | Sensitivity for 21cm Bispectrum from Epoch of Reionization
The 21cm line brightness temperature brings rich information about the Epoch of Reionization (EoR) and the high-$z$ universe (Cosmic Dawn and Dark Age). While the power spectrum is a useful tool to investigate the EoR signal statistically, higher-order statistics such as the bispectrum are also valuable because the EoR signal is expected to be highly non-Gaussian. In this paper, we develop a formalism to calculate the bispectrum contributed from the thermal noise taking the array configuration of telescopes into account, by extending a formalism for the power spectrum \cite{2006ApJ...653..815M}. We apply our formalism to ongoing and future telescopes such as the expanded Murchison Widefield Array (MWA), the LOw Frequency ARray (LOFAR) and the Square Kilometre Array (SKA). We find that the expanded MWA does not have enough sensitivity to detect the bispectrum signal. On the other hand, LOFAR has better sensitivity and will be able to detect the peaks of the bispectrum as a function of redshift at large scales with comoving wavenumber $k \lesssim 0.03 {\rm Mpc}^{-1}$. The SKA has enough sensitivity to detect the bispectrum at much smaller scales $k \lesssim 0.3 {\rm Mpc}^{-1}$ and redshifts $z \lesssim 20$.
INTRODUCTION
The redshifted 21cm line emission from neutral hydrogen is a promising way to probe the Epoch of Reionization (EoR), Cosmic Dawn and Dark Age (Pritchard & Loeb 2012) because it reflects the physical state of the intergalactic gas. Actually, the brightness temperature depends on quantities crucial for the understanding of these epochs, such as the neutral hydrogen fraction, spin temperature and baryon density. However, the observation of the redshifted 21cm signal is very challenging due to the presence of Galactic and extragalactic foreground emissions. Low-frequency radio telescopes such as the Murchison Widefield Array (MWA) (Beardsley et al. 2013), LOw Frequency ARray (LOFAR) (Jensen et al. 2013) and PAPER (Parsons et al. 2014) have started their observations and set upper bounds on the brightness temperature. The upper bounds will improve further as our understanding of the foreground proceeds and the subtraction techniques become more sophisticated. Ultimately, the Square Kilometre Array (SKA) (Dewdney et al. 2013; Mellema et al. 2013) will perform precise observations and will reveal the physical processes of the EoR and Cosmic Dawn.
One of the useful tools to extract information from observed data is to take the power spectrum of fluctuations in the brightness temperature at a fixed redshift (frequency). This is effective even for relatively low S/N data, which could be obtained by ongoing telescopes, while making a map of the brightness temperature through imaging requires the much higher sensitivity that the SKA is expected to have. Actually, the power spectrum of the brightness temperature has been studied by many authors (Pritchard & Furlanetto 2007; Pober et al. 2014; Furlanetto et al. 2006; Baek et al. 2010; Mesinger et al. 2014; Santos et al. 2008).
When fluctuations follow a Gaussian probability distribution, they can be well characterized by the power spectrum, and higher-order statistics such as the bispectrum and trispectrum carry no further independent information. However, since reionization is a highly non-Gaussian process which involves non-linear density fluctuations, star formation and the expansion of HII bubbles, the brightness temperature fluctuations are also expected to be strongly non-Gaussian. In this case, the power spectrum does not have sufficient information to describe the fluctuations, and higher-order statistics have independent and complementary information (Cooray 2005; Pillepich et al. 2007).
In this paper, we develop a formalism to calculate the errors in bispectrum measurements contributed from thermal noise. Noise estimation has been studied by many authors in the case of the power spectrum (Morales & Hewitt 2004; Morales 2005; McQuinn et al. 2006), and we extend the formalism given in McQuinn et al. (2006). Starting from the error in the visibility obtained by a single baseline, we consider its summation over the baseline distribution in the uv plane. A striking feature of the thermal-noise bispectrum is that its ensemble average vanishes because thermal noise is Gaussian. Nevertheless, thermal noise contributes to the bispectrum error through its variance. Considering the variance of the thermal-noise error is the main extension to the previous formalism.
The structure of this paper is the following. In section 2, we define the brightness temperature, its power spectrum and bispectrum. In section 3, we review the formalism for the calculation of the thermal-noise power spectrum given by McQuinn et al. (2006). Then, we develop a formalism for the bispectrum and estimate the thermal-noise bispectrum for several specific configurations of the wave numbers in section 4. The summary and discussion are given in section 5. Throughout this paper, we assume a ΛCDM cosmology with (Ω m , Ω Λ , Ω b , H 0 ) = (0.27, 0.73, 0.046, 70 km/s/Mpc) (Komatsu et al. 2011).
21CM LINE SIGNAL
In this section, we define basic quantities concerning the 21cm signal. The brightness temperature δT b is defined by the offset of the spin temperature from the CMB temperature, where x HI is the neutral fraction of hydrogen, δ m is the matter overdensity, H is the Hubble parameter and dv r /dr is the velocity gradient along the line of sight. Then we introduce the fluctuation of δT b (x), where the bar denotes the average value of the brightness temperature and x is the spatial position. The power spectrum of the brightness temperature is defined from its Fourier transform, δ 21 (k), where ⟨...⟩ represents the ensemble average and k is the position in Fourier space. The bispectrum B 21 can be defined in a similar way. Here the delta function forces the three wave vectors to form a triangle, and B 21 depends on only two of the three vectors (chosen as k 1 and k 2 here) due to this triangle condition.
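The defining relations referred to above do not survive in this text; a standard set consistent with the definitions in the prose (e.g., Furlanetto et al. 2006) is
$$\delta T_b \simeq 27\, x_{\rm HI}\left(1+\delta_m\right)\left(\frac{\Omega_b h^2}{0.023}\right)\left(\frac{0.15}{\Omega_m h^2}\,\frac{1+z}{10}\right)^{1/2}\left(1-\frac{T_\gamma}{T_S}\right)\left(\frac{H}{dv_r/dr+H}\right)\;{\rm mK},$$
$$\delta_{21}(\mathbf{x})=\delta T_b(\mathbf{x})-\overline{\delta T_b},$$
$$\langle\delta_{21}(\mathbf{k}_1)\,\delta_{21}(\mathbf{k}_2)\rangle=(2\pi)^3\,\delta_D(\mathbf{k}_1+\mathbf{k}_2)\,P_{21}(k_1),$$
$$\langle\delta_{21}(\mathbf{k}_1)\,\delta_{21}(\mathbf{k}_2)\,\delta_{21}(\mathbf{k}_3)\rangle=(2\pi)^3\,\delta_D(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3)\,B_{21}(\mathbf{k}_1,\mathbf{k}_2),$$
where T_γ and T_S are the CMB and spin temperatures, and whether δ 21 is normalized by the mean does not affect the structure of these definitions.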
POWER SPECTRUM SENSITIVITY
In this section, we summarize a formalism to estimate the thermal noise for the power spectrum, following McQuinn et al. (2006). First, we define the visibility V(u, v, ν) for a pair of antennae, where T N is the thermal-noise temperature, n̂ is the direction of the primary beam, ν is the observed frequency and W(n̂, ν) is a product of the window functions concerning the field of view and the bandwidth. The rms thermal-noise fluctuation per visibility is expressed in terms of the observed wavelength λ, the total system temperature T sys, the effective area of an antenna A e, the width of the frequency channel Δν and the total observing time t 0. By Fourier transforming the visibility in the frequency direction, we obtain Ĩ, where B(≫ Δν) is the bandwidth, ν i is the i-th frequency channel and we define u = (u, v, η). The covariance matrix of the detector noise for a single baseline is then obtained by using the definition of the power spectrum for the noise temperature (Eq. 3) in the third equality and assuming that the covariance vanishes when u i ≠ u j in the last equality. Further, we assume that the power spectrum is constant over the range where the window function has a non-zero value, so that P N can be pulled out of the integration. The integration of the window functions can then be evaluated, where Ω is the field of view. On the other hand, the covariance matrix for a single baseline can be evaluated from Eq. (6); again, we assume that there is no correlation between the thermal noise with different u, v and ν. If multiple baselines contribute to the same pixel, the observing time is effectively increased. Here we assume that the number density of the baselines in the uv-plane is invariant under rotation with respect to the η-axis, that is, it depends only on |u ⊥ | = |u| sin θ, where θ is the angle between u and the η-axis. Therefore, the effective observing time t u can be written in terms of A e /λ², which represents the area per pixel on the uv-plane and reflects the uv-plane resolution, and n(|u| sin θ), the number density of baselines on the uv-plane. Thus, we obtain the covariance matrix for a pixel in uvη-space by replacing t 0 with t u. Comparing with Eq. (10) and substituting Eq. (12), we obtain the noise power spectrum. Now we convert the noise power spectrum from u space to the cosmological Fourier space k, using the standard relations involving the Hubble constant H 0, the frequency f 21 of the 21cm radiation and the matter density parameter Ω M, assuming a flat universe. Because the power spectrum of the 21cm signal depends only on the length of the wave vector, we take a sum of the above noise power spectrum over the spherical shell corresponding to the same k. First, we consider an annulus with radial width Δk and angular width Δθ. Noting that the baseline distribution is assumed to be uniform within an annulus, the number of pixels in the annulus N a is given by the annulus volume in Fourier space, 2πk² sin θ Δθ Δk, divided by the Fourier-space resolution (2π)³/V, where V = λ²x²y/A e is the observed volume in real space. The noise power spectrum is then reduced by a factor of 1/√N a. Next, we consider a sum over θ. Taking Δk = ǫk, where ǫ is a constant factor, the spherically averaged sensitivity is given by an integral over θ, where k * is the longest transverse wave vector, corresponding to the maximum baseline length, and the lower limit of the integral corresponds to the pixel size.
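Two of the key expressions referenced above are standard and can be restated (in the forms commonly quoted from McQuinn et al. 2006; the original equation numbering does not survive here). Denoting the rms noise per visibility as ΔT^N,
$$\Delta T^{N}(u,v,\nu)=\frac{\lambda^{2}\,T_{\rm sys}}{A_{e}\sqrt{\Delta\nu\;t_{0}}},\qquad t_{u}=\frac{A_{e}}{\lambda^{2}}\,n\!\left(|u|\sin\theta\right)t_{0},$$
so that the per-pixel noise power spectrum scales as $P_N\propto T_{\rm sys}^{2}/t_{u}$: doubling the local baseline density halves the noise power.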
BISPECTRUM SENSITIVITY
In this section, we estimate the bispectrum from the thermal noise in a similar way to the previous section. However, we should note that, because the thermal noise is Gaussian, its bispectrum is actually zero. Nonetheless, its statistical fluctuation, that is, its variance, is non-zero and contributes to the noise in the bispectrum signal. Thus, the calculation in this case is more subtle than that of the power spectrum, although we can use similar techniques, as we see below. In Saiyad Ali et al. (2006), an order-of-magnitude estimation of the thermal-noise bispectrum was made without considering this fact or the baseline distribution.
covariance of bispectrum
Remembering the definition of the bispectrum in Eq. (4), the covariance of the bispectrum can be defined so that each bispectrum satisfies the triangular condition (u 1 + u 2 + u 3 = 0 and u 4 + u 5 + u 6 = 0); this reflects the fact that there is no correlation unless the two triangles, (u 1 , u 2 , u 3 ) and (u 4 , u 5 , u 6 ), coincide. Next, we consider the ensemble average of the product of six noise intensities, denoted as C B (Eq. 24). To proceed further, we substitute Eq. (31) and consider the first term in Eq. (23).
This is non-zero only when u 1 ≈ u 4 and u 2 ≈ u 5 (and then u 3 ≈ u 6 from the triangular conditions). If these conditions are satisfied, we can use Eq. (9) and assume that Cov(B N (u 1 , u 2 , u 3 )B N (u 4 , u 5 , u 6 )) is approximately constant within the window function; thus, taking the other terms in Eq. (23) into account, we obtain the covariance. On the other hand, the product of six noise intensities can also be calculated as follows.
Here, we used Wick's theorem (Joachimi et al. 2009) in the second equality and Eq. (13) in the last equality. Thus, from Eqs. (28) and (29), we obtain the covariance of the noise bispectrum. Converting the argument from u to k, we finally obtain the desired expression, which corresponds to Eq. (13) for the power spectrum once Eq. (12) is substituted.
spherical average
In this subsection, we take a sum of the noise bispectrum over a spherical shell, as we did for the power spectrum in the previous section. However, the situation is much more complicated in the case of the bispectrum, because |k 1 |, |k 2 | and |k 3 | can in general all differ from each other, so we must consider two spherical shells with radii |k 1 | and |k 2 |, while |k 3 | is determined by the triangular condition k 1 + k 2 + k 3 = 0. In this paper, we calculate the noise bispectrum for the equilateral type (|k 1 | = |k 2 | = |k 3 |) and the isosceles type (|k 2 | = |k 3 |), and define K ≡ |k 1 | and k ≡ |k 2 | = |k 3 |.
First, as in the case of the power spectrum, k 1 can run over a spherical shell with radius K, which can be parametrized by two of the spherical coordinates of k 1 , (θ 1 , φ 1 ). Further, for a fixed k 1 , there is a rotational degree of freedom for k 2 with respect to k 1 , which is denoted by an angle α with 0 ≤ α < 2π. Thus, we need to integrate the covariance matrix in Eq. (31) with respect to θ 1 , φ 1 and α. Noting that the covariance matrix does not depend on φ 1 , the weight of the integration, which corresponds to Eq. (20), consists of a first factor that comes from the sum for k 1 over the spherical shell and a second factor that takes into account the rotational degree of freedom of k 2 for each k 1 . Here θ 2 is the polar angle of k 2 and γ is the angle between ∂k 2 /∂α and ∂k 2 /∂θ 2 . Δθ 2 is the width of the annulus of k 2 when k 1 is fixed, which we set equal to the resolution in Fourier space, 2π/V 1/3 . It is convenient to express θ 2 by θ 1 , α and the angle between k 1 and k 2 , denoted as β. Noting that k 2 can be expressed as k 2 = k(cos θ 1 cos α sin β + sin θ 1 cos β, sin α sin β, − sin θ 1 cos α sin β + cos θ 1 cos β), we obtain cos θ 2 = − sin θ 1 cos α sin β + cos θ 1 cos β. Then, setting Δk = ǫk and ΔK = ǫK, the bispectrum variance due to the thermal noise is written as an integral with respect to θ 1 and α. This is a general expression for the isosceles-type bispectrum. For the equilateral type, we simply set K = k and β = 2π/3.
estimation of noise bispectrum
To calculate the bispectrum sensitivity, we need the number density of baselines on the uv-plane. In this paper, we consider the expanded MWA, LOFAR and the SKA. The expanded MWA will have 500 antennae within a radius of 750 m with an r −2 distribution (Bowman et al. 2006). LOFAR has 24 antennae within a radius of 2000 m with an r −2 distribution (van Haarlem et al. 2013). The SKA will have 466 antennae within 600 m with an r −2 distribution, 670 antennae within 1000 m, and 866 antennae within 3000 m (Dewdney et al. 2013). For simplicity, we assume that the antenna density is constant between 600 m and 1000 m and between 1000 m and 3000 m, respectively. We list the parameters in Table 1. Further, we assume t 0 = 1000 hours for the total observing time and a 6 MHz bandwidth.
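As an illustration of how such an antenna layout turns into a uv-plane baseline density, a minimal numpy sketch is given below; the array sizes are taken from the text, while the Monte Carlo gridding is purely our illustrative assumption, normalized so that the density integrates to the N(N−1)/2 baselines of an N-element array.

```python
import numpy as np

def uv_density(n_ant, r_max_m, wavelength_m, n_bins=200):
    """Azimuthally averaged baseline density n(u) for an r^-2 antenna layout.

    Antennas are sampled with areal density ~ r^-2 (radial pdf ~ r^-1),
    all N(N-1)/2 baselines are formed, and their lengths in wavelengths
    are binned into annuli to give a density per unit uv area.
    """
    rng = np.random.default_rng(0)
    r_min = 1.0  # m, inner cutoff keeping the r^-1 radial pdf normalizable
    r = r_min * (r_max_m / r_min) ** rng.random(n_ant)  # inverse-CDF sampling
    phi = 2 * np.pi * rng.random(n_ant)
    x, y = r * np.cos(phi), r * np.sin(phi)
    iu = np.triu_indices(n_ant, k=1)                    # unique antenna pairs
    dx = (x[:, None] - x[None, :])[iu]
    dy = (y[:, None] - y[None, :])[iu]
    u = np.hypot(dx, dy) / wavelength_m                 # baseline length in uv units
    edges = np.linspace(0.0, u.max(), n_bins + 1)
    counts, _ = np.histogram(u, bins=edges)
    area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)   # annulus areas
    return 0.5 * (edges[1:] + edges[:-1]), counts / area

# Example: expanded-MWA-like layout; 21cm at z ~ 10 has wavelength ~ 2.3 m
u_mid, n_u = uv_density(n_ant=500, r_max_m=750.0, wavelength_m=2.3)
```

The resulting n(u) is what enters the effective observing time t_u and hence the noise power spectrum and bispectrum estimates.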
For comparison, we show the bispectrum of 21cm signal from the epoch of reionization, using a public code, 21cmFAST (Mesinger et al. 2011). This is based on a semi-analytic model of reionization and we can obtain 3D brightness temperature maps at arbitrary redshifts. We set the simulation box to (200 Mpc) 3 with 300 3 grids and take a set of model parameters as (ζ, ζ X , T vir , R mfp ) = (31.5, 10 56 /M ⊙ , 10 4 K, 30 Mpc). Here, ζ is the ionizing efficiency, ζ X is the number of X-ray photons per solar mass, T vir is the minimum virial temperature of halos which host stars and R mfp is the mean free path of ionizing photons.
In Fig. 1, we compare the equilateral-type bispectrum signal with the thermal noise at z = 8, 10, 12 and 17. Generally, the noise increases toward smaller scales, which reflects the deficiency of longer baselines. On the other hand, the sensitivity at larger scales is limited by the survey volume. We see that the signals are larger than the SKA noise for k ≲ 0.3 Mpc −1 at all redshifts. However, the thermal noise dominates over the signal for the expanded MWA at almost all scales and redshifts, while the bispectrum may be observable at large scales k ≲ 0.05 Mpc −1 at z = 10. LOFAR has better sensitivity, and the signal will be observable at scales with k ≲ 0.1 Mpc −1 at z = 10 and 17. Here it should be noted that the bispectrum signal has several peaks as a function of redshift, and they are at z = 10 and 17. The peak redshifts depend on the specific values of the model parameters, and observing them will give us information on the process of reionization. Thus, LOFAR is expected to be sensitive enough to detect the bispectrum at the peak redshifts at large scales.
The isosceles-type bispectra with K = 0.06 Mpc −1 are plotted in Fig. 2. The behavior and relative amplitudes of the signal and noise are very similar to those of the equilateral type, but the SKA is more sensitive at smaller scales.
SUMMARY AND DISCUSSION
In this paper, we estimated the bispectrum of the thermal noise for redshifted 21cm signal observations of the Epoch of Reionization by extending the formalism for the noise power spectrum estimation given by McQuinn et al. (2006). Because the thermal noise is assumed to be Gaussian, the ensemble average of its bispectrum vanishes, and its variance contributes to the noise in the bispectrum signal. We developed a formalism to calculate the noise bispectrum for an arbitrary triangle, taking the array configuration into account. We applied it to the cases of equilateral and isosceles triangles and estimated the noise bispectrum for the expanded MWA, LOFAR and the SKA. Consequently, it was found that the SKA has enough sensitivity for k ≲ 0.3 Mpc −1 for both types of triangles. On the other hand, LOFAR will have sensitivity to the peaks of the bispectrum as a function of redshift. The expanded MWA has even less sensitivity, but it will be possible to put meaningful constraints on model parameters which induce larger signals than those with the parameters used in this paper.
Not only the thermal noise but also the bispectrum signal depends on the configuration of the triangle of the three wave numbers. It is possible that the signal bispectrum has a large amplitude for a specific configuration of the triangle, and observation may become easier in that case. An investigation of the details of the bispectrum signal and a comparison with the noise bispectrum will be presented elsewhere (Shimabukuro et al. 2014b).
Actually, the thermal noise is just one of many obstacles to the observation of the 21cm signal. Other serious sources of noise are the Galactic and extragalactic foregrounds and sample variance, and the foreground emission has not been well understood even for the power spectrum. However, the observation of the bispectrum is very important because the 21cm signal from the Epoch of Reionization is highly non-Gaussian, so the bispectrum will give us enormous information complementary to the power spectrum.
"year": 2014,
"sha1": "acff5fe172d938caf32c732477d739bab2c54b4b",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/451/1/266/4167745/stv855.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "acff5fe172d938caf32c732477d739bab2c54b4b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
202861111 | pes2o/s2orc | v3-fos-license | Roles of aberrant hemichannel activities due to mutant connexin26 in the pathogenesis of KID syndrome
Germline missense mutations in GJB2 encoding connexin (Cx) 26 have been found in keratitis, ichthyosis and deafness (KID) syndrome. We explored the effects of three mouse Cx26 mutants (Cx26-G12R, -G45E and -D50N) corresponding to KID syndrome-causative human mutants on hemichannel activities leading to cell death and on the expression of immune response-associated genes. We analyzed 3D images of cells expressing wild-type (WT) or mutant Cx26 molecules to demonstrate clearly the intracellular localization of the Cx26 mutants and hemichannel formation. High extracellular Ca2+ conditions led to the closure of gap junction hemichannels in Cx26-G12R- or Cx26-G45E-expressing cells, resulting in prevention of the Cx26 mutant-induced cell death. Fluorescent dye uptake assays revealed that cells with Cx26-D50N had aberrantly high hemichannel activities, which were abolished by the hemichannel blockers carbenoxolone and 18α-glycyrrhetinic acid. These results further support the idea that abnormal hemichannel activities play important roles in the pathogenesis of KID syndrome. Furthermore, we revealed that the expression of IL15, CCL5, IL1A, IL23R and TLR5 is down-regulated in keratinocytes expressing Cx26-D50N, suggesting that the immune deficiency in KID syndrome due to Cx26-D50N might be associated not only with skin barrier defects, but also with the down-regulated expression of immune response-related genes.
Intracellular localization of Cx26-WT and Cx26-D50N mutant proteins.
We produced HeLa cells transiently transfected with pIRES2-AcGFP1 Cx26-WT constructs (pIRES2-AcGFP1 Gjb2 WT-FLAG constructs) and HeLa cells transiently transfected with Cx26-D50N-FLAG constructs (pIRES2-AcGFP1 Gjb2 c.148 G > A-FLAG constructs). We incubated the cells in DMEM + FBS, which contained 1.9 mM Ca 2+ , during transfection. Unlike the cells transfected with Gjb2 c.34 G > C (Cx26-G12R) or c.134 G > A (Cx26-G45E), the HeLa cells with Gjb2 WT (Cx26-WT) or c.148 G > A (Cx26-D50N) were able to proliferate even after transfection under the condition of 1.9 mM Ca 2+ concentration. Immunofluorescent staining with an anti-FLAG antibody (Cx26-FLAG staining) demonstrated that cells expressing Cx26-WT or Cx26-D50N were able to synthesize Cx26 proteins (Fig. 1, Supplementary Videos S1-S4). Further, the Cx26 proteins were localized to the plasma membrane, and gap junction plaques were formed at the cell-to-cell contact zones between adjacent cells (Fig. 1, blue arrows, Supplementary Videos S1-S4). To clearly demonstrate the intracellular location of Cx26-WT and Cx26-D50N proteins, co-labeling with an anti-Cx26-FLAG antibody and an antibody to TGN46, which is a marker for the trans-Golgi network (TGN), or double staining with an anti-Cx26-FLAG antibody and wheat germ agglutinin (WGA) for plasma membrane staining was performed (Fig. 1, Supplementary Videos S1-S4). There was no overlap of Cx26 and TGN46 signals in cells expressing Cx26-WT or -D50N (Fig. 1A,B, blue arrowheads, Supplementary Videos S1 and S2). Furthermore, Cx26-FLAG and WGA co-staining showed that Cx26-FLAG staining overlapped with WGA staining in the cell membrane area without cell-to-cell contact, verifying that Cx26-WT and -D50N were localized to the plasma membrane under the condition of 1.9 mM Ca 2+ concentration (Fig. 1C,D, purple arrowheads, Supplementary Videos S3 and S4). In contrast, the gap junction plaques showed no overlap of Cx26-FLAG with WGA (Fig. 1C,D, blue arrows, Supplementary Videos S3 and S4). WGA labels glycoproteins and glycolipids on the outer surface of the cell membrane 26 . Completely built gap junction plaques consist of a compact assembly of connexin molecules lacking surface glycoproteins and glycolipids; thus, WGA cannot bind to the gap junction plaque. These findings clearly indicate that Cx26-WT and Cx26-D50N expressed in the HeLa cells were localized to the plasma membrane, but not to the Golgi apparatus, and that WGA was unable to access the gap junction plaques consisting of Cx26-WT or Cx26-D50N.
High extracellular Ca 2+ concentration rescues cells producing Cx26-G12R mutants and those producing Cx26-G45E mutants. We tested whether the cell death due to Cx26-G12R mutants or Cx26-G45E mutants could be abolished by an increased extracellular Ca 2+ concentration (3.8 mM). The cell death induced by the Gjb2 mutation Cx26-G12R and that induced by the Gjb2 mutation Cx26-G45E were prevented by the high extracellular Ca 2+ concentration (3.8 mM). Cells transfected with Cx26-G12R or Cx26-G45E mutant DNA constructs showed no overlap of Cx26-FLAG and TGN46 (Fig. 2A,B, blue arrowheads, Supplementary Videos S5 and S6) under the condition of high extracellular Ca 2+ concentration. Under the high extracellular Ca 2+ condition, HeLa cells with Cx26-G12R or Cx26-G45E also formed gap junction plaques without WGA co-staining between adjacent cells (Fig. 2, blue arrows, Supplementary Videos S5-S8). Hemichannels co-stained with WGA were also detected (Fig. 2C,D, purple arrowheads, Supplementary Videos S7 and S8). These results suggest that both Cx26-G12R-expressing cells and Cx26-G45E-expressing cells were rescued by the high extracellular Ca 2+ concentration and that the mutant Cx26 proteins formed gap junction plaques and hemichannels in the rescued cells, similar to those in the cells transfected with Cx26-WT or Cx26-D50N. Consequently, cells carrying Cx26-G12R or Cx26-G45E were judged to be insufficiently healthy for further mutant protein localization studies and for gene expression profiling under the condition of physiological Ca 2+ concentration.
Altered activities of Cx26-D50N hemichannels. We determined the effects of Cx26-D50N in mammalian cell lines with a fluorescent dye uptake assay using neurobiotin (NB) (Fig. 3). Under Ca 2+ -free conditions, the mean fluorescent intensity of NB in cells with Cx26-WT or Cx26-D50N hemichannels was three times as high as that in negative control cells (p < 0.01). Under the condition of physiological Ca 2+ concentration (1.2 mM), NB uptake was almost the same in cells with Cx26-WT as in the negative control cells, but NB uptake by Cx26-D50N-expressing cells was about 1.7 times as large as those by cells with Cx26-WT and negative control cells (p < 0.01) (Fig. 3A). NB dye uptake by the cells with Cx26-WT or Cx26-D50N under the condition of 1.2 mM Ca 2+ concentration was less than that under the Ca 2+ -free conditions (p < 0.01). These results suggest that both normal hemichannels of Cx26-WT and aberrant hemichannels of Cx26-D50N had similar activities under the Ca 2+ -free conditions, but that the aberrant hemichannel activities were aggravated under the condition of physiological Ca 2+ concentration.
To further verify the formation of abnormal hemichannels in the plasma membrane, dye uptake studies were performed with the presence of hemichannel blockers, CBX and AGA (Fig. 3B,C). Under the Ca 2+ -free condition, the treatment of cells with 100 μM CBX or AGA for 20 min reduced the levels of dye uptake in both cells producing Cx26-WT and Cx26-D50N compared to their counterparts without CBX or AGA treatment, respectively (p < 0.01) (Fig. 3B). Under the condition of physiological Ca 2+ concentration (1.2 mM), treatments of cells with 100 μM CBX or AGA for 20 min resulted in a 40% reduction in the levels of dye uptake (mean fluorescent intensities) in Cx26-D50N-expressing cells compared to their counterparts without CBX or AGA treatment, respectively (p < 0.01) (Fig. 3C). In contrast, cells producing Cx26-WT with or without hemichannel blocker treatment showed almost the same levels of dye uptake (mean fluorescent intensities). These findings suggest that the increase in the uptake of NB into cells was mediated by aberrant hemichannels consisting of Cx26-D50N.
Cx26-D50N mutant down-regulates expression of genes involved in immune responses by keratinocytes. To identify down-regulated genes in the HaCaT cells (human skin keratinocytes) producing
Cx26-D50N, we analyzed data from gene expression profiling using the Clariom S array. Using a minimum fold change of 2.5, we selected 69 down-regulated genes. We evaluated these genes by using the functional annotation chart of DAVID Bioinformatics Resources 6.8 (https://david.ncifcrf.gov/). We initially chose 6 terms involved in immunological processes: "immunity", "positive regulation of T cell proliferation", "inflammatory response", "innate immune response", "TNF signaling pathway" and "defense response to bacterium". Under these terms, we found 16 genes associated with immune function (IL23R, FCN1, CLEC4E, IFNL1, IFNL3, BIRC3, IL1A, IL15, USP41, CD180, TAPBPL, HMHB1, CCL5, TLR5, FCAMR and USP41) out of the 69 differentially expressed genes (16/69 genes, 23%). Next, we evaluated these genes by using the functional annotation clustering of DAVID Bioinformatics Resources 6.8. We adopted an enrichment score of more than 1.98 and an adjusted p-value of less than 0.05 to select these down-regulated genes, and we chose 9/16 genes involved in immunological processes. Among these genes, we selected a total of 5 genes (IL15, CCL5, IL1A, IL23R, and TLR5) that have been reported in numerous papers in the past. We confirmed the down-regulation of mRNA expression of those genes by real-time PCR (Fig. 4) and concluded that Cx26-D50N expression resulted in the down-regulated mRNA expression of IL15, CCL5, IL1A, IL23R and TLR5.
Discussion
Elucidation of the effects of KID syndrome-causative Cx26 mutations might provide clues on how Cx26 functions in normal epidermal homeostasis and on the pathomechanisms of the disease phenotypes in KID syndrome patients. In the present study, we examined how three Gjb2 mutants (Cx26-G12R, -G45E and -D50N) causative of KID syndrome affected the formation and function of hemichannels and gap junctions in transfected HeLa cells. In addition, we analyzed the cells by 3D imaging. In 3D images, we are able to evaluate the sites of molecular localization more accurately, whereas in 2D images the colocalization sites of target molecules can be misinterpreted.
GJB2 mutations causative of KID syndrome have been shown to induce elevated hemichannel activities 22,27,28 . In the present study, the Gjb2 mutations Cx26-G12R and Cx26-G45E led to increased hemichannel activity and induced cell death. The Gjb2 mutation Cx26-D50N induced high hemichannel activity, but showed no cellular lethality. The cell death induced by the Gjb2 mutation Cx26-G12R and that induced by the Gjb2 mutation Cx26-G45E were rescued by the addition of extracellular Ca 2+ . It was previously shown that cells with GJB2 mutations are rescued from cell death by the introduction of Ca 2+ to the extracellular media during incubation, although cells carrying different GJB2 mutations showed different reactions to high extracellular Ca 2+ levels 28 . It is known that elevated extracellular Ca 2+ drives the hemichannels into their closed state 24 . The high extracellular Ca 2+ condition prohibited cell death induced by the GJB2 mutant Cx26-G12R and that induced by the GJB2 mutant Cx26-G45E 25,27,28 .
Regarding the hemichannel current, the cells carrying the GJB2 mutation Cx26-G12R responded more quickly and completely to the high-Ca 2+ switch than did the cells with the GJB2 mutation Cx26-D50N 28 . Elevation of the external Ca 2+ concentration reduced the amplitude of the hemichannel currents by shifting the voltage activation curve of the channels to more positive potentials 29 . This indicates that different activation voltages resulting from the mutations might cause the higher hemichannel activity leading to cellular lethality. The Cx26 mutant Cx26-D50N was previously reported to induce elevated membrane currents in Xenopus oocytes as measured electrophysiologically 30 . Cells expressing Cx26-D50N showed abnormal hemichannel activity and increased cell death in the absence of elevated extracellular Ca 2+ 28 . Recently, several studies have used highly sophisticated methodologies to investigate the calcium gating of Cx26-D50N hemichannels. Lopez et al. [30][31][32] reported that extracellular Ca 2+ destabilizes the open state of human Cx26 hemichannels by disrupting a salt bridge interaction between residues D50 and K61 located close to the extracellular entrance of the hemichannel pore. This open-state destabilization is thought to facilitate hemichannel closure 30,31 . A highly conserved electrostatic network located at the extracellular entrance of the pore is involved in the gating of hemichannels of all connexin types. Extracellular Ca 2+ disrupts the open-channel form of this network, resulting in a closed conformation 32 . Another study, by Helmuth et al. 33 , revealed that Q48 and D50 tightly interact and that disruption of this interaction shifts hemichannel gating along the voltage axis. Further shifts in gating activation by extracellular Ca 2+ persist even in the absence of the Q48-D50 interaction, but the shifts require an Asp (D) or Glu (E) at position 50 33 . This fact suggests an independent electrostatic mechanism of gate activation that significantly involves position 50. The differences of Cx26-D50N from Cx26-G12R and Cx26-G45E in their in vivo and in vitro effects may be due to the unique characteristics of D50. In the present study, hemichannel activity in Cx26-D50N was less responsive to changes in the extracellular Ca 2+ concentration. Summarizing the hemichannel-activity results of the present study, we clearly demonstrated the formation of aberrant hemichannels by the KID syndrome-causative Gjb2 mutations Cx26-G12R, -G45E and -D50N. The present results further support the idea that abnormal hemichannel activities play an important role in the pathogenesis of KID syndrome.
Hearing impairment-associated Cx26 mutants are roughly classified into two categories. Cx26 mutants in the first category are still localized to the cell membrane and form non-functional gap junctions. Cx26 mutants in the second category accumulate in the cytoplasm due to abnormal trafficking 25 . The Cx26 mutants in this category localize within subcellular structures, including the ER, the ER-Golgi intermediate compartment and the Golgi apparatus 34-37 . In the present study, it was difficult to clarify the relationship between the Cx26 mutants and the trans-Golgi network. However, in the 3D images, none of the Cx26 mutants showed localization overlapping with TGN46 protein signals under the condition of high extracellular Ca 2+ levels. This indicates that the Gjb2 mutants Cx26-G12R, -G45E and -D50N can be classified into the first category. Most of the mutations associated with KID syndrome have been shown to cause aberrant hemichannel activities, leading to altered regulation of molecular exchange through the plasma membrane 22,25,27,28 . As is true for many characterized cell lines from KID syndrome patients 16,25 , in the present study, cells with Cx26 constructs carrying the Cx26-D50N mutation showed increased uptake of the NB fluorescent dye, and the increased uptake was abolished by the hemichannel blockers CBX and AGA. This suggests that aberrant hemichannel activities are involved in the pathomechanisms of the KID syndrome phenotypes due to the Cx26-D50N mutation.
Patients with KID syndrome are at high risk of neoplastic complications and cutaneous infections such as those caused by Candida albicans and Trichophyton rubrum 6 . Although the genetic defect of KID syndrome has been identified, the mechanisms behind the high incidence of neoplastic complications and fungal infections in KID syndrome are poorly understood. The increased risk of malignant tumors and chronic infections in KID syndrome is likely attributable to impaired epithelial barrier function or defective immune function 38 . Regarding immune function in KID syndrome patients, some studies have reported it to be normal 39,40 , but others have shown immune dysregulation 41 or have suggested primary systemic immunodeficiency 3,42 . In the present study, we determined that human keratinocytes expressing the Cx26-D50N mutant show down-regulated expression of IL15, CCL5, IL1A, IL23R and TLR5, compared with keratinocytes expressing Cx26-WT. Interleukin-15 (IL-15) exhibits biological activities similar to those of interleukin-2 and enhances the proliferation of CD8 + cytotoxic T cells and natural killer cells, which in turn eliminate tumor cells [43][44][45][46][47][48][49] . Chemokine C-C motif ligand 5 (CCL5) plays an important role in recruiting various leukocytes, including T cells, macrophages, eosinophils and basophils, to inflammatory sites. In collaboration with certain cytokines released from T cells, CCL5 also activates natural killer cells to become C-C chemokine-activated killer cells and induces their proliferation 50 . Interleukin-1α (IL-1α) is produced by cells of various types, including activated macrophages, keratinocytes, stimulated B lymphocytes and fibroblasts, and it is a potent mediator of inflammation and immune reactions 51 . In humans, high levels of interleukin-23 receptor (IL-23R) expression are detected on activated/memory T cells, NK cells, macrophages, dendritic cells and monocytes. IL-23 and interleukin-12 (IL-12) share a common subunit in their receptor complexes. The IL-23 receptor complex is a heterodimer made up of 2 subunits, IL-23R and IL-12Rβ1 52 . Stimulation of the receptor complex activates janus activating kinase 2 (JAK2) and tyrosine kinase 2 (TYK2), resulting in phosphorylation of the receptor complex and the formation of docking sites for signal transducers and activators of transcription (STATs) 1, 3, 4 and 5. The STATs are subsequently dimerized, phosphorylated and translocated into the nucleus, activating target genes 53 . Importantly, the phosphorylation of STAT4 is essential for increasing interferon (IFN) γ production and the subsequent differentiation of Th1 cells, whereas STAT3 is important for the development of interleukin-17-producing helper T cells (Th17 cells) 54,55 . Overall, this process orchestrates the cytokine cascade, activating the immune cells necessary for the eradication of any pathogenic/antigenic challenge.
Toll-like receptors (TLRs) provide efficient and immediate immune responses to bacterial, fungal and viral infections by recognizing diverse molecules released from these pathogens 56 . One study reported that an Asian KID syndrome patient with fungal infection expressed a lower level of TLR2 mRNA 57 . The present results clearly indicate that keratinocytes expressing the Cx26-D50N mutant show down-regulated expression of important molecules involved in epidermal immune responses, including IL-15, CCL5, IL-1α, IL-23 receptor and TLR5. We speculate that this down-regulation might cause immunodeficiency in the epidermis in KID syndrome due to the Cx26-D50N mutant, resulting in the malignant tumors and chronic infections seen in the patients.
Observations of 2-dimensional plane images are insufficient to clarify the intracellular localization of Cx26 proteins and the formation sites of hemichannels. In the present study, we analyzed 3D images of cells expressing WT and mutant Cx26 molecules and were able to demonstrate clearly the intracellular localization of Cx26 proteins and the hemichannel formation sites. In conclusion, we clearly demonstrated that aberrant hemichannels form due to the KID syndrome-causative Gjb2 mutations Cx26-G12R, -G45E and -D50N. Our findings further support the idea that abnormal hemichannel activities are involved in the pathogenesis of KID syndrome. In addition, we revealed that the expressions of IL15, CCL5, IL1A, IL23R and TLR5 are down-regulated in keratinocytes expressing the Cx26-D50N mutation. These findings suggest that the immune deficiency of KID syndrome caused by the Cx26-D50N mutation owes not only to skin barrier dysfunction, but also to the down-regulated expression of immune response-related genes in the keratinocytes.

Secondary antibodies (TRITC-conjugated swine polyclonal anti-rabbit immunoglobulins antibody (1:1000) (Dako Cytomation, Glostrup, Denmark)) were applied for 45 min at 37 °C in the dark. After two more washings with PBS, coverslips were mounted with VECTASHIELD mounting medium (Vector Laboratories, California, USA) on glass slides. For co-labeling for rhodamine-labeled WGA and Cx26-FLAG, after blocking with 10% BSA, rhodamine-labeled WGA (5 mg/ml, Vector Laboratories, California, USA) diluted 1:500 was applied to cells for 30 min at 37 °C. After two washings with PBS, the cells were incubated with primary antibodies (M2 monoclonal mouse antibody against FLAG (1:1000)) overnight at RT. After two more washings with PBS, secondary antibodies (goat polyclonal anti-mouse IgG antibody, Alexa Fluor 350 (1:1000) (Invitrogen, Massachusetts, USA)) were applied for 45 min at 37 °C in the dark. After two more washings with PBS, coverslips were mounted with VECTASHIELD mounting medium on glass slides.
Dye uptake assays. HeLa cells (2.0 × 10 4 ) were grown over glass coverslips in 8-well glass slides for dye uptake assays with a neurobiotin tracer (NB, FW 322.8, Vector Laboratories). At 48 h post-transfection (Ca 2+ concentration, 1.9 mM), the cells were washed with PBS and incubated with Ca 2+ -free medium or a medium with a physiological calcium concentration (1.2 mM Ca 2+ ) for 20 min at 37 °C. The cells were then incubated with 0.5 mg/ml NB in the same Ca 2+ concentration medium for 20 min at 37 °C 22 . Next, the cells were washed with DMEM containing 4.0 mM CaCl 2 twice for 10 min and with PBS once. The cells were fixed with 4% PFA for 15 min at room temperature. After being washed twice with PBS, the fixed cells were permeabilized with 0.25% Triton-X 100 for 30 min, washed twice with PBS, and blocked with 10% BSA for 30 min at 37 °C. Following the two washings with PBS, the cells were incubated with primary antibodies (mouse monoclonal anti-FLAG M2 antibody (1:1000)) overnight at RT. After two more washings with PBS, TRITC-conjugated rabbit polyclonal anti-mouse immunoglobulins antibody (1:1000) and Alexa Fluor 350-conjugated streptavidin (1:200) (Invitrogen, Massachusetts, USA) were applied for 45 min at 37 °C in the dark. After two more washings with PBS, images were acquired using the A1R confocal laser scanning microscope system with fixed exposure times. For treatments with carbenoxolone (CBX, Sigma-Aldrich, St Louis, USA) or 18α-glycyrrhetinic acid (AGA, Sigma-Aldrich), the cells were initially incubated with calcium-free medium containing 100 μM CBX or AGA for 20 min, and then NB in the same medium was applied to the cells as described above.
Image analysis for signal intensity determination was performed with ImageJ software (NIH, Bethesda, USA). We selected two high-power view fields under the microscope. During image analysis, after the background in merged images of the red and green channels was subtracted, the same parameters were applied to threshold the images for the measurement of blue fluorescence signal intensities only in the GFP-positive cells (ref. 16). Each experiment was performed at least twice.
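The quantification described above can also be prototyped outside ImageJ, since the logic reduces to background subtraction followed by masked averaging. The sketch below is a minimal re-expression of that logic in Python/NumPy, assuming the colour channels have already been exported as 2-D arrays; the GFP threshold value is illustrative, and this is not the authors' actual macro.

```python
import numpy as np

def mean_blue_in_gfp_cells(blue, green, background, gfp_threshold=50):
    """Background-subtract the blue (NB) channel, then average it over GFP+ pixels."""
    corrected = np.clip(blue.astype(float) - background, 0.0, None)  # background subtraction
    gfp_mask = green > gfp_threshold  # same fixed threshold applied to every image
    return corrected[gfp_mask].mean() if gfp_mask.any() else 0.0
```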
Transfection experiments targeting HaCaT cells for microarray analysis.
HaCaT cells were transfected using the Amaxa Cell Line Nucleofector Kit V (Lonza, Cologne, Germany) according to the manufacturer's instructions. The cells were seeded on the plate, and the medium was changed 3 h after seeding. After 8 h of incubation, the cells were subjected to microarray analysis and quantitative real-time reverse transcription (qRT)-PCR assays.
Microarray analysis. We isolated total RNA from the transfected HaCaT cells using the RNeasy Mini Kit (Qiagen, Hilden, Germany). Whole-genome expression profiling was performed using the Clariom S Array (Affymetrix, Santa Clara, California, USA). Differentially expressed genes were defined as those showing a 2.5-fold or greater change in expression between the cells transfected with Gjb2 WT constructs (Cx26-WT) and those transfected with the c.134G > A constructs (Cx26-D50N) using the Affymetrix Transcriptome Analysis Console Ver 3.1.0.5 (Affymetrix). The list of down-regulated genes was analyzed using gene-set enrichment analyses from DAVID Bioinformatics Resources 6.8 (https://david.ncifcrf.gov/) in order to identify the functions of these genes. We used an enrichment score of more than 1.9 and an adjusted p-value of less than 0.05 to select these down-regulated genes.
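The 2.5-fold selection step can be illustrated with a short script. In the sketch below the column names are assumptions for exposition, not the actual Transcriptome Analysis Console output fields.

```python
import pandas as pd

expr = pd.read_csv("clariom_s_expression.csv")   # hypothetical per-gene export
fold = expr["cx26_d50n"] / expr["cx26_wt"]       # assumed column names
deg = expr[(fold >= 2.5) | (fold <= 1 / 2.5)]    # 2.5-fold change in either direction
down_regulated = expr[fold <= 1 / 2.5]           # candidates for DAVID enrichment analysis
```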
qRT-PCR. We isolated total RNA from the transfected HaCaT cells using the RNeasy Mini Kit (Qiagen). We reverse-transcribed 250 ng of total RNA using the PrimeScript RT Reagent Kit (Takara, Shiga, Japan) according to the manufacturer's instructions. The recovered cDNA was diluted 10-fold with distilled water (DW) for qRT-PCR. mRNA expression levels were measured by qRT-PCR using the LightCycler System (Roche, Basel, Switzerland). The PCRs were set up in microcapillary tubes filled with 10 μL of reaction agents, including 2.5 μL of the diluted cDNA solution, and the PCR program was set according to the manufacturer's instructions. The primers and probes used for qRT-PCR are listed in Supplementary Table 1. Each experiment was performed at least three times.
Statistical analyses.
All results are expressed as mean (± standard deviation). In the dye uptake assay, we selected 2 high-power view fields under the microscope without any random sampling method and each experiment was performed at least twice. In the qRT-PCR analysis, each experiment was performed at least three times. Analysis of dye uptake samples and qRT-PCR samples was done by Student's t-test. Statistical significance was shown as *p < 0.01, † p < 0.05. | 2019-09-17T02:45:26.532Z | 2019-09-01T00:00:00.000 | {
"year": 2018,
"sha1": "ffb154a22bc80d1cad1dc9b5c6d8c8a0864e5784",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-30757-3.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "006e4299428e2cb844334f4edfe06cf132c58c55",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
28947424 | pes2o/s2orc | v3-fos-license | A double stapled technique for oesophago-enteric anastomosis
AIM: Leakage from oesophageal anastomoses is associated with substantial morbidity and mortality. This study presents a novel, safe and effective double stapled technique for oesophago-enteric anastomosis. METHODS: The data were obtained prospectively from a hospital-held clinical database. Thirty-nine patients (26 males, 13 females) underwent upper-gastrointestinal resection between 1996 and 2000 for carcinoma (n = 36), gastric lymphoma (n = 1), and benign pathology (n = 2). A double stapled oesophago-enteric anastomosis was performed in all cases. RESULTS: No anastomotic leak was reported. In cases of malignancy, the resected margins were free of neoplasm. Three deaths occurred, none of which was related to anastomotic complications. CONCLUSION: Even though the reported study is an uncontrolled one, the technique described is reliable and effective for oesophago-enteric anastomosis.
INTRODUCTION
Leakage is a major problem associated with oesophago-enteric anastomosis. The anastomosis may be hand sewn or stapled [1][2][3][4]. Even though there is no proven advantage of either technique, the basic principles of anastomotic surgery (a tension-free join, good vascularity, and good mucosal apposition) apply to both. A technique of double stapled anastomosis avoids stretch at the oesophageal side of the anastomosis and circumvents damage to the vascularity.
MATERIALS AND METHODS
The double stapled oesophago-enteric reconstruction was performed as follows.
1. The oesophagus was mobilized in the standard way to above the proposed level of division.
2. A transverse incision was made in the anterior wall of the oesophagus, at least 3 cm above the proximal extent of the tumour in those cases undergoing surgery for malignancy (Figure 1).
3. An appropriately sized circular stapler (CA, Ethicon) was selected based on the oesophageal diameter.
4. A 2-0 polypropylene suture was placed through the 'eye-hole' situated in the plastic spike that fits the head of the circular stapler.
5. The polypropylene suture should be tied such that around 5 cm of suture, with the attached needle, remained attached to the spike on the circular stapler head.
6. The transverse anterior oesophagotomy should be of sufficient size to allow insertion of the stapler head with attached spike (see step 7) (Figure 2).
7. The circular stapler head with spike and attached needle were placed through the oesophagotomy into the proximal oesophagus (Figure 3).
8. The suture needle was brought out through the anterior oesophageal wall 2 cm proximal to the oesophagotomy.
9. The oesophagus was then cross-stapled and divided transversely below the site of needle puncture but above the oesophagotomy.
10. The suture attached to the spike was used to pull the spike and axis of the circular stapler head through the anterior wall of the oesophagus, around 2 cm proximal to the transverse staple line (Figure 4).
11. The spike was then removed from the stapler head.
12. Resection of the gastric or oesophago-gastric specimen was then completed.
13. The distal conduit (either distal stomach or jejunal limb) was prepared and mobilized to allow a tension-free anastomosis.
14. The body of the circular stapler was introduced into the lumen of the efferent conduit through an appropriately placed enterotomy (Figure 5).
15. The circular stapler head was engaged with the body of the circular stapler gun.
16. The gun was closed and fired, creating a double stapled oesophago-enteric anastomosis (Figure 5).
17. A naso-gastric tube was fed across the oesophago-enterostomy after completion of the anastomosis.
RESULTS
All patients were operated on by, or under the direct supervision of, a consultant surgeon with an upper-gastrointestinal interest. Patients with malignancy underwent pre-operative staging with a thoraco-abdominal computed tomography scan. Laparoscopy was used in selected instances. In elective cases, mechanical bowel preparation was employed. Thoracic epidural anaesthesia was employed for post-operative pain relief. Patients were kept nil by mouth for 5 d post-operatively. If the post-operative course was uneventful, fluids were introduced on d 5. Water-soluble contrast studies were not routinely used to assess anastomotic integrity unless there was a clinical indication. Data were obtained prospectively from the hospital-held clinical database.
Morbidity and mortality rates are shown below (Tables 1-3). Risk-adjusted morbidity and mortality rates were calculated using the physiological and operative severity score for the enumeration of mortality and morbidity (POSSUM) and Portsmouth POSSUM (P-POSSUM) models [5,6]. Three deaths occurred, two after oesophago-gastrectomy and one after total gastrectomy. The deaths after oesophago-gastrectomy were shown at post-mortem to be due to left ventricular failure and myocardial infarction, respectively. The patient who died after total gastrectomy deteriorated suddenly on the 10th post-operative day, having tolerated a diet for three days. Symptoms, blood gases and electrocardiogram findings were compatible with a pulmonary embolus. Permission for a post-mortem was refused. No anastomotic leaks were detected on clinical grounds.
DISCUSSION
Anastomotic leakage following oesophago-enteric reconstruction may result in significant morbidity and mortality. The standard principles governing any gastro-intestinal anastomosis apply in dealing with the oesophagus: the anastomosis should be tension-free and well vascularized, and there should be accurate mucosal apposition. Oesophageal anastomoses may be hand sewn (in one, two or even three layers) or stapled [1][2][3][4]. There is no proven advantage to either the hand sewn or the stapled technique. Stapled anastomoses might be associated with a higher rate of subsequent benign stricture formation [7]. The stapled technique that is commonly used requires a purse string suture in the cut end of the proximal oesophagus to retain the head of the circular stapler. The anastomosis is completed by a single firing of the circular stapler. It is the authors' contention that the oesophageal purse string may be a contributory factor in subsequent anastomosis-related complications (leak and stricture). This is based on the observation that the distal oesophagus is stretched over the head of the circular stapler, and is therefore possibly devascularized by the purse string suture. To avoid the use of a purse string, a double stapled oesophago-enteric anastomotic technique has been devised.
This paper has described a novel technique of oesophago-enteric anastomosis using a double stapled method. Whilst this was an uncontrolled study, the technique has proved reliable and to date has not resulted in any anastomotic leaks. Whilst this study cannot implicate the use of a proximal oesophageal purse string suture as a factor in anastomotic leakage, it is interesting to speculate whether the double stapled anastomosis allows improved anastomotic healing through omission of a purse string suture. The technique described merits further evaluation, either in the setting of a larger uncontrolled study or, preferably, in the context of a randomised trial. | 2018-04-03T01:17:38.635Z | 2004-11-15T00:00:00.000 | {
"year": 2004,
"sha1": "96bf1d61cd098b010619a2dd3e0f0c0bf5f9896e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v10.i22.3339",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "427c3607e415440b378fabd3162605bd5998ddb2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
85463716 | pes2o/s2orc | v3-fos-license | Progressive Interval Type-I Censored Life Test Plan for Rayleigh Distribution
In this paper, we have considered the problem of optimal inspection times for the progressive interval type-I censoring scheme where uncertainty in the process is governed by the two-parameter Rayleigh distribution. Here, we also introduce some optimality criteria and determine the optimum inspection times accordingly. The effects of the number of inspections and of the choice of optimally spaced inspection times, based on the asymptotic relative efficiencies of the maximum likelihood estimates of the parameters, are also investigated. Further, we discuss the optimal progressive type-I interval censoring plan when the inspection times and the expected proportions of total failures in the experiment are under control.
Introduction
The Rayleigh distribution is recognized to be a very useful distribution in lifetime analysis and operations research for its mathematical simplicity and statistical flexibility. It has numerous applications in diverse areas such as health, agriculture, biology, engineering, and other sciences. Rayleigh (1880) and Siddiqui (1962) introduced this model and discussed its various captivating properties. The inferential problems regarding the considered model have been discussed by Sinha and Howlader (1983), Lalitha and Mishra (1996) and Abd-Elfattah, Hassan, and Ziedan (2006). The probability density function of the Rayleigh distribution with parameters γ and σ is given by

f(x; γ, σ) = ((x − γ)/σ²) exp(−(x − γ)²/(2σ²)), x > γ, σ > 0, (1)

and the distribution function is given by

F(x; γ, σ) = 1 − exp(−(x − γ)²/(2σ²)), x > γ. (2)

Mousa and Al-Sagheer (2005) have discussed Bayesian prediction, whereas Wu, Chen, and Chen (2006) have performed Bayesian inference. Mousa and Al-Sagheer (2006) have conducted statistical inference for progressive type-II censored data from the Rayleigh distribution. Seo and Kang (2007) obtained the approximate MLEs based on progressive type-II censored data, and Kim and Han (2009) estimated the scale parameter under general progressive censoring. Recently, Dey and Dey (2014) estimated the parameters of the Rayleigh distribution under progressive Type-II censoring with binomial removals, and Abdel-Hamid and Al-Hussaini (2014) provided a Bayesian prediction analysis for Type-II progressive-censored data from the Rayleigh distribution under the progressive-stress model.
In life-testing experiments, it is often more economical and practical to record observations as progressive interval type-I (PITI) censored data than to record their actual measurements, because exact observations may not be possible (e.g., in medical experiments) or may be very costly (e.g., in engineering experiments involving precious items). PITI censoring is a combination of interval type-I censoring and progressive censoring, proposed by Aggarwala (2001), and it has wide applications in clinical trials. For more details about PITI censoring, readers may refer to Kaushik, Singh, and Singh (2015), Kaushik, Pandey, Maurya, Singh, and Singh (2017), Ng and Wang (2009), Chen and Lio (2010), Lio, Chen, and Tsai (2011), etc. In PITI censored situations, a natural problem that arises is to determine the associated inspection times appropriately before conducting the experiment, so as to assess the parameter(s) of interest with the least possible reduction in efficiency compared to the exactly observed situation. In this context, Lin, Chou, and Balakrishnan (2013) developed an optimum inspection plan for the log-normal distribution. For this purpose, they proposed the use of maximization of the determinant of the Fisher information matrix or minimization of the determinant of the variance-covariance matrix. A discussion on optimal grouping or monitoring times can be found in the work of Kulldorf (1961), based on the criterion of minimizing the asymptotic variance or maximizing the determinant of the expected Fisher information matrix of the maximum likelihood estimates (MLEs) of the parameters under the interval type-I censoring scheme. Further, for related work on optimal inspection times for censored lifetime data, one may refer to Lin, Wu, and Balakrishnan (2009) and Aggarwal (1984). Our goal here is to determine the optimally spaced (OS) inspection times for the PITI censoring scheme in respect of the two-parameter Rayleigh distribution, by proposing some additional optimality criteria.
This paper is organized into five sections. In Section 2, we describe the Fisher information matrix and the variance-covariance matrix for a PITI censored sample from the Rayleigh distribution. Section 3 is devoted to the criteria for choosing the OS and the optimal equally spaced (OES) inspection times. In Section 4, we perform a numerical study and provide a discussion based on the results obtained thereof. The effects of the number of inspections and of the choice of inspection times, based on the asymptotic relative efficiencies (AREs) and the relative entropy under the OS inspection scheme, are assessed. In the same way, the OES and EP inspection schemes are compared with the OS inspection scheme. Further, the optimal PITI censoring plan when the inspection times and the expected proportions of total failures in the experiment are pre-fixed is also discussed. Finally, concluding remarks are given in Section 5.
Expected Fisher information matrix
Let us consider PITI censored data D = (d_1, d_2, ..., d_m) and R = (r_1, r_2, ..., r_m) from the Rayleigh distribution. It is necessary to mention here that the values R = (r_1, r_2, ..., r_m) may be pre-specified through the proportions p_1, p_2, ..., p_m (with p_m = 1) of the remaining live units; consequently, the numbers of units remaining at times t_1, t_2, ..., t_m are random variables.
Hence, the log-likelihood function for the considered distribution under PITI censoring can be written (up to an additive constant) as

l(γ, σ) = Σ_{i=1}^{m} { d_i log[F(t_i) − F(t_{i−1})] + r_i log[1 − F(t_i)] }, (3)

where F(·) is given in Eq. (2) and F(t_0) = 0. To obtain the maximum likelihood estimates of the parameters σ and γ, one is required to maximize Eq. (3) simultaneously with respect to σ and γ. It is noticed here that the simultaneous solution of the likelihood equations is not achievable in explicit form. Therefore, one can use a suitable numerical method to obtain the maximum likelihood estimates of the parameters. Further, the expected Fisher information matrix of the likelihood is obtained as

I(γ, σ) = −E( ∂²l/∂γ²  ∂²l/∂γ∂σ ; ∂²l/∂σ∂γ  ∂²l/∂σ² ). (4)

Therefore, the expected asymptotic variance-covariance matrix of the MLEs is

V = I(γ, σ)^(−1) = ( V_11  V_12 ; V_21  V_22 ). (5)
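The numerical maximization of Eq. (3) is straightforward to prototype. The sketch below, an illustration with made-up data rather than the authors' code, fits the two-parameter Rayleigh model to PITI censored counts with a derivative-free optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def rayleigh_cdf(t, gamma, sigma):
    z = np.clip((t - gamma) / sigma, 0.0, None)
    return 1.0 - np.exp(-0.5 * z ** 2)                       # Eq. (2)

def neg_loglik(theta, t, d, r):
    gamma, sigma = theta
    if sigma <= 0 or gamma >= t[0]:
        return np.inf                                        # keep the parameters admissible
    F = rayleigh_cdf(t, gamma, sigma)
    F_prev = np.concatenate(([0.0], F[:-1]))
    return -(np.sum(d * np.log(F - F_prev)) + np.sum(r * np.log(1.0 - F)))  # minus Eq. (3)

t = np.array([1.0, 2.0, 3.0, 4.0])   # inspection times (illustrative)
d = np.array([12, 18, 9, 5])         # failures observed in each interval
r = np.array([0, 0, 0, 6])           # progressive removals, all at the final inspection
fit = minimize(neg_loglik, x0=[0.5, 1.5], args=(t, d, r), method="Nelder-Mead")
gamma_hat, sigma_hat = fit.x
```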
Optimal inspection plan
We observe here that, in general, the removals in the PITI censoring scheme are not under the control of the experimenter. Thus, we are left with optimization of the inspection plan only. That is, t_1, t_2, ..., t_m are to be chosen in accordance with an optimality criterion. Some optimality criteria used in this context are given in the following subsections:
Optimality criterion based on the Fisher information matrix
This criterion was suggested by Lin et al. (2009) for obtaining the optimal choices of inspection times. According to them, the inspection times t_1, t_2, ..., t_m are to be chosen so as to maximize the determinant of the Fisher information matrix given in Eq. (4), i.e.,

max_{t_1 < t_2 < ··· < t_m} det I(γ, σ).
Optimality criterion based on generalized asymptotic variance (GAV)
The determinant of the inverse of the Fisher information matrix is called the generalized asymptotic variance (see Bai, Kim, and Chun 1993). Ismail (2015) suggested the use of the GAV as the criterion for planning the inspection times. Therefore, the inspection times t_1, t_2, ..., t_m are to be chosen such that the GAV is minimized, i.e.,

min_{t_1 < t_2 < ··· < t_m} det I(γ, σ)^(−1).
Proposed optimality criterion based on Shannon entropy
Shannon entropy measures the amount of information contained in the observed likelihood. Consequently, we propose to use it for the optimal choice of the inspection times t_1, t_2, ..., t_m. Thus, the resulting criterion is to choose the inspection times t_1, t_2, ..., t_m that maximize the Shannon entropy, i.e., the values of t_1, t_2, ..., t_m are to be determined by

max_{t_1 < t_2 < ··· < t_m} H, (11)

where H is the Shannon entropy for the PITI censored data, given in Eq. (12).
Optimality criterion based on the variance of the estimate of some specific population characteristic
In many realistic circumstances, one may often be interested in some specific characteristic of the population that is a function of the parameters. Then one would be interested in minimizing the variance of the estimate of the characteristic of interest rather than minimizing the variance-covariance matrix of the estimates of the parameters. For example, one may be interested in a precise estimate of the population mean rather than the individual estimates of the parameters. The population mean μ of the Rayleigh distribution is

μ = γ + σ √(π/2). (13)

Let μ̂ be the MLE of the mean lifetime. Then, our proposed criterion is to choose t_1, t_2, ..., t_m so as to minimize var(μ̂), i.e.,

min_{t_1 < t_2 < ··· < t_m} var(μ̂). (14)

Following Kamakura and Yanagimoto (1989) and using the delta method, the asymptotic variance of μ̂ is

var(μ̂) ≈ V_11 + 2 √(π/2) V_12 + (π/2) V_22, (15)

where V_11, V_12 and V_22 are the elements of the matrix in Eq. (5). The above-stated criteria can be used for the choice of the inspection plan when designing the experiment. The times t_1 < t_2 < ··· < t_m obtained by using any of the aforesaid criteria will be called the optimally spaced (OS) inspection plan.
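Once the matrix V of Eq. (5) is available, the delta-method variance in Eq. (15) is a one-line computation, as the sketch below shows; it is an illustration, not part of the original calculations.

```python
import numpy as np

def var_mean_estimate(V):
    """Delta-method Var(mu_hat) for mu = gamma + sigma*sqrt(pi/2), with V from Eq. (5)."""
    grad = np.array([1.0, np.sqrt(np.pi / 2)])  # (d mu / d gamma, d mu / d sigma)
    return grad @ V @ grad                      # g' V g, cf. Eq. (15)
```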
However, if one is interested in keeping the inspection times equally spaced, i.e., t_i = i·t, i = 1, 2, ..., m, then t is to be chosen such that it maximizes the determinant of the Fisher information matrix, i.e.,

max_{t > 0} det I(γ, σ). (16)

Such an inspection plan may be called the optimal equally spaced (OES) inspection plan. For the optimization, we propose to use the simulated annealing algorithm (see Corana, Marchesi, Martini, and Ridella (1987)). Further, we fix the termination point t_m at the 95% quantile of the considered distribution and then calculate the inspection times in such a way that every interval has an equal probability of containing a failure; this is called the equal-probability (EP) spaced inspection plan.
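As an illustration of this optimization step, the sketch below uses SciPy's dual_annealing as a stand-in for the Corana et al. routine. The helper computes the per-unit expected information of Eq. (4) numerically for the standardized case (γ = 0, σ = 1) with p_1 = ··· = p_{m−1} = 0, where the data are multinomial over the m failure cells plus one survivor cell; it is a sketch under those assumptions, not the authors' program.

```python
import numpy as np
from scipy.optimize import dual_annealing

def fisher_information(tau, n=1.0):
    """Expected information at gamma=0, sigma=1 when all removals occur at t_m."""
    F = 1.0 - np.exp(-0.5 * tau ** 2)                 # standardized Rayleigh cdf
    f = tau * np.exp(-0.5 * tau ** 2)                 # standardized Rayleigh pdf
    edges = np.concatenate(([0.0], F, [1.0]))         # m failure cells + survivor cell
    dg = np.concatenate(([0.0], -f, [0.0]))           # dF/dgamma at the cell boundaries
    ds = np.concatenate(([0.0], -tau * f, [0.0]))     # dF/dsigma at the cell boundaries
    I = np.zeros((2, 2))
    for k in range(1, len(edges)):
        p = edges[k] - edges[k - 1]                   # multinomial cell probability
        if p > 1e-12:
            g = np.array([dg[k] - dg[k - 1], ds[k] - ds[k - 1]])
            I += n * np.outer(g, g) / p               # sum of (grad p)(grad p)'/p
    return I

def objective(x):
    tau = np.sort(x)                                  # enforce tau_1 < ... < tau_m
    if np.any(np.diff(tau) < 1e-3):
        return 1e9                                    # penalize (near-)coincident times
    return -np.linalg.det(fisher_information(tau))    # maximize det <=> minimize -det

m = 5
result = dual_annealing(objective, bounds=[(0.01, 3.0)] * m)
optimal_tau = np.sort(result.x)
```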
Numerical result and discussion
In this section, we explore the optimal choice of inspection times by means of a numerical study.
The numerical study has been performed using different values of the number of inspections m and the sample size n, based on the optimality criteria discussed in Sections 3.1, 3.2, 3.3 and 3.4, respectively. First, we computed the inspection times by using the transformation τ_i = (t_i − γ)/σ. It is noted here that the optimal choice of the τ_i becomes independent of the population parameters (γ and σ) for all the criteria mentioned above. Given p_1 = ··· = p_{m−1} = 0, we computed the OS inspection times for each of the optimality criteria, in the form of the τ_i's, for m = 2, 3, ..., 10. The results are presented in Table 1, where the last three columns show the asymptotic relative efficiencies (AREs) of the estimates. Here, the ARE is defined as the ratio of the asymptotic variance of a parameter estimate in the complete-sample case to that in the PITI censored case. Thus, the AREs of the MLEs of γ and σ are obtained from V_11 and V_22, respectively. It may be noted that V_11/σ² and V_22/σ² are functions of the τ_i's only (i.e., independent of γ and σ).
Table 1 contains the optimal inspection times under the considered criteria along with the respective AREs. It is noted here that ARE(μ̂) is highest for the criterion discussed in Section 3.4, ARE(γ) is highest for the criterion described in Section 3.3, and ARE(σ) is highest for the criterion given in Section 3.2, among others, for all the considered choices of m. From the extensive numerical study that we carried out here, it is revealed that using any of these three optimality criteria will lead to similar results in terms of efficiency. Therefore, we shall primarily report the results based on the optimality criterion discussed in Section 3.3 in the subsequent paragraphs.
Table 2 presents the optimal length of the inspection interval for choosing the OES inspection times when m = 2, 3, ..., 9, 10, 15, 20 and p_1 = ··· = p_{m−1} = 0. Similar results were also obtained for the estimation of the OS inspection plan under other censoring schemes.
From Tables 1 and 2, we can see that as m increases the AREs increase under all the considered criteria, and for large m the AREs tend to 1. It is interesting to note here that, for the estimation of γ, the choice of optimum censoring times leads to consistently higher AREs than for the estimation of σ in the considered cases. It may further be seen from the tables that the performance of the estimates of γ and σ under PITI censoring will still be reasonably good if the number of inspections is chosen to be at least 5, and preferably 8 or more. For a comparison of the OS inspection scheme with the OES and EP inspection schemes based on ARE(γ), ARE(σ), ARE(μ) and entropy, respectively, we calculated the AREs for different choices of m when p_1 = ··· = p_{m−1} = 0, and the results are summarized in Table 3. It can easily be seen from the table that the ARE for the OS inspection plan is highest and that for the EP plan is least in all the cases. The AREs for OES lie in between the AREs under the OS and EP plans. Thus, on this basis, we may say that for the Rayleigh distribution, the OS inspection scheme is more suitable, as it provides the maximum ARE compared with the other schemes, irrespective of the parameter under deliberation.
To study the effect of variation in the values of m and n on the relative entropy, we considered a number of values for m and n, taking an arbitrary choice for the p_i's. The results obtained are presented in Table 4. It may be seen from the table that the relative entropy increases as m or n increases, but the increment in relative entropy is higher due to an increase in m than due to an increase in n.
We have noted above that the AREs of the estimates under an optimum choice of the inspection times depend on the parameter to be estimated. Suppose that we are interested in the estimation of both parameters; then neither the value of ARE(γ) nor the value of ARE(σ) alone can provide the overall performance of the two estimates. Therefore, we need a single summary quantity, and we report it for those rows in which the optimality criterion discussed in subsection 3.3 is used.
In earlier discussions, we kept the removal proportions fixed at p_1 = p_2 = ··· = p_{m−1} = 0, and our main interest was to discuss the optimal choice of inspection times for fixed removals. Moreover, the considered situation corresponds to the removals that result in the minimum loss of information. At this stage, one may say that situations do arise where the experimenter can control the removals, for example in engineering experiments, although it is true that the removal of units is based on the practical necessity of saving test units or cost. However, one could fix the expected proportion of removals h based on the time and cost of the experiment. As soon as h is fixed, the problem of optimum choice in the PITI censoring scheme reduces to the determination of the optimum inspection plan and the optimal removal scheme (p_1, p_2, ..., p_{m−1}, p_m = 1), where the total proportion of failures h in the experiment is pre-fixed. In this situation, the optimality problem is

max_{t_1 < ··· < t_m; p_1, ..., p_{m−1}} det I(γ, σ), (17)

subject to the constraint that the expected total proportion of failures in the experiment equals h. (18)

This scheme is called the generalized optimal spaced (GOS) inspection scheme. Further, we have also studied the effect of removals when the inspection times are chosen from Table 1. In this situation, the optimality problem becomes

max_{p_1, ..., p_m} det I(γ, σ),

subject to the constraint given in Eq. (18). The optimum values of the p_i's are computed and given in Table 6, under the GOS, OS, OES, and EP inspection schemes for n = 200, m = 5, and h = 0.5, 0.6, 0.7, 0.8. The first row of the table shows that if the experimenter chooses to use five inspection times and wishes to save 50% of the n = 200 units put under test, then the optimal removal scheme is 11.80%, 37.67%, 0%, 0%, and 100% removals of the live units at the five consecutive inspection times under the GOS inspection scheme. It may be recalled that when we chose the optimum inspection times and studied the effect of variation in the values of the p_i's, the least loss was observed when all p_i's were zero except p_m (= 1). However, from Table 6, we see that if we calculate the optimum inspection times and removal probabilities simultaneously, then non-zero p_i's appear in the solution. Evidently, the optimal censoring plan (i.e., the choice of inspection times and removal proportions simultaneously) will not be the same if different optimality criteria are used. The entries in the rows titled OS, OES and EP provide the optimum values of the removal proportions when the optimum inspection times are pre-calculated and fixed as per the optimization criteria discussed in Section 3. In this sense, the result can be viewed as a two-step optimization, where we optimize the inspection times first and then optimize the removal proportions. The last column of Table 6 shows the relative efficiencies of the selected inspection schemes with respect to the GOS inspection scheme. The resulting relative efficiencies of the OES and OS inspection schemes are 92% for h = 0.5 and 95% for h = 0.2. But the relative efficiency of the EP inspection plan is less than 80% in all the considered cases, which, once again, reveals that the OS and OES inspection schemes are more efficient than the EP inspection scheme. It is also observed that as the total removal proportion h decreases, the relative efficiency approaches one.
Conclusion
In this paper, we have contemplated the problem of planning a PITI censoring scheme for the Rayleigh distribution. It is noted that the inspection times obtained under the Shannon entropy criterion have either the highest ARE or an ARE close to the highest among all the considered criteria. Moreover, it is also observed that under the equal-spacing situation, the Shannon entropy criterion provides the smallest value for the spacing without compromising much in terms of ARE. Therefore, a PITI censored plan for the Rayleigh distribution may well be constructed by using the Shannon entropy criterion. In general, the use of the OS inspection scheme to construct the inspection plan reduces the required number of inspections significantly compared with the EP/OES inspection schemes for achieving the same level of relative efficiency. Thus, for the Rayleigh distribution, the OS inspection scheme is more appealing. Moreover, as the choice of inspection times is crucial for the efficiency of an experiment, the presented work may be productive for future research.
Table 1:
The inspection times under the different optimality criteria in terms of τ_i = (t_i − γ)/σ.
Table 2:
The optimal length of the inspection interval in terms of τ = (t − γ)/σ for OES inspection times using different optimality criteria when p_1 = ··· = p_{m−1} = 0.
Table 6:
Optimal progressive interval Type-I censoring plans under different inspection schemes for some selected failure rates when n = 200 and m = 5 | 2019-03-22T04:24:58.065Z | 2019-01-26T00:00:00.000 | {
"year": 2019,
"sha1": "96d343db1f76cef1c8c84ec9c89e5c3bd864e437",
"oa_license": "CCBY",
"oa_url": "https://ajs.or.at/index.php/ajs/article/download/781/658",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "96d343db1f76cef1c8c84ec9c89e5c3bd864e437",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
235557658 | pes2o/s2orc | v3-fos-license | Evolution of Old World monkeys and great apes links to massive and directional shrinkage of the dinucleotide short tandem repeat compartment
Background: The evolutionary trend of short tandem repeats (STRs) at the crossroads of speciation remains largely elusive and attributed to random evolution for the most part. To explore this trend, we selected nine species, which shared sequential chronological ancestors, including rat, mouse, olive baboon, gelada, macaque, gorilla, chimpanzee, bonobo, and human, and collected three sets of data on the abundance of all classes of dinucleotide STRs (≥6-repeats) for three regions of every chromosome, each region spanning 10 Mb of DNA. Results: In all three datasets, we found directional shrinkage of the dinucleotide STR compartment as follows: rodents>Old World monkeys>great apes (P=0.000). The decremented gradient observed for the dinucleotide STRs was not detected for a number of other classes of STRs, such as mono and trinucleotide STRs. Conclusion: We report the first instance of massive and directional gradient of STRs, which may link with the evolution of Old World monkeys and great apes.
While a limited number of studies indicate that purifying selection and drift can shape the structure of STRs at the inter- and intra-species levels [11][12][13][14][15][16], the global trend of STR evolution at the crossroads of primate speciation remains largely unknown.
The most common STRs in the human genome are dinucleotide repeats 17. Here we analyzed the evolutionary trend of this category of STRs in nine selected species encompassing rodents, Old World monkeys, and great apes.
Materials And Methods
Extraction of STRs from genomic sequences.
The written program was based on perfect (pure) STRs. By using the REST API service 18 from Ensembl 101 (https://asia.ensembl.org) 19,20, data for three arbitrary regions of every chromosome, each region spanning 10 megabases (Mb) of genomic DNA, were accessed in the nine species (Fig. 1). In each chromosome, the STR abundances were calculated and compared on a chromosome-to-chromosome basis (Suppl. 2). Subsequent to collecting the entire dataset, we also differentiated the STRs based on their length into two classes, 6-20 repeats and >20 repeats, and studied their abundance in the selected species. Finally, the data for the selected regions of the chromosomes in the nine species were aggregated and analyzed.
Additionally, to compare the trend of dinucleotide STRs with other classes of STRs, we used the STR-Finder tool to screen the selected regions and species for mononucleotide STRs (T, G, A, and C) of ≥6 repeats and trinucleotide STRs (GCC/GGC) of ≥6 repeats.
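Although the authors' program is not reproduced here, perfect-repeat detection of this kind can be sketched with a regular expression; the snippet below is an illustrative stand-in, and the test sequence is hypothetical.

```python
import re

def count_pure_strs(seq, motif_len=2, min_repeats=6):
    """Count non-overlapping perfect STRs of the given motif length."""
    pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_repeats - 1))
    # For motif_len > 1, skip homopolymer runs (e.g. (AA)n is not a true dinucleotide STR).
    return sum(1 for m in pattern.finditer(seq.upper())
               if motif_len == 1 or len(set(m.group(1))) > 1)

print(count_pure_strs("TTACACACACACACACGGT"))  # -> 1, one (AC)7 repeat
```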
Statistical analysis
The dinucleotide STR abundance trend in the nine selected species was compared across datasets 1, 2, and 3 by the correlation coefficient and repeated-measures analysis (Table 1).
DS: Dataset
Comparisons within and between classes were analyzed using one- and two-way ANOVA tests. These analyses were confirmed by nonparametric tests.
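A minimal version of this comparison can be scripted as follows; the count vectors are placeholders rather than the study's data, and the Kruskal-Wallis test stands in for the unspecified nonparametric confirmation.

```python
from scipy.stats import f_oneway, kruskal

rodents = [412, 398, 405]        # per-region dinucleotide STR counts (hypothetical)
owm = [301, 290, 310]            # Old World monkeys
great_apes = [244, 251, 238]

f_stat, p_anova = f_oneway(rodents, owm, great_apes)     # one-way ANOVA
h_stat, p_kw = kruskal(rodents, owm, great_apes)         # nonparametric check
print(f"ANOVA p = {p_anova:.4f}; Kruskal-Wallis p = {p_kw:.4f}")
```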
Results
Overall directional shrinkage of the dinucleotide STR compartment in Old World monkeys and great apes vs. rodents.
In three independent analyses, we studied the distribution of dinucleotide STRs with respect to their abundance across rodents and primates. The observed trend was strikingly decremental, as follows: rodents > Old World monkeys > great apes, P = 0.000 (Table 1, Fig. 2). That trend was replicated in datasets 1, 2, and 3 (P = 0.80).
Differential gradient of dinucleotide STRs based on their length.
The directional gradient, rodents > Old World monkeys > great apes, was found to lie predominantly in the dinucleotide STRs of 6-20 repeats (Fig. 3). While the >20-repeat compartment was the most dramatically affected with respect to shrinkage in primates vs. rodents, this compartment was the largest in human in comparison with the remaining six primate species studied.
Differential gradient of STR classes in rodents vs. Old World monkeys vs. great apes.
To examine whether the observed trend in dinucleotide STRs can be generalized to other STR classes, we analyzed the abundance trend of mononucleotide STRs (G, A, T, and C) of ≥6 repeats (Fig. 4) and trinucleotide STRs (GGC/GCC) of ≥6 repeats (Fig. 5) in the selected species. While the trend in dinucleotide STRs was a decremented gradient (rodents > Old World monkeys > great apes), a similar trend was not detected in the mono- (Fig. 4) and trinucleotide STR (Fig. 5) compartments. In fact, the dramatic excess of the dinucleotide compartment observed in rodents was not observed for the mono- and trinucleotide STRs.
Discussion
It is largely unknown whether at the crossroads of speciation, STRs evolved as a result of purifying selection, genetic drift, and/or in a directional manner. In an attempt to resolve part of the picture, we selected multiple species that shared sequential chronological ancestors, and investigated all possible dinucleotide STRs of all possible lengths (≥ 6-repeats). Our analysis revealed an overall directional gradient in the abundance of dinucleotide STRs during primate speciation, evidenced by the following trend: rodents > Old World monkeys > great apes (Fig. 6).
Dinucleotide STRs are the most abundant class of STRs in vertebrate genomes, and their global pattern of abundance may shed light on a vastly unknown aspect of evolutionary biology. The replicated trends observed in our three datasets seem to be independent of the genome size of the selected species. Mouse and rat have the highest abundance of dinucleotide STRs in comparison to the seven selected primate species, and yet their genomes are smaller than those of these species. This finding is in line with previous reports of a lack of relationship between genome size and abundance of STRs 21,22.
An alternative hypothesis to the directional shrinkage of the dinucleotide STR compartment in primates vs. rodents is that this compartment has expanded excessively in rodents. Indeed, rodent genomes appear to be significantly rich in STRs in comparison to several other mammals 11. However, our present findings indicate that the above property cannot be generalized to all classes of STRs, as, at least, mono- and trinucleotide STRs did not show the dramatic excess observed in the dinucleotide compartment in rodents. Furthermore, the decremented gradient of the dinucleotide STR compartment in the following order: rodents > Old World monkeys > great apes supports the shrinkage hypothesis for the dinucleotide compartment.
It is possible that there is a mathematical threshold required for the abundance of STRs in various orders of species (Fig. 6). This is in line with the hypothesis that STRs function as scaffolds for biological computers 23 .
Certain STRs located in the core promoters of protein-coding genes have been subject to contraction in the process of human and non-human primate evolution 24. A number of those STRs are identical in formula in primates vs. non-primates, and the genes linked to those STRs are involved in characteristics that have diverged primates from other mammals, such as craniofacial development, neurogenesis, and spine morphogenesis. It is likely that those STRs functioned as evolutionary switch codes for primate speciation. In line with the above, structural variants are enriched near genes that diverged in expression across great apes 25,26. It is speculated that STR variants are more likely than single-nucleotide variants to have epistatic interactions, which can have significant consequences for complex traits in humans as well as in model organisms 27,28. Future studies, such as large-scale genome editing of STRs 29 in embryonic stem cells and investigation of their differentiation into various cell lineages, may be candidate approaches to investigate how the massive and dramatically diverged trend of dinucleotide STRs links to primate speciation and evolution.
Conclusion
In conclusion, we propose that the massive and directional shrinkage of the dinucleotide STR compartment links to, and probably had a determining impact on, primate speciation. This is a prime instance of a non-random STR gradient across multiple speciation events.
Authors' contributions: MA collected data and performed the bioinformatic analyses. MS performed the biostatistics analyses. IA contributed to data collection. MO conceived, designed, and supervised the project, and wrote the manuscript.
Figure 1: Schematic representation of data collection for the mono-, di-, and trinucleotide STR compartments. All chromosomes were screened across the nine selected species in three datasets 1, 2, and 3. Only one chromosome is depicted as an example.
Figure 2
Massive directional shrinkage of the dinucleotide STR compartment in primates, replicated in datasets 1, 2, and 3.
Figure 3
Abundance of various dinucleotide STR lengths across the nine selected species. The directional decremented gradient was predominantly laid in the 6-20 length compartment. While the >20-repeat compartment was the most dramatically affected as a result of shrinkage in primates in comparison with rodents, human had the highest abundance of the >20-repeat STRs in comparison with all other primates studied.
Figure 4
Mononucleotide STR trend across rodents and primates. In contrast to the dinucleotide compartment, the trend in the mononucleotide STRs was not decremented in primates vs. mouse.
Figure 5
Trinucleotide STR trend across rodents and primates. In contrast to the dinucleotide compartment, the trend in the trinucleotide STRs was not decremented in primates vs. mouse. | 2021-06-22T17:55:39.482Z | 2021-04-26T00:00:00.000 | {
"year": 2021,
"sha1": "b4114371a80e6e64dffac6a38d1674e2d3b0593d",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-441796/v1.pdf?c=1619468564000",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "eac273188be82cade21900422138d7de74cff089",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
10360427 | pes2o/s2orc | v3-fos-license | Genetic Profiles of Korean Patients With Glucose-6-Phosphate Dehydrogenase Deficiency
Background We describe the genetic profiles of Korean patients with glucose-6-phosphate dehydrogenase (G6PD) deficiencies and the effects of G6PD mutations on protein stability and enzyme activity on the basis of in silico analysis. Methods In parallel with a genetic analysis, the pathogenicity of G6PD mutations detected in Korean patients was predicted in silico. The simulated effects of G6PD mutations were compared to the WHO classes based on G6PD enzyme activity. Four previously reported mutations and three newly diagnosed patients with missense mutations were estimated. Results One novel mutation (p.Cys385Gly, labeled G6PD Kangnam) and two known mutations [p.Ile220Met (G6PD São Paulo) and p.Glu416Lys (G6PD Tokyo)] were identified in this study. G6PD mutations identified in Koreans were also found in Brazil (G6PD São Paulo), Poland (G6PD Seoul), United States of America (G6PD Riley), Mexico (G6PD Guadalajara), and Japan (G6PD Tokyo). Several mutations occurred at the same nucleotide, but resulted in different amino acid residue changes in different ethnic populations (p.Ile380 variant, G6PD Calvo Mackenna; p.Cys385 variants, Tomah, Madrid, Lynwood; p.Arg387 variant, Beverly Hills; p.Pro396 variant, Bari; and p.Pro396Ala in India). On the basis of the in silico analysis, Class I or II mutations were predicted to be highly deleterious, and the effects of one Class IV mutation were equivocal. Conclusions The genetic profiles of Korean individuals with G6PD mutations indicated that the same mutations may have arisen by independent mutational events, and were not derived from shared ancestral mutations. The in silico analysis provided insight into the role of G6PD mutations in enzyme function and stability.
INTRODUCTION
Glucose-6-phosphate dehydrogenase (G6PD) deficiency is the most prevalent X-linked enzymopathy. G6PD is the first enzyme in the pentose phosphate pathway, and NADPH generated by the pathway provides an important source for intracellular reduction, particularly for red blood cells (RBCs) [1]. Since G6PD is the only NADPH-producing enzyme in RBCs, its activity in these cells provides defense against oxidative damage. Acute hemolytic anemia is a common clinical symptom of the deficiency, but G6PD-deficient individuals usually have no clinical manifestations and remain asymptomatic until they are exposed to a hemolytic trigger. The triggers include various exogenous agents, such as infection and hemolysis-inducing drugs, and can each cause jaundice, hyperbilirubinemia, and hemoglobinuria. When a G6PD deficiency is suspected, a patient receives various tests, including a complete blood count (CBC) with reticulocyte count, direct and indirect bilirubin levels, lactate dehydrogenase (LDH), Coombs test, and G6PD enzyme activity. A genetic analysis by G6PD sequencing is also available.
According to the WHO classification, G6PD deficiency is divided into five classes on the basis of the severity of the enzyme deficiency as measured by the level of RBC G6PD activity and clinical manifestations [2]. The majority of patients with G6PD deficiency belong to Class II, characterized by a severe enzyme deficiency, but rare G6PD-deficient individuals fall into Class I, with an even more severe enzyme deficiency related to chronic non-spherocytic hemolytic anemia (CNSHA). Genetic diagnostic methods can be used to identify asymptomatic patients who are not in an acute aggravation state, even those with a Class IV G6PD deficiency, with enzyme activity levels within the normal, reference range, but who have the potential for aggravation in response to triggers.
Since G6PD Riley and Guadalajara were first reported by our institute [3,4], two additional G6PD deficiency patients have been genetically confirmed in Korea [5,6]. We describe three more Korean cases of genetically confirmed G6PD deficiency, covering the laboratory profiles of all seven patients, including the previously reported cases, and investigate the mutations in G6PD using an in silico approach. We also compare the simulated effects of the G6PD mutations with the WHO classes according to the level of enzyme activity in RBCs and clinical manifestations.
Patients
All seven known Korean male patients with G6PD mutations, including four previously reported cases, were examined. The seven patients experienced episodes of acute aggravation of hemolytic anemia with decreased G6PD enzyme activity. Among them, three patients were newly diagnosed as G6PD-deficient in this study. The G6PD enzyme activity levels in the RBCs of these three patients were 10.5, 2.1, and 0.8 U/g Hb, respectively (reference range for men: 7.9-16.3 U/g Hb). The study protocol was approved by the Institutional Review Board of The Catholic University of Korea, and written informed consent for clinical and molecular analyses was obtained from the three newly diagnosed cases.
Biochemical analysis of G6PD enzyme activity levels
A spectrophotometric assay was used to quantify G6PD enzyme activity (Ben S.r.l. Biochemical Enterprise, Milan, Italy) by measuring the formation of NADPH molecules (based on absorbance at 340 nm). Fluorescence was detected by using a Hitachi U-3010 UV-Visible, Scanning Spectrophotometer (Hitachi, Tokyo, Japan).
Direct sequencing
A genetic analysis was performed by direct sequencing of the G6PD. Genomic DNA was extracted from peripheral blood by using a QIAamp DNA Mini Kit (Qiagen GmbH, Hilden, Germany). Entire coding exons and flanking intronic sequences of G6PD were amplified by PCR using different combinations of 11 primer sets designed using Primer3 (http://bioinfo.ut.ee/primer3/) by the authors. Direct sequencing of PCR products was performed by using the BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems, Foster City, CA, USA), and the products were resolved on the ABI 3130XL Genetic Analyzer (Applied Biosystems). Sequence electropherograms were analyzed by using Sequencher 4.9 (Gene Codes, Ann Arbor, MI, USA). The G6PD sequence with RefSeq ID NM_001042351.2 was used as a reference for cDNA nucleotide numbering. All identified variants were confirmed by bidirectional resequencing.
1) Sequence variant databases
Other G6PD mutations at the identified amino acid residues were retrieved from the Human Gene Mutation Database (HGMD, http: //www.hgmd.cf.ac.uk/) [7], ClinVar (http://www.ncbi.nlm.nih.gov/ clinvar/) [8], and the 1000 Genomes Project (http://browser.1000 genomes.org/) [9]. The Exome Aggregation Consortium (ExAC, http://exac.broadinstitute.org/) was also searched; this source provides exome sequencing data from a variety of large-scale sequencing projects, including 60,706 unrelated individuals, and serves as a useful reference set of allele frequencies for severe disease studies. An allele frequency of less than 0.001 indicates that a variant is rare in the healthy population and the variant can be interpreted as a pathogenic mutation. ated by next-generation sequencing and high-resolution comparative genomic hybridization arrays, were also searched.
The three patients newly diagnosed with G6PD deficiency in this study were characterized as follows. The patient in case 5 was a 3-yr-old boy who suffered from fever and jaundice. He had no family history of hematologic disorders. CBC showed the following: Hb 0.96 g/L, mean corpuscular volume (MCV) 101.3 fL, mean corpuscular Hb (MCH) 32.3 pg, mean corpuscular Hb concentration (MCHC) 31.9%, reticulocytes 17.18%, and undetectable G6PD enzyme activity. The Coombs test and osmotic fragility test were negative. After one week, he recovered from the acute aggravation. The G6PD enzyme level increased to 10.5 U/g Hb, and the G6PD deficiency was categorized as Class IV according to the WHO classification. Sanger sequencing of G6PD revealed a c.660C > G (p.Ile220Met) mutation, which was previously reported as G6PD São Paulo [18].
The patient in case 6 was a 5-yr-old boy who suffered from a fever and a pale appearance. He had no family history of hematologic disorders. He visited our hospital after recovery from the acute aggravation. The Coombs test and the osmotic fragility test were negative. The G6PD enzyme level was 2.1 U/g Hb without acute aggravation. His condition was classified as Class I G6PD deficiency according to the WHO classification, owing to the severely decreased enzyme activity with CNSHA. Sanger sequencing revealed a c.1153T > G (p.Cys385Gly) mutation in G6PD. This mutation has not been reported previously; therefore, we named it G6PD Kangnam. Interestingly, mutations affecting the same residue as that of G6PD Kangnam, but resulting in different amino acid changes, p.Cys385Arg (G6PD Tomah) [22], p.Cys385Trp (G6PD Madrid) [23], and p.Cys385Phe (G6PD Lynwood) [24], have been previously described in different ethnic groups.
The patient in case 7 was a 2-yr-old boy who had a fever and a pale appearance. He had a family history of anemia (maternal uncle and younger brother). With acute aggravation, the G6PD level was 0.8 U/g Hb. His condition was classified as Class I G6PD deficiency according to the WHO classification. Sanger sequencing revealed a c.1246G > A (p.Glu416Lys) mutation, which is known as G6PD Tokyo [19].
Four additional mutations were previously reported in Korean individuals. A case study of patient 2, with the G6PD Seoul mutation, was not published, but was described in a later study [5]. The mutation was c.916G > A in exon 9 and a patient with the same mutation was classified as Class II in a later report (the G6PD activity level was 1.6 U/g Hb and there was no CNSHA). Patient 1 was a 7-yr-old boy who presented with mild jaundice and dyspnea. He had neonatal jaundice and recurrent nonspherocytic hemolytic anemia with type B hepatitis. CBC showed an Hb of 0.26 g/L and G6PD activity of 1.8 U/g Hb. On the basis of CNSHA, the patient was Class I according to the WHO classification. He had three brothers who died from severe neonatal jaundice and acute hemolytic anemia. The mutation was c.1139T >C on exon 10 of G6PD. Patient 3 was a 22-month-old boy who presented with jaundice and anemia. He had no family history of hematologic disorders. CBC results were as follows: Hb 0.62 g/L, MCH 33.3 pg, MCHC 31.2%, MCV 106.6 fL and reticulocyte 10.4%. A peripheral blood smear revealed anisocytosis, poikilocytosis, and macrocytosis. Total bilirubin was 5.56 mg/dL, direct bilirubin was 0.57 mg/dL, LDH was 1,329 U/L, and G6PD activity was 0.5 U/g Hb. The patient belonged to Class I according to the WHO classification. Based on sequencing, the mutation was c.1159C > T in exon 10 of G6PD. Patient 4 was a 20-month-old boy who presented with chronic hemolytic anemia and several episodes of acute exacerbation. His Hb level was 1.02 g/L, and G6PD enzyme level was 0.2 U/g Hb. The patient belonged to Class I according to the WHO classification. The mutation was documented as c.1187C > G in exon 10.
Effect of mutations on disease manifestation
In an in silico analysis, G6PD mutations associated with Class I or II were predicted to be highly deleterious on the basis of their effects on protein structure and function, whereas the effects of Class IV mutations were unclear. Scorecons, an evolutionary conservation predictor, showed that the residues were located in highly conserved regions (Table 2). In terms of evolution-based sequence diversity, the overall diversity based on all position scores for the G6PD protein sequence was 0.529, ranging from 0 for non-conserved residues to 1 for highly conserved residues. Mutations affecting the same residue, such as p.Ile380, p.Cys385, p.Arg387, and p.Pro396, were predicted to be pathogenic on the basis of codon changes (Table 3). FoldX predicted alterations in protein stability, except for G6PD São Paulo, which has been reported in the ExAC database at an extremely low frequency, i.e., 1.141 × 10^-5 (Fig. 1A and B, Table 4). For the Class IV mutation, G6PD São Paulo, an in silico analysis using SIFT, PROVEAN, PolyPhen-2, Align-GVGD, and the FoldX implemented in SNPeffect 4.0 predicted a relatively mild effect, consistent with the observed enzyme activity levels.
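To read the FoldX output against the qualitative labels used in Figure 1, one can map ddG values onto stability categories. The cutoffs in the sketch below are inferred from the reported values and are assumptions, not SNPeffect's documented thresholds.

```python
def stability_label(ddg_kcal_per_mol):
    """Map a FoldX ddG value to the qualitative wording used in Figure 1."""
    if ddg_kcal_per_mol <= -0.5:
        return "slightly enhanced stability"
    if ddg_kcal_per_mol < 0.5:
        return "no effect on stability"
    if ddg_kcal_per_mol < 1.0:
        return "slightly reduced stability"
    return "reduced stability"

for name, ddg in [("p.Ile220Met", -0.38), ("p.Cys385Gly", 0.65), ("p.Pro396Arg", 3.19)]:
    print(name, stability_label(ddg))   # reproduces the Figure 1 wording for these values
```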
DISCUSSION
G6PD deficiencies selectively affect RBCs via two mechanisms. First, most known mutations decrease enzyme stability. Since these cells do not have the ability to synthesize proteins and replenish their enzyme levels, the enzyme level decreases as cells age during their 120-day lifespan in circulation. Second, G6PD activity is decreased, and the diminished ability of RBCs to withstand stress increases the risk of destruction by hemolysis.
To date, approximately 186 G6PD mutations have been documented, most of which (85%, 159/186) are single-nucleotide substitutions leading to missense variants [25]. This and previous studies have revealed identical G6PD mutations in different countries around the world. Since 1999, when G6PD Riley was reported at our institute, seven G6PD mutation types have been identified in Korea. The same mutations have also been found in Brazil (G6PD São Paulo) [18], Poland (G6PD Seoul) [26], the Americas (G6PD Riley) [21], Mexico (G6PD Guadalajara) [20], and Japan (G6PD Tokyo) [19]. In addition, several mutations can occur at the same nucleotide position but result in different amino acid changes in different ethnic populations. For example, p.Cys385 variants have been reported as G6PD Kangnam as well as G6PD Tomah [22], Madrid [23], and Lynwood [24]. In addition, p.Pro396 variants have been described as p.Pro396Arg in Korean individuals, and a p.Pro396Ala variant has been reported in India.
Figure 1: Predicted effects of G6PD mutations on protein stability. (A, B) The mutation from Ile (red in A) to Met (red in B) at position 220 resulted in a ddG of -0.38 kcal/mol. This implies that the mutation had no effect on protein stability. (C, D) The mutation from Gly (red in C) to Ser (red in D) at position 306 resulted in a ddG of 2.81 kcal/mol. This implies that the mutation reduced protein stability. (E, F) The mutation from Ile (red in E) to Thr (red in F) at position 380 resulted in a ddG of 0.56 kcal/mol. This implies that the mutation slightly reduced protein stability. (G, H) The mutation from Cys (red in G) to Gly (red in H) at position 385 resulted in a ddG of 0.65 kcal/mol. This implies that the mutation slightly reduced protein stability. (I, J) The mutation from Arg (red in I) to Cys (red in J) at position 387 resulted in a ddG of 2.18 kcal/mol. This implies that the mutation reduced protein stability. (K, L) The mutation from Pro (red in K) to Arg (red in L) at position 396 resulted in a ddG of 3.19 kcal/mol. This implies that the mutation reduced protein stability. (M, N) The mutation from Glu (red in M) to Lys (red in N) at position 416 resulted in a ddG of -0.86 kcal/mol. This implies that the mutation slightly enhanced protein stability.
This study examined the roles of G6PD mutations from evolutionary and structure-based computational points of view. Further investigations involving more Koreans with G6PD deficiencies are needed to clarify the relationship between the clinical manifestations and the mutational spectrum related to this enzyme deficiency.
"year": 2016,
"sha1": "b8c74792fcce6b6108ab1b3096d63f03f0f34523",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3343/alm.2017.37.2.108",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b8c74792fcce6b6108ab1b3096d63f03f0f34523",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
169190727 | pes2o/s2orc | v3-fos-license | Risk Governance Structure and Firm Performance: An (Exploratory) Empirical Study in Indian Context
The study attempts to explore the relationship between risk governance structure and firm performance. In perhaps the first attempt of its kind, a normative framework for risk governance structures is put forward. Based on the framework, an index indicating the strength/quality of risk governance structures is proposed. Then, the impact of risk governance structure on firm performance is gauged. To this end, the study makes use of the constituents of the S&P CNX500 index and covers a ten-year period from April 1, 2005 to March 31, 2015. To control for potential endogeneity among the variables of interest, the study makes use of a robust and reliable methodology, 'difference GMM'. In addition, to ensure completeness of results, the study employs control variables such as a recession dummy and the firm's age, size, growth rate and leverage ratio. The results suggest that robust risk governance structures do not necessarily lead to better firm performance. In fact, the risk governance index is negatively related to both ROA and ROE. The relationship is not statistically significant but has wide economic implications. A prominent implication is that the mere constitution of a risk management committee and appointment of a CRO will not improve firm performance; regulators and companies need to ensure that governance structures are not too rigid, excessively risk averse, or ineffective and inefficient in decision making. Given the simplicity and reliability of the proposed risk governance index, and the recommendations put forth in the paper, the study is expected to be of immense utility in an important yet neglected area of risk governance.
Introduction
One of the basic objectives of business is to generate sufficient returns for its various stakeholders, particularly shareholders and lenders of long-term finance. But this objective is fraught with risks (internal as well as external).
Since a company is an artificial person, the responsibility for managing these risks, and for directing and supervising the company, vests with its Directors. Therefore, the structure and composition of the Board of Directors become significant factors in determining the success of risk management in a firm.
In addition, every organization has separation of ownership and control. This invariably leads to agency problems (Jensen and Meckling, 1976). Management may be inclined to waste shareholders' resources to satisfy its exploitative purposes. Therefore, excessive risk taking may only be curbed through an effective and efficient risk governance mechanism, which in turn depends on a robust and resilient risk governance structure.
The literature is rife with corporate governance indices and studies of their relationship with different business parameters like risk, return, and profitability. But few studies focus on risk governance and risk governance structure. Further, the available studies are largely in the context of financial entities. Therefore, this study is perhaps the first of its kind to focus on risk governance in non-financial entities.
Further, an efficient risk governance structure is believed to be pivotal in setting tolerable risk limits, appropriate risk-appetite standards, risk policy, and risk culture in an organization. Recognizing this importance of risk governance, the study proposes a normative framework for risk governance structure and then goes on to gauge the impact of this structure on firm performance.
A number of studies document that certain governance structures may be drivers of firm performance, but these studies have been widely criticised, as most of them tend to ignore potential endogeneities. Critics argue that there is a possibility (of reverse causality) that performance drives governance, or that some third unobservable factor influences both governance and performance (Wintoki et al., 2012). Recognising this (potential) endogenous nature of the relationship between corporate governance and firm performance, Bhagat and Black (2002) suggest that estimation techniques which overlook this pertinent issue are unreliable. Following this stream of thought, Wintoki et al. (2010) advocate the use of the generalized method of moments (GMM), a technique that is apt in dealing with endogeneity and simultaneity bias.
Therefore, the study uses 'difference GMM' with control variables to gauge the relationship between risk governance structure and firm performance.
The remainder of the paper is organized as follows. Section 2 highlights the relevance of risk governance. Section 3 describes the sample used and the sources of data. Section 4 elaborates the methodology employed for index construction and for analysis. Section 5 examines the findings and presents the analysis of the same. This is followed by concluding observations.
Background and review of literature
With international organizations such as the Financial Stability Board (FSB, 2013) and the Committee of Sponsoring Organizations (COSO, 2013) focusing extensively on risk governance, it seems imperative to examine whether the structures based on the recommendations provided in these guidelines are actually serving their intended purpose. Such studies are particularly required for emerging countries (such as India and China) that are in a transitory phase in terms of governance reforms. They will help policy makers and regulators examine whether the regulations are actually making a contribution and whether compliance with the regulations is a mere eyewash. Further, the concept of risk governance, being a recent phenomenon, is narrowly researched; whatever research exists is mainly for financial entities.
Corporate governance and firm performance is a widely researched phenomenon, with results ranging from positive association (Brookman and Thistle, 2009; Pan et al., 2013) to negative association (Bebchuk et al., 2009). In fact, a few studies have documented no relation at all. It is worth mentioning that not just corporate governance (Noor and Ayoub, 2009) and related measures, but also other firm-specific factors like age, size, leverage (Mancinelli and Ozkan, 2006), growth (Damodaran, 2006), and DPS (Kamunde, 2011) could be significant factors affecting firm performance.
In a recent study, Abu-Ghunmi et al. (2015) emphasized that corporate governance mechanisms not only affect firm performance but could also provide a plausible explanation for idiosyncratic risk. Their arguments are in line with those of Baxter and Cotter (2009) and Davidson et al. (2005), who evidenced that the composition of the audit committee and the proportion of non-executive directors are significant contributors towards improvement in earnings quality. In a similar study, Huang and Wang (2015) emphasized the importance of board size for firms' riskiness.
Unlike most studies, which have linked specific governance indicators with a firm's riskiness, Jiraporn et al. (2015) explored the relationship between governance and risk by using a composite governance indicator. They consider two contrasting hypotheses: first, the risk-avoidance hypothesis, and second, the risk-seeking hypothesis. In the context of the risk-avoidance view, they posit that weak governance structures result in lower risk taking. Pursuing the risk-seeking hypothesis, Lee et al. (2006) suggest that better governance may reduce firm-specific risks.
It is noteworthy that there are empirical evidences and studies focusing on corporate governance and risk, but no particular study focusing on risk governance could be found. Also, in terms of corporate governance, the studies have either looked at specific governance mechanisms (like CEO duality, board size, audit committee composition, etc.) or existing composite governance measures like the GIM index, developed by Gompers et al. (2003).
It is pertinent to note that corporate governance has varying definitions and encompasses a plethora of variables, whose relevance varies as per the context. In the context of risk and risk taking, focus needs to be put on a specialized subset of corporate governance, called risk governance. Risk governance has been defined as "the ways in which directors authorize, optimize, and monitor risk taking in an enterprise. It includes the skills, infrastructure (i.e., organization structure, controls and information systems), and culture deployed as directors exercise their oversight" (International Finance Corporation (IFC)). Since risk governance is the specialized arm of corporate governance that deals exclusively with risk and risk management, it appears reasonable to believe that the quality of the governance structure would have an impact on firm performance, which could be viewed as a function of the risk levels of the company.
In view of the above, the primary purpose of the paper is to explore the relation between risk governance structure and firm performance. In other words, the study attempts to gauge whether a better risk governance structure leads to better firm performance.
Sample
The sample consists of non-financial companies that constitute the CNX 500 index as on March 31, 2014.
Methodology
The main objective of the paper is to gauge whether a better risk governance structure leads to better firm performance. Therefore, as a first step, a risk governance index (RGI) (which will be the independent variable) has been developed. The index is based on nine variables, namely, size of board, board diversity in terms of gender, proportion of executive directors, executive/non-executive status of the Chairperson, proportion of independent directors, CEO duality, existence of a Chief risk officer (CRO), risk management committee, and whistle blower policy. These variables are scored on a scale of 1 to 5, with the exception of the status of the Chairperson and CEO duality (which have been scored on a dichotomous scale with a score of 3 or 5). Further, it is worth mentioning that in respect of some of the above-mentioned variables, there are certain legal/statutory requirements under the provisions of the Companies Act 2013 and/or Clause 49 of the Listing Agreement (entered into with SEBI). Where there is a legal/statutory requirement in respect of the above-mentioned variables and there is non-compliance with such requirement, a score of one has been assigned. Though there was no legal requirement for the larger part of the study period, especially in the context of the CRO, risk management committee and whistle blower policy, a score of 1 has still been assigned in the event of their non-existence. This view has been taken in light of the fact that the study focuses on risk governance structure; in other words, recognising the importance of these variables in the context of risk governance structure, their absence has been equated with non-compliance with a legal/statutory requirement. Based on these variables, the minimum score a company can have is 15 and the maximum possible is 55. For ease of comprehension, the score obtained by every company has been expressed as a percentage of the maximum possible score, i.e. 55. To illustrate, if a company scores 22, then the index will be presented as 40 per cent, i.e. (22/55)*100.
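To make the normalization concrete, a minimal Python sketch of the index calculation is given below; the function and the example score are illustrative only, and the per-variable scoring rules follow the tables referenced later in the paper.

import numpy as np

def risk_governance_index(total_score, max_possible=55):
    """Express a company's total governance score as a percentage
    of the maximum possible score (55 in the scheme above)."""
    return 100.0 * total_score / max_possible

# The paper's own illustration: a total score of 22 maps to
# (22/55)*100 = 40 per cent.
print(risk_governance_index(22))   # 40.0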
The rationale for each of the variables is discussed below.
(1) Number of board of directors
Boards that are below the minimum legal requirement in terms of size have been considered inappropriate by Chen et al. (2007). Jensen (1993) asserts that "when boards get beyond seven or eight people, they are less likely to function effectively". Similarly, Lipton and Lorsch (1992) suggested limiting the number of directors to ten people, with an ideal of eight or nine members.
Status of Chairman: Executive Chairman = 3; Non-executive Chairman = 5.

Appointed a CRO: No = 1; Yes = 5.

Implemented a whistle blower policy: No = 1; Yes = 5.

Control variables
Recession-The period of study is of particular importance as it includes the recession period, which impacted the world economy towards the second half of 2008. As per the United Nations Conference on Trade and Development (UNCTAD) investment brief (November 1, 2009), the year 2008 marked the end of a growth cycle in global foreign direct investment; worldwide flows came down by more than 20 per cent. This global financial crisis reduced access to financial resources internally as well as externally (Singh et al., 2012). Thus, the study considers two phases: April 1, 2005 to March 31, 2008 (2006-2008) as Phase I (pre-recession period) and April 1, 2008 to March 31, 2015 (2009-2015) as Phase II (post-recession period). A dummy variable has been used for the purpose.
Age-Abernathy and Utterback (1978) have highlighted the significance of firms' lifecycle on strategic decisions.
They suggest that younger firms have limited knowledge base and that is reflected in their governance structures.
Whereas, certain other studies show that older firms exhibit rent seeking behaviour and poorer corporate governance.
Further, firm age has been linked to strategic decisions of the firm, and it has been observed that complexity increases with firm age. The number of years the firm has been in existence since its inception has been taken as the proxy for firm age.
Firm size-The effect of firm size on governance is ambiguous (Klapper and Love, 2004). It is suggested that large firms may have severe agency problems and therefore need to compensate with stricter governance mechanisms.
Alternatively, small firms may have better growth opportunities and a greater need for external finance, leading to better governance mechanisms. The natural log of total assets has been used to proxy size (Akbar et al., 2016).
Growth-Studies have suggested that the growth rate/growth opportunities available to a company may affect its performance. Similarly, Durnev and Kim (2003) show that growing companies tend to exhibit higher returns to various stakeholders. Therefore, it is imperative to control for the growth of the company.
Leverage-Traditionally, the financing pattern of a company has been viewed as a significant determinant of firm performance. The capital structure of a firm is expected to have an impact on performance, as the pecking order theory suggests a negative relation between corporate profitability and debt ratios (Fama and French, 2002). Concepts including 'trading on equity' are often employed to magnify returns for a particular group of stakeholders (Khan and Jain, 2014).
Dependent variables
Firm performance could be measured using either accounting measures or market-based measures or both. The study uses accounting-based measures only, as stock-market-based measures like stock returns could be unduly affected by investor perception (Bhagat and Bolton, 2008).

Return on assets (ROA)-ROA has often been applied to understand the operating efficiency of a firm. Though it measures the profitability of total funds, it throws no light on the profitability of different sources of funds. The use of ROA as a proxy for firm performance is quite prevalent in studies dealing with governance aspects (Brick et al., 2006; Cheng, 2008; Jackling and Johl, 2009; Brown and Caylor, 2005).

Return on equity (ROE)-ROE helps to gauge profitability from the owners'/equity shareholders' point of view (Zabria et al., 2016) and is a widely accepted measure of performance (Johnson and Greening, 1999).

Given the panel nature of the data, panel data regression appears to be the appropriate technique. Panel data analysis provides several advantages over pooled OLS regression: it facilitates consideration of individual/firm-specific heterogeneities that may have an impact on the dependent variable, and provides more informative data, more degrees of freedom and more efficiency (Baltagi, 2005). Further, Wintoki et al. (2010) suggest three potential sources of endogeneity that may exist in panel data structures: (i) Dynamic endogeneity-it occurs when the preceding periods' values of a variable influence its current period values. This form of endogeneity has often been observed in studies dealing with the corporate governance-performance relationship (Hermalin and Weisbach, 1998). (ii) Simultaneity-it occurs when two variables simultaneously affect each other, resulting in their co-determination. (iii) Unobserved heterogeneity-a phenomenon where the relationship between two variables is affected by some third unobservable variable. In general, these effects may be attributed to firm-specific characteristics or firm-fixed effects (Haubrich and Ritter, 1996).

The most common solution to endogeneity problems is the use of lagged dependent variables or instrumental variables. The estimation techniques that may be employed are OLS, fixed effects (FE) or dynamic panel data (DPD) GMM. If OLS is used for estimation, it typically results in an upward bias in the coefficient of the lagged dependent variable (Bond, 2002). Similarly, in the context of unobservable firm heterogeneities, Baltagi (2008) discourages the use of the fixed effects model (particularly when the panel is a short panel). He suggests that the lagged dependent variable may end up being correlated with the error term, resulting in biased coefficients. Further, the coefficients of the lagged dependent variable obtained through FE estimation may have a downward bias (Nickell, 1981). To overcome these problems, Holtz-Eakin et al. (1988) proposed generalized method of moments (GMM) panel specifications, which were later popularised by Arellano and Bond (1991), Arellano and Bover (1995) and Blundell and Bond (1998). They suggest that the problems of endogeneity may be overcome by developing valid instruments, which will result in unbiased and consistent coefficients. It is worth mentioning that Arellano and Bond (1991) first-difference the panel data to remove the time-invariant fixed effect and show that the lagged dependent variables' values (levels) constitute legitimate instruments for the first-differenced variable, provided that the residuals are free from second-order serial correlation.
The validity of the instruments is gauged by using the Sargan test of over-identifying restrictions. Further, a test is conducted to ensure there is no serial correlation (of order two) among the transformed error terms.
The decision to use Arellano and Bond's (1991) 'difference GMM' for the current study is based on the findings of Larcker and Rusticus (2010) and Petersen (2009). They suggest that companies are unique in terms of their strengths and weaknesses. This can result in a scenario whereby disclosure and governance practices are jointly and dynamically determined by unobserved company-specific heterogeneities, such as managerial talent, corporate culture and complexity (Guest, 2009; Henry, 2008), which simple OLS regressions may be unable to detect (Gujarati, 2003). Wintoki et al. (2012) argue that firm performance and corporate governance are simultaneously determined by unobservable firm-specific factors, and that governance changes are determined by past, present and/or expected characteristics of the firm.
Hence, given the panel nature of the data and following past studies, the study proposes to use 'difference GMM' for estimating the various relationships.
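To illustrate the first-differencing logic behind difference GMM, the stylized Python sketch below simulates a dynamic panel and estimates it by instrumenting the lagged difference with a lagged level (an Anderson-Hsiao-style estimator, a simplified special case of Arellano-Bond). The full Arellano-Bond instrument set, the Sargan test and the serial-correlation diagnostics used in the paper require dedicated econometric software; all data, dimensions and parameter values here are simulated placeholders.

import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years = 200, 10
a_true, b_true = 0.4, 0.8

# Simulate a dynamic panel with firm fixed effects eta_i:
#   y_it = a*y_i,t-1 + b*x_it + eta_i + e_it
eta = rng.normal(size=n_firms)
x = rng.normal(size=(n_firms, n_years))
y = np.zeros((n_firms, n_years))
for t in range(1, n_years):
    y[:, t] = (a_true * y[:, t - 1] + b_true * x[:, t]
               + eta + rng.normal(scale=0.5, size=n_firms))

# First differences remove eta_i: dy_t = a*dy_{t-1} + b*dx_t + de_t.
dy, dx = np.diff(y, axis=1), np.diff(x, axis=1)
# For t = 3..T: response dy_t; regressors dy_{t-1} (endogenous) and
# dx_t; instruments y_{t-2} (a valid lagged level) and dx_t itself.
Y = dy[:, 2:].ravel()
X = np.column_stack([dy[:, 1:-1].ravel(), dx[:, 2:].ravel()])
Z = np.column_stack([y[:, 1:-2].ravel(), dx[:, 2:].ravel()])

# Exactly identified IV estimator: beta = (Z'X)^-1 Z'Y.
beta = np.linalg.solve(Z.T @ X, Z.T @ Y)
print(f"a-hat = {beta[0]:.3f} (true {a_true}), "
      f"b-hat = {beta[1]:.3f} (true {b_true})")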
Empirical evidence
It is evident from Table 2 that the mean index score for the period of study is about 65 per cent; this may be attributed to an increased focus on corporate governance and risk management. Further, an index score in the range of about 65 per cent is indicative of most (governance) parameters being in the range of 3 to 4 (out of 5) each. In other words, on average, Indian companies have a near-ideal index, based on the normative framework developed above.
In addition, a low standard deviation in the range of 8-9 per cent is suggestive of somewhat similar structures in the majority of companies. In sum, the Indian corporate sector appears to be mindful of the benefits of a strong governance structure. Companies seem to hold the belief that it is the governance structure and mechanism that will enable them to manage risks, endure difficulties and leverage opportunities. Interestingly, all the control variables are statistically significant in the context of ROE. In contrast to the results for ROA, age is negatively related with ROE, i.e. younger firms tend to generate higher returns for their equity shareholders.
Similarly, in terms of size, bigger firms seem to have higher ROE than smaller firms. Further, growth and leverage are negatively related to ROE. Just as in the case of ROA, recession has had a negative impact on ROE as well.
These surprising yet interesting findings call for a review of (regulatory) policies that prima facie seem to strengthen risk governance structures and facilitate effective and efficient decision making but in reality fail to yield the desired/intended results.
Concluding observations
The literature is rife with corporate governance studies, and various versions of corporate governance indices are available. But the construction of a risk governance index (as proposed in this study) is perhaps the first of its kind.
Indian companies have a decent risk governance structure, with mean index scores of about 65 per cent. The general view is that a good risk governance structure is pertinent for effective and efficient risk management and better firm performance, but the results indicate the contrary. To overcome the problem of endogeneity and simultaneity bias, the relationship between governance structure and firm performance has been gauged using a robust estimation technique, 'difference GMM'. The results show that both ROA and ROE are negatively related to the risk governance index, indicating that the better the governance structure, the poorer the firm performance. This is suggestive of rigid structures and/or inefficient decision making. Regulators need to take cognizance of the fact that the mere constitution of a risk management committee or appointment of a CRO is not going to ensure better risk management and improved firm performance.
The Directors, the CRO and the risk management committee should be competent and effective.
It is noteworthy that both ROA and ROE are significantly affected by the respective measures of the immediately preceding year. Further, firms' age, size, growth rate, leverage ratio and recession affect ROA and ROE in varying degrees.
In addition, the study is believed to have important implications for regulators, investors as well as for the management of companies.
In sum, better governance structures do not necessarily ensure better risk management and improved firm performance.
(9) Risk management committee
A risk management committee (RMC) has been defined as a committee that is charged with the responsibility for organisational risk, advising the Board on the firm's overall current and future risk appetite and risk strategy and the implementation of that strategy (FSB, 2013).

Table 10. Scoring in context of existence of a Risk management committee
Existence of a Risk management committee: No = 1; Yes = 5.
Eisenberg et al. (1998) presented evidence that smaller boards are more effective.

Table 1. Scoring for number of board of directors
Table 2. Scoring in relation to number of women directors on Board

(3) Proportion of executive directors
The existence of non-executive directors on the Board ensures independent judgement in times of potential conflict of interest. They are appointed to bring to the Board independence, impartiality, wide experience, special knowledge and personal qualities (Financial Stability Board, 2013).
Table 3. Scoring in relation to proportion of non-executive directors

(4) Executive/Non-executive chairman
The Higgs report (2003) outlines the duties of the Chairman. These include upholding standards of integrity and probity, promotion of communication between executive and non-executive directors, and coherent leadership of the company, to name a few. Therefore, in a bid to have transparency and fairness in the governance structure, it seems desirable to have a non-executive director as Board Chairman.
Table 4. Scoring in relation to executive/non-executive status of Chairman
Table 5. Scoring in relation to proportion of independent directors, with executive Chairman
Table 6. Scoring in relation to proportion of independent directors, with non-executive Chairman

(6) CEO duality
When the CEO also serves as the Chairman of the Board, the board's ability to fulfil its supervisory function is significantly reduced due to conflict of interests (Brickley et al., 1997). Further, Rechner and Dalton (1991) suggest that the absence of CEO duality facilitates effective monitoring of the activities of top management and results in a reduction in agency costs. Therefore, CEO non-duality is often preferred for strategic as well as operational reasons.

Exhibit 7. Scoring in context of CEO duality

(7) Chief risk officer (CRO)
The appointment of a CRO is often linked with the likely implementation of enterprise-wide risk management (ERM) (Beasely et al., 2005). Further, it is believed that the CRO will act as a supporting pillar in the development of risk management policies, frameworks and analysis.
Table 8. Scoring in context of appointment of a CRO
(8) Whistle blower policy
Implementation of a whistle blower policy and protection of whistle blowers have been advocated by several regulations and legislations worldwide (e.g. the Public Interest Disclosure Act, 1998, in the UK; the Sarbanes-Oxley Act, 2002, in the US). Whistle blowing at the right time may save the company from financial loss, scathing publicity or the costs of litigation (Rotschild and Miethe, 1999).
Table 9. Scoring in context of existence of a whistle blower policy
Table 11. Measures used
Table 12. Descriptive statistics of relevant variables
It is noteworthy that the sample adequately represents both young and old & established companies, with an average age of about 38 years for the sample companies. Similarly, the sample is representative in terms of company size. Further, given that these are India's top non-financial companies, an average growth rate of 13% over the period of study is not surprising. A leverage ratio of around 1.5 indicates that, on average, these sample companies rely more on outside liabilities than shareholders' funds. Both the dependent variables, measures of firm performance, have a mean of about 15 per cent. Though ROA (15.2%) is slightly higher than ROE (14.6%), it is markedly less dispersed among companies than ROE. This clearly implies that companies vary significantly in terms of their ability to generate returns for equity shareholders; nonetheless, companies have similar operating efficiencies.
Table 3 reveals that current levels of return on assets are significantly and positively affected by the return on assets realised in the immediately preceding year. Surprisingly, there is a negative relationship between the risk governance index and ROA. This implies that the better the governance structure, the poorer the firm performance. Though this relationship is not statistically significant, it has wide economic implications. This could possibly mean that seemingly good governance structures do not necessarily imply effective governance. In other words, good governance structures may not always translate into better decisions. It is noteworthy that Arora and Sharma (2016) found similar results in the Indian context. It is worth mentioning that a few control variables, such as age of firm, size of firm and the period of recession, seem to have a statistically significant impact on the ROA of Indian firms. On one hand, as firms survive more years they tend to gain more operating efficiency; on the other hand, as firms increase in size, they tend to exhibit lower ROA. Further, the recession seems to have adversely affected the firms' capacity to generate ROA.
Table 4 reveals that current levels of return on shareholders' equity are significantly and positively affected by the ROE observed in the immediately preceding year. Surprisingly, there is a negative relationship between the risk governance index and ROE. This implies that the better the governance structure, the lower the returns generated on funds provided by equity shareholders. Though this relationship, like that between ROA and governance structures, is not statistically significant, it has extensive economic implications. The negative relationship seems to suggest that the governance structures, though strong prima facie, are too rigid. These structures are probably acting as impediments to effective and efficient decision-making; they seem to be facilitating acceptance of safer non-yielding alternatives over risky but rewarding opportunities. In other words, these structures appear to propagate a culture of risk avoidance, possibly leading to the passing-on of risky yet potentially rewarding projects. The results are similar to those of Arora and Sharma (2016), who found that the ROE of Indian firms is not related to corporate governance indicators.
Table 3. Results of (Arellano-Bond) GMM estimation of ROA on the first lag of ROA, RGI and control variables. ***Significant at 1% level; **Significant at 5% level; *Significant at 10% level.
Table 4. Results of (Arellano-Bond) GMM estimation of ROE on the first lag of ROE, RGI and control variables. | 2019-05-30T23:46:03.357Z | 2018-09-05T00:00:00.000 | {
"year": 2018,
"sha1": "009bff0735776a7705dc0b8c7d2e94825650e63d",
"oa_license": "CCBYNC",
"oa_url": "https://systems.enpress-publisher.com/index.php/FSJ/article/download/942/605",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "38177f5ff57db95c8f4a1ed142de89c5a522e2f4",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
118453858 | pes2o/s2orc | v3-fos-license | Updated measurements of the dark matter halo masses of obscured quasars with improved WISE and Planck data
Using the most recent releases of WISE and Planck data, we perform updated measurements of the bias and typical dark matter halo mass of infrared-selected obscured and unobscured quasars, using the angular autocorrelation function and cosmic microwave background (CMB) lensing cross-correlations. Since our recent work of this kind, the WISE Allwise catalogue was released with improved photometry, and the Planck mission was completed and released improved products. These new data provide a more reliable measurement of the quasar bias and provide an opportunity to explore the role of changing survey pipelines in results downstream. We present a comparison of IR color-selected quasars, split into obscured and unobscured populations based on optical-IR colors, selected from two versions of the WISE data. Which combination of data is used impacts the final results, particularly for obscured quasars, both because of mitigation of some systematics and because the newer catalogue provides a slightly different sample. We show that Allwise data is superior in several ways, though there may be some systematic trends with Moon contamination that were not present in the previous catalogue. We opt currently for the most conservative sample that meet our selection criteria in both the previous and new WISE catalogues. We measure a higher bias and halo mass for obscured quasars ($b_{\textrm{obsc}} \sim 2.1$, $b_{\textrm{unob}} \sim 1.8$) --- at odds with simple orientation models --- but at a reduced significance ($\sim$1.5$\sigma$) as compared to our work with previous survey data.
INTRODUCTION
Large astronomical surveys over a wide range of wavelengths have led to a dramatic increase in public data that is mined by the community for studies of all kinds, especially systematic studies of large samples. Such surveys are generally multi-year efforts, involving multiple data releases as more observations are carried out, sky coverage is expanded, and reduction pipelines are improved. These changes can propagate through to the scientific results, and impact findings that need to be revisited. Further, the way that survey data are handled - samples selected, regions discarded/weighted, etc. - can shift results in important ways (e.g. Scranton et al. 2002; Myers et al. 2006; Huterer et al. 2006; Ross et al. 2011; Ho et al. 2012, 2015; Agarwal, Ho & Shandera 2014; Leistedt & Peiris 2014). Quasars, the luminous accreting supermassive black holes (SMBHs) in the nuclei of massive galaxies, are relatively rare objects seen primarily in the early Universe. Because of this rarity, their study has seen rapid improvements with the dramatic increase in sample sizes from astronomical surveys such as (to name a few) the Large Bright Quasar Survey (LBQS; Hewett, Foltz & Chaffee 1995), Faint Images of the Radio Sky at Twenty Centimeters (FIRST; Becker, White & Helfand 1995; Helfand, White & Becker 2015), the Sloan Digital Sky Survey (SDSS; York et al. 2000), and the 2dF QSO Redshift Survey (2QZ; Croom et al. 2004). Quasars are a key component to studying the growth of black holes over cosmic time (e.g. Elvis et al. 1994; Richards et al. 2006), as well as probing the potential links between those periods of growth, quasar host galaxies, and parent dark matter haloes (e.g. Hopkins et al. 2008; Booth & Schaye 2010; Volonteri, Natarajan & Gültekin 2011; Sabra et al. 2015).
Large samples of both spectroscopic and photometric quasars have permitted analysis of their large-scale distribution and clustering, which reflects their distribution in the underlying dark matter density field (the quasar "bias", b_q) and provides insight into the masses of their dark matter haloes. Studies of optically bright (unobscured, or type 1) quasars have revealed that they tend to reside in haloes of similar mass (∼3×10¹² h⁻¹ M⊙) across all redshifts (Porciani, Magliocchetti & Norberg 2004; Croom et al. 2005; Coil et al. 2007; Myers et al. 2007; da Ângela et al. 2008; Padmanabhan et al. 2009; Ross et al. 2009; Krumpe, Miyaji & Coil 2010; White et al. 2012; Shen et al. 2013; Eftekharzadeh et al. 2015). This suggests a link between black hole fueling and the growth of large-scale structure.
The dark matter haloes of quasars not only impact their spatial distribution, but also deflect the photons of the cosmic microwave background (CMB, which backlights the whole sky) traveling past them via gravitational lensing. Full-sky maps of the CMB have been steadily improving in depth and resolution, with the current state-of-the-art data being provided by the Planck satellite (Planck Collaboration et al. 2011). The lensing signature of large-scale structure has now been detected in numerous studies (Das et al. 2011; van Engelen et al. 2012; Planck Collaboration et al. 2014b, 2015b). Combined with estimates of the intrinsic CMB power spectrum (Seljak & Zaldarriaga 1999; Hu 2001), lensing maps of the CMB can trace the projected mass along a given line of sight back to the surface of last scattering at z ∼ 1100.
CMB lensing measurements are a particularly powerful tool for studying the haloes of quasars, which peak in number density at z ∼ 2 (Croom et al. 2004;Richards et al. 2005;Fan et al. 2006), coinciding with the peak of the CMB lensing kernel. Additionally, CMB lensing measurements are subject to different systematics than clustering, providing independent follow up to such studies. The first significant detection of a cross-correlation of unobscured quasars and the CMB lensing convergence found a typical halo mass in agreement with clustering results (Sherwin et al. 2012).
However, a subset of the quasar population has remained largely hidden from study because their optical (and in more extreme cases even X-ray) light is dramatically diminished by intervening gas and dust. The existence of these obscured (type 2) quasars has been known for some time (Setti & Woltjer 1989;Comastri et al. 1995), but only recently have large IR datasets from Spitzer (Werner et al. 2004) and the Wide-Field Infrared Survey Explorer (WISE; Wright et al. 2010) allowed detailed study of their demographics (Lacy et al. 2004;Stern et al. 2005;Hickox et al. 2007;Mateos et al. 2013;Stern et al. 2012;Assef et al. 2013;Lacy et al. 2013;Assef et al. 2015;Lacy et al. 2015). However, the nature of the obscuration in these sources is still unclear (e.g. Netzer 2015), with two prominent models being geometric obscuration by a dusty torus ("unification by orientation", well supported at low-L and low-z; e.g. Antonucci 1993) or larger, galaxy-scale obscuration (e.g. Goulding et al. 2012) that may be a product of an evolutionary sequence (Sanders et al. 1988;Hopkins et al. 2008;Croton 2009;Booth & Schaye 2010).
A simple test of orientation models for quasars is to compare their dark matter haloes. If obscured quasars are simply unobscured quasars seen from a dustier line of sight, such as through a torus, then they should have the same halo mass, on average. Some evolutionary scenarios, however, predict a difference in halo mass between the subclasses as the halo and black hole grow as a product of major galaxy mergers. Hickox et al. (2011), Donoso et al. (2014), and DiPompeo et al. (2014, hereafter D14) measured the halo masses of IR-selected quasar samples split into obscured and unobscured populations via their optical-IR colors (Hickox et al. 2007). All found that obscured quasars seem to cluster more strongly, and thus reside in higher mass haloes, though the levels of significance varied considerably. DiPompeo et al. (2015b, hereafter D15) followed up on D14 by cross-correlating the WISE-selected quasar samples with a Planck CMB lensing map, and found excellent agreement with the clustering results.
However, there have been other recent studies that suggest no difference in the bias and halo masses of obscured and unobscured quasars. Geach et al. (2013) cross-correlated WISE-selected quasars with a CMB map from the South Pole Telescope (and Planck as well), and found that the bias of obscured and unobscured quasars was roughly consistent. Mendez et al. (2015) used a spectroscopic sample over a reduced area (∼10 deg²) that benefits from individual source redshifts (instead of relying on an estimate of the ensemble average; see section 2.2) compiled from several fields, and find no significant difference between obscured and unobscured halo masses. While the additional redshift information assures that the evolution of the bias with z does not skew the results, D15 illustrated that their mean redshift estimates would need to be offset by an unreasonably large amount to fully account for the difference they measured. Clearly the relative halo masses of obscured and unobscured quasars are not yet conclusively determined, and so we follow up here on the work of D14 and D15 using updated data from both WISE and Planck. We find that the measurements using the new and old data in various combinations can produce some significant variation in the results. Our goal here is to provide both a quantitative analysis of the difference between the samples selected from the original and new catalogues, as well as to identify the most reliable set of measurements. We then provide an updated measurement of the obscured and unobscured quasar bias based on the best possible sample.
Allsky and Allwise catalogues
The WISE mission mapped the entire sky at 3.4, 4.6, 12, and 22 µm (W1, W2, W3, and W4) with angular resolutions of 6.1, 6.4, 6.5, and 12 arcsec, respectively (Wright et al. 2010). The survey reached at least 0.08, 0.11, 1, and 6 mJy 5σ point-source sensitivities in each band (in unconfused regions), with this depth increasing toward the ecliptic poles due to the observing strategy. The WISE full cryogenic mission phase in 2010 resulted in the Allsky (AS) data release. The AS source catalogue includes objects with SNR > 5 in any band, at least five good measurements, and not flagged as spurious in at least one band.
After both cryogen tanks were exhausted, the NEOWISE Post-Cryogenic Mission surveyed the entire sky again using the two shorter-wavelength bands, W1 and W2 (Mainzer et al. 2011). Combining data from the original WISE survey with the NEOWISE data, along with improved reduction and calibration pipelines, led to the updated Allwise (AW) catalogue release, with improved photometric sensitivity and accuracy, better astrometric precision, and new information on source motions and variability.
In this work, our goal is to update the analyses of D14 and D15, which used the WISE AS catalogue to select quasars and measure their bias and host halo masses, by using the improved AW catalogue (sections 4.1 and 4.2). We will also provide a detailed comparison of the objects selected as quasars from the two catalogues (section 4.4).
Quasar selection
WISE is ideal for identifying quasars via their characteristic hot dust emission, which causes a rising power-law spectrum in the mid-IR, while stellar populations tend to peak around 1.5 µm (Lacy et al. 2004; Stern et al. 2005, 2012; Donley et al. 2007; Mateos et al. 2013; Assef et al. 2013). Critically, mid-IR selection is efficient at identifying both optically luminous unobscured and optically faint obscured quasars, the latter of which may make up around half of the full quasar population (Hickox et al. 2007; Assef et al. 2015).
Various methods and criteria using mid-IR data have been developed to select quasars (and lower luminosity active galactic nuclei, e.g. Lacy et al. 2004; Stern et al. 2005; Donley et al. 2012; Mateos et al. 2012; DiPompeo et al. 2015a; Myers et al. 2015), and used in an array of studies of the IR-selected quasar population (e.g. Hickox et al. 2011; Goulding et al. 2014; Smith, Koss & Mushotzky 2014; Satyapal et al. 2014; Ellison, Patton & Hickox 2015; Mendez et al. 2015). However, even a simple color cut of W1 − W2 > 0.8 (along with W2 < 15.05, the 10σ flux limit in this band, which helps reduce contamination from high-redshift star-forming galaxies as well as faint stars) identifies quasars at 80 per cent completeness with a contamination rate of only 5 per cent (Stern et al. 2005, 2012). These cuts have also been used to successfully study unobscured and obscured quasars (Donoso et al. 2014; D14; D15), and we adopt them here. WISE photometry is not corrected for Galactic extinction, as the rapidly dropping near-IR extinction curves of Fitzpatrick & Massa (2009) show that this will affect WISE minimally, and we avoid the Galactic plane where extinction is more prevalent (see below).
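For concreteness, a minimal Python sketch of these selection cuts is given below; the catalogue column names are hypothetical placeholders for the corresponding Allwise columns.

import pandas as pd

def select_wise_quasars(cat: pd.DataFrame) -> pd.DataFrame:
    """Stern et al. (2012) two-band cuts: W1 - W2 > 0.8 (Vega) and
    W2 < 15.05 (the 10-sigma W2 limit)."""
    keep = ((cat["w1"] - cat["w2"]) > 0.8) & (cat["w2"] < 15.05)
    return cat[keep]

def in_footprint(cat: pd.DataFrame) -> pd.DataFrame:
    """Restrict to 135 < RA < 226 and 1 < Dec < 54 (degrees)."""
    keep = ((cat["ra"] > 135) & (cat["ra"] < 226)
            & (cat["dec"] > 1) & (cat["dec"] < 54))
    return cat[keep]

# Toy example: only the first source passes both cuts.
cat = pd.DataFrame({"ra": [180.0, 200.0], "dec": [30.0, 40.0],
                    "w1": [14.2, 15.0], "w2": [13.1, 14.8]})
print(select_wise_quasars(in_footprint(cat)))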
To provide a direct comparison between quasar selection with AW and AS and the resulting effect on bias measurements, we restrict our sample to the same region as Donoso et al. (2014), D14, and D15: 135° < RA < 226° and 1° < Dec < 54°. This area is sufficiently far from the Galactic plane to limit high stellar densities and the majority of Galactic reddening, but is also not affected by depth changes and source confusion in WISE. An extension beyond this footprint to utilize the full potential of millions of quasars in WISE (e.g. Secrest et al. 2015), while properly handling the selection function across the whole sky, is reserved for a future paper.
In the AW catalogue, 225,303 objects satisfy the selection criteria within this footprint. This is a reduction of about 10% from the AS catalogue, which had 250,163 objects satisfying these cuts.
Cleaning the data
An accurate data mask is necessary to properly handle the normalization of the angular autocorrelation function by comparison with a randomly distributed sample, as well as to remove pixels from the Planck maps where quasar data are discarded. D14 highlighted the importance of proper masking of the WISE data, and by more conservatively removing regions around flagged WISE data found a notable drop in the IR-selected obscured quasar bias compared to Donoso et al. (2014). We develop a similar mask for the AW data here, using the spherical cap utility MANGLE (Hamilton & Tegmark 2004; Swanson et al. 2008). Full details of these components can be found in D14, unless a change is mentioned here: (i) Regions with high Galactic extinction in the g-band (Ag > 0.18).
(ii) WISE Atlas tiles (the main imaging product of WISE) with contamination from the Moon. We mask tiles with moon_lev > 1 in W4. Using W4 makes the mask more conservative, as the longer wavelength bands can be affected by scattered light as far away from the Moon as ∼30°, while the shorter bands used for selection can be affected to ∼10°. The new values of moon_lev in the updated Atlas tiles are used (and are a major source of difference between the two masks; see Figure 1).

(iii) Regions around sources flagged in the WISE catalogues as spurious detections or image artifacts (e.g. diffraction spikes and haloes around bright stars).
(iv) Regions with poor photometric quality based on the WISE ph_qual flags, which were not included in the previous AS mask. We pixelize the sky using HEALPIX (Górski et al. 2005) with nside = 64 (pixel areas of ∼0.8 deg²), and mask any pixel that has more than one object with photometric quality not set to 'A' (SNR ≥ 10) in W1 and W2; a sketch of this cut follows the list below. This procedure is tuned based on the WISE data over the full sky, primarily to remove prominent strips of low-quality WISE data, but does remove some regions in the footprint considered here. We do not discard other objects with lower quality, but these only make up a small fraction of our sample (< 1%), due to the W2 < 15.05 cut.
(v) The SDSS bright star mask, which masks circular regions around bright stars from the Yale and Tycho-2 bright star catalogues (Warren & Hoffleit 1987; Høg et al. 2000). The vast majority of bright stars are already masked well by the WISE flagged data mask in (iii).
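A minimal Python sketch of the photometric-quality cut in item (iv) is given below, assuming arrays of source coordinates and per-band ph_qual values; the argument names are illustrative, and the combination of the two bands (flagging a source that fails in either band) is one reading of the cut described above.

import healpy as hp
import numpy as np

def bad_quality_pixels(ra_deg, dec_deg, ph_qual_w1, ph_qual_w2, nside=64):
    """Return HEALPix pixel indices to mask: pixels containing more
    than one source whose ph_qual is not 'A' in W1 or W2."""
    theta = np.radians(90.0 - np.asarray(dec_deg))   # colatitude
    phi = np.radians(np.asarray(ra_deg))
    pix = hp.ang2pix(nside, theta, phi)
    bad = ((np.asarray(ph_qual_w1) != "A")
           | (np.asarray(ph_qual_w2) != "A"))
    counts = np.bincount(pix[bad], minlength=hp.nside2npix(nside))
    return np.where(counts > 1)[0]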
The final region, after all masking is completed, has an area of 3422 deg², and contains 175,911 quasars from the AW catalogue (Table 1). The sample distribution on the sky using this mask is shown in the top panel of Figure 1. The usable area and sample size are similar to what was found based on the AS sample (3,338 deg², 180,606 quasars), but the sample is distributed somewhat differently on the sky - the middle panel of Figure 1 shows the objects that are not masked by the AW mask, but fall within the AS mask of D14. The majority of the difference is due to the Moon level between the two catalogues, as it is clear that the strips contaminated by the Moon in AS are wider than in AW.
We explore the impact of our masks on our bias measurements by applying various combinations to the data, including the conservative cases of applying both the AW and AS masks. These samples with both masks applied will be labeled with an asterisk (*) throughout, and the Allwise* sample is shown in the bottom panel of Figure 1. The sample labeled "both" contains sources that satisfy our selection criteria in both the AW and AS catalogues, and these make up 92.5% of the full AW* sample. Table 1 summarizes these samples.

[Figure 1. Top: the distribution on the sky of Allwise-selected quasars using the Allwise mask. Center: Allwise-selected quasars that fall within the mask generated for the Allsky data. Bottom: the distribution on the sky of Allwise-selected quasars after applying both masks.]

Obscured and Unobscured Quasars
Hickox et al. (2007) used multiwavelength data in the Böotes field to demonstrate that an optical-IR color cut at R − [4.5] = 6.1 (Vega magnitudes) can robustly separate obscured and unobscured quasar populations. Donoso et al. (2014), D14, and D15 used AS W2 and SDSS r-band fluxes in a similar way to study these populations, with r − W2 = 6 separating the samples. We do the same using AW W2 magnitudes here.
The SDSS completely covers our footprint, and reaches 50% completeness at r = 22.6 (York et al. 2000; Abazajian et al. 2009). As in D14/D15, we utilize the SDSS pipeline psfmag values, to isolate as much as possible the contribution from the quasar, as compared to that from the host galaxy, in resolved sources. D14 determined that the use of SDSS modelMags did not affect results significantly, and given the general similarity in the morphologies of AW-selected quasars (see below) there is no reason to believe this will change here. The SDSS r-band magnitudes are corrected for Galactic extinction using extinction values supplied with the SDSS data (based on the dust maps of Schlegel, Finkbeiner & Davis 1998), and converted from AB magnitudes to Vega using m_r,AB = m_r,Vega + 0.16 (Blanton & Roweis 2007).
For the obscured and unobscured samples (but not the total IR-selected sample) the SDSS bad fields mask is applied (e.g. White et al. 2011, 2012). This reduces the AW area to 3,387 deg² and the total number of sources to 173,834. We match the AW-selected quasars to SDSS sources with a radius of 2 arcseconds, only keeping objects with 15 < r < 25, and find matches for 83%, the same as was found for AS-selected quasars. Objects without SDSS matches are placed in the obscured sample, resulting in 72,587 (41.9%) obscured and 101,247 (58.1%) unobscured quasars.
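A minimal sketch of this split, assuming extinction-corrected SDSS r (AB, psfMag) and Allwise W2 (Vega) magnitudes, with unmatched sources assigned to the obscured sample (placing the boundary color itself in the obscured bin is one reading of the cut):

import numpy as np

def classify_obscuration(r_psf_ab, w2_vega):
    """Split at r - W2 = 6 (Vega); NaN r (no SDSS match within
    2 arcsec) is assigned to the obscured sample."""
    r_vega = np.asarray(r_psf_ab, dtype=float) - 0.16   # AB -> Vega
    color = r_vega - np.asarray(w2_vega, dtype=float)
    labels = np.where(color >= 6.0, "obscured", "unobscured")
    return np.where(np.isnan(color), "obscured", labels)

print(classify_obscuration([18.0, 21.5, np.nan], [13.0, 13.5, 14.0]))
# -> ['unobscured' 'obscured' 'obscured']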
Of the objects with an SDSS counterpart, 68% are unresolved (based on the objc_type keyword; Stoughton et al. 2002), marginally higher than the 65% in the AS-selected sample of D14 and significantly higher than the 55% of Donoso et al. (2014). Note however that Donoso et al. (2014) used deeper optical imaging in the COSMOS field to classify source morphologies, which partially explains the lower unresolved fraction. Broken down into subclasses, 36% and 81% of the obscured and unobscured sources are unresolved, respectively, broadly consistent with D14. We note that the additional application of the AS mask to the AW data does not change these ratios.
As in D14/D15, to estimate the redshift distribution (dN/dz) of the AW-selected quasars, which is necessary to understand and interpret the bias measurements, we apply our mask and selection criteria to objects in the Böotes field. This field has been observed in multiple photometric bands, and has extensive follow-up spectroscopy (Brodwin et al. 2006; Hickox et al. 2011; Kochanek et al. 2012). We find 368 sources with AW data that satisfy our selection in this field, with 145 (39.4%) and 223 (60.6%) obscured and unobscured, respectively. These fractions are consistent with those in our overall sample. All of the unobscured sources and all but two of the obscured sources have spectroscopic redshifts, and the distributions are shown in Figure 2. The mean/median/standard deviations of these distributions are 1.02/0.97/0.56 (total), 0.98/0.90/0.54 (obscured), and 1.05/1.04/0.58 (unobscured), consistent with what is found for AS-selected samples.
We point out that the AW and AS masks do not significantly affect the Böotes region, and so potential differences in the redshift distribution of sources masked in one catalogue and not the other are not accounted for. This will be analysed further in section 4.4.1. Finally, it is worth noting that the AW- and AS-selected samples have very similar redshift distributions, despite differences in their average photometric properties (see sections 4.4.1 and 4.4.2).
Planck CMB Lensing Maps
The Planck mission (Planck Collaboration et al. 2011) mapped the CMB at nine frequencies, from 30 to 857 GHz, over the entire sky. The first data release (DR1) in March 2013 (with an update in December 2013) was based on the nominal mission data with 15 months of observations (Planck Collaboration et al. 2014a) and included a lensing potential (φ) map with a lensing signature detected at an overall significance >25σ (Planck Collaboration et al. 2014b). This map was cross-correlated with the quasar density by Geach et al. (2013) and D15.

[Table 1. Summary of the areas (in deg²) and number N of each subsample using the two WISE catalogues and various masks. Note that when samples are split into obscured/unobscured, the additional SDSS bad fields mask is applied, reducing the sample size and area further. An asterisk indicates a sample with the masks from both Allsky and Allwise applied, and "both" indicates only objects that meet the Stern et al. (2012) quasar selection criteria in both Allsky and Allwise. The second half gives the same information but after also applying the Planck mask and discarding any partially masked HEALPix nside = 2048 pixels, to limit errors in estimating the used area of partial pixels in the quasar density calculation. Obscured/unobscured percentages are given in the relevant rows - the fractions are very consistent across the samples.]

In 2015 a second data set (DR2), using four years of data and improved reduction and calibration pipelines, was released (Planck Collaboration et al. 2015a). This release included an updated map of the lensing potential, with a lensing signal detected at >40σ (Planck Collaboration et al. 2015b). These DR2 data are the highest quality all-sky CMB maps to date, with sensitivities down to µK and, at the frequencies where most of the lensing information is carried (143 GHz and 217 GHz), resolutions of 7 and 5 arcmin.
Using the HEALPIX routine ISYNFAST, we convert the Planck DR2 "alm" (A_lm) file (http://irsa.ipac.caltech.edu/data/Planck/release_2/all-sky-maps/), which contains the coefficients of the spherical harmonic transform of the data on the sky, to a HEALPIX map of the lensing convergence (κ = ∇²φ/2) with nside = 2048, which has ∼3 arcmin² pixels. In Figure 3 we show the distribution of both the raw and 1° Gaussian-smoothed κ in our region of interest. The updated DR2 distribution is narrower, and has a peak marginally more consistent with zero, which is expected in the absence of systematic effects.
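The harmonic-space conversion can be sketched with healpy as below; the filename is a placeholder, and we assume the distributed coefficients are those of the potential φ (if convergence coefficients are provided directly, only the alm2map step is needed). In harmonic space the Laplacian maps φ_lm to [l(l+1)/2] φ_lm (up to the sign convention adopted for the convergence).

import healpy as hp
import numpy as np

nside = 2048
phi_alm = hp.read_alm("planck_dr2_phi_alm.fits")   # placeholder filename
lmax = hp.Alm.getlmax(len(phi_alm))
ell = np.arange(lmax + 1)

# kappa_lm = [l(l+1)/2] phi_lm in harmonic space.
kappa_alm = hp.almxfl(phi_alm, ell * (ell + 1) / 2.0)
kappa_map = hp.alm2map(kappa_alm, nside)

# Optional 1-degree Gaussian smoothing, as used for the Figure 3
# visualization:
kappa_smooth = hp.smoothing(kappa_map, fwhm=np.radians(1.0))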
METHODS
Our goal is to measure the quasar bias and infer dark matter halo masses using the quasar data alone, as well as to cross-correlate the quasar data with CMB lensing maps. We also explore the WISE AS and AW catalogues for possible sources of contamination, to identify which combination of data and masks is the most reliable. In this section we outline the formalisms for these measurements.
Angular autocorrelations
All of the codes used to calculate the angular autocorrelation functions, including models and bias fitting, are available at https://github.com/mdipompe/angular_clustering.
Dark matter autocorrelation function theory
We utilize the two-point angular (as opposed to real-space, as our sources lack individual redshift measurements) correlation function ω(θ) to analyze how WISE quasars cluster around themselves. The angular correlation function is related to the probability that a given pair of objects (dark matter haloes hosting quasars, in our case) with mean number density n, separated by a projected angular distance θ, are within a solid angle dΩ (Totsuji & Kihara 1969; Peebles 1980):

dP = n[1 + ω(θ)] dΩ.    (1)

Objects formed in the peaks of a Gaussian random field, such as massive, quasar-hosting galaxies, should cluster more strongly than the underlying typical dark matter distribution (Kaiser 1984; Bardeen et al. 1986). This excess, or bias, is independent of scale in most models, at least on large scales. The bias of quasars b_q is related to the underlying dark matter autocorrelation by ω_q(θ) = b_q² ω_dm(θ). To calculate ω_dm(θ), we first generate the nonlinear matter power spectrum P(k, z) using CAMB (Code for Anisotropies in the Microwave Background, http://lambda.gsfc.nasa.gov/toolbox/tb_camb_ov.cfm; Lewis, Challinor & Lasenby 2000). We note that this procedure is updated from D14, for more consistency with the CMB lensing cross-correlation measurements.
For angular scales θ << 1 radian, as we probe here, we can project the matter power spectrum to an angular autocorrelation in a flat Universe via Limber's approximation (Limber 1953; Peebles 1980; Peacock 1991):

ω_dm(θ) = π ∫∫ (∆²(k, z)/k²) J₀(kθχ) (dN/dz)² (dz/dχ) dk dz.    (2)

We use our IDL wrapper for CAMB (CAMB4IDL), which can be found at https://github.com/mdipompe/camb4idl. Our initialization file for CAMB is available with the other autocorrelation codes linked above, as well as code to parse the CAMB output and generate the model autocorrelation.
In this equation, ∆²(k, z) = (k³/2π²) P(k, z) is the dimensionless power spectrum, J₀ is the zeroth-order Bessel function of the first kind, χ is the comoving distance along the line of sight, dN/dz is the normalized redshift distribution (taken from a spline fit to the distribution of the appropriate subsample shown in Figure 2; see section 5.2), and dz/dχ = H_z/c = (H₀/c)[Ω_m(1+z)³ + Ω_Λ]^(1/2). This model of ω_dm(θ) can then be rescaled to fit the estimated quasar autocorrelation and measure the effective bias, or the bias integrated over our redshift range. We will explore the effect of other bias models that evolve with z in the discussion section.
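A numerical sketch of this projection using the CAMB Python interface is shown below; the cosmological parameter values and grid resolutions are placeholders, and simple quadrature stands in for the integrator used in the released IDL code.

import numpy as np
import camb
from scipy.special import j0

# Placeholder cosmology; the paper's actual parameter file ships
# with the released code.
pars = camb.set_params(H0=70.0, ombh2=0.0226, omch2=0.112, ns=0.96)
PK = camb.get_matter_power_interpolator(
    pars, nonlinear=True, kmax=10.0, zmax=4.0,
    hubble_units=False, k_hunit=False)
bg = camb.get_background(pars)

z = np.linspace(0.01, 4.0, 200)          # redshift grid
k = np.logspace(-3, 1, 400)              # wavenumber grid [1/Mpc]
chi = bg.comoving_radial_distance(z)     # comoving distance [Mpc]
Hz = bg.hubble_parameter(z)              # H(z) [km/s/Mpc]
dz_dchi = Hz / 299792.458                # dz/dchi = H(z)/c [1/Mpc]

def omega_dm(theta_rad, dn_dz):
    """Equation (2) by direct quadrature; dn_dz is a callable
    (e.g. a spline fit to Figure 2) normalized to unit integral."""
    weight = dn_dz(z) ** 2 * dz_dchi
    total = 0.0
    for i, zi in enumerate(z):
        # Dimensionless power spectrum: Delta^2 = k^3 P(k,z)/(2 pi^2)
        delta2 = k ** 3 * PK.P(zi, k) / (2.0 * np.pi ** 2)
        inner = np.trapz(delta2 / k ** 2 * j0(k * theta_rad * chi[i]), k)
        total += weight[i] * inner
    return np.pi * total * (z[1] - z[0])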
Estimating ωqq(θ)
We estimate the quasar autocorrelation ω_qq(θ) by comparing the number counts of quasar pairs in annuli of increasing radii with what is expected for a random distribution (Landy & Szalay 1993):

ω_qq(θ) = [DD(θ) − 2DR(θ) + RR(θ)] / RR(θ).    (3)

In this estimator, DD, DR, and RR are the normalized numbers of data-data, data-random, and random-random pairs in each bin of θ (i.e. DD = DD(θ) = N_data pairs/(N_D N_D)). The random sample must follow the same angular selection function as the data, which is simple in our case because the WISE selection is uniform over this field with holes described by our mask. We generate a random catalogue that obeys our mask using the MANGLE function RANSACK. The random catalogue for each measurement is always at least 10 times the size of the data set so that the random counts do not limit the statistical precision. We calculate ω_qq using four bins per dex, beginning at ∼12 arcsec and extending to 1.1°. Errors on the quasar autocorrelations are estimated using inverse-variance-weighted jackknife resampling (e.g. Scranton et al. 2002; Myers et al. 2005, 2007). We divide our full footprint into N = 16 equal-area regions, build N subsamples by iteratively removing a single region, and repeat the autocorrelation measurement using the remaining regions. Denoting each subsample by L, the inverse-variance-weighted covariance matrix C_ij = C(θ_i, θ_j) (i and j denote angular size bins) is

C_ij = Σ_L √[RR_L(θ_i)/RR(θ_i)] [ω_L(θ_i) − ω(θ_i)] √[RR_L(θ_j)/RR(θ_j)] [ω_L(θ_j) − ω(θ_j)],    (4)

where ω is the angular autocorrelation for all of the quasars and ω_L is the angular autocorrelation for subset L. Note here that the RR terms are not normalized by the sizes of the random samples, and account for the different number of counts in each region (though these are very close to the same, given the equal area of each region). The jackknife errors σ_i are taken from the square root of the diagonal elements of the covariance matrix, though the full matrix is used when performing fits to measure the bias.
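A compact Python sketch of the estimator using kd-tree pair counts (a stand-in for the pair-counting code released with the paper) is:

import numpy as np
from scipy.spatial import cKDTree

def radec_to_xyz(ra_deg, dec_deg):
    """Unit vectors on the sphere; angular separations then map to
    chord lengths."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])

def landy_szalay(data_ra, data_dec, rand_ra, rand_dec, theta_edges_deg):
    """Equation (3). theta_edges_deg must start above zero so that
    the self-pairs at zero separation drop out of the bin diffs."""
    d = cKDTree(radec_to_xyz(data_ra, data_dec))
    r = cKDTree(radec_to_xyz(rand_ra, rand_dec))
    chord = 2.0 * np.sin(np.radians(theta_edges_deg) / 2.0)
    # count_neighbors returns cumulative ordered-pair counts within
    # each edge; np.diff converts these to per-bin counts.
    dd = np.diff(d.count_neighbors(d, chord)) / 2.0   # unordered pairs
    dr = np.diff(d.count_neighbors(r, chord))
    rr = np.diff(r.count_neighbors(r, chord)) / 2.0
    nd, nr = len(data_ra), len(rand_ra)
    DD = dd / (nd * nd)          # normalized counts, as in the text
    DR = dr / (nd * nr)
    RR = rr / (nr * nr)
    return (DD - 2.0 * DR + RR) / RR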
Cross-correlations with systematics
In an ideal data set, the quasar density will not correlate with any observational systematics, but in practice such effects can impact target selection. Below we will cross-correlate WISE AS- and AW-selected quasars with various parameters, to quantify whether one catalogue is superior in this regard, using a pixelization method as in e.g. Scranton et al. (2002). Due to the relatively low density of sources (∼50 deg⁻², less for the obscured and unobscured subsamples), we must use somewhat large pixels of 0.66 degrees on a side (0.44 deg² in area), and are thus limited to exploring cross-correlations on scales of this size or larger. Splitting up our region results in 9360 pixels, with an average of ∼20 sources per pixel. The available area of each pixel is calculated by randomly populating it with 1000 sources, applying our mask, and multiplying the full area by the fraction of random points outside the mask. Only pixels with at least 0.05 deg² of available area and at least five sources (one in the case of obscured and unobscured sub-samples) are used.
For each pixel $i$, the relative density of quasars is calculated as

$$\delta_i^q = \frac{\rho_i^q - \bar{\rho}^q}{\bar{\rho}^q},$$

where $\bar{\rho}^q$ is the mean quasar density for the whole field. The relative value of systematics $\delta_i^s$, including $W1$ and $W2$ magnitudes, $W1 - W2$ color, Galactic reddening $A_g$, and the WISE Moon level (recall that any region with Moon level > 1 is already discarded), is calculated in a similar way for each pixel, using the values of these parameters at the location of each object. Cross-correlations are then calculated as

$$\omega^{qs}(\theta) = \frac{\sum_{ij} \delta_i^q \, \delta_j^s \, \Theta_{ij}}{\sum_{ij} \Theta_{ij}},$$

where $\Theta_{ij} = 1$ if the separation between the centers of pixels $i$ and $j$ falls within the bin $\theta$, and zero otherwise (the denominator is then a normalization equal to the number of pixel pairs satisfying this criterion). The angular binning here is also four bins per dex, beginning at 0.6° (the smallest scale possible given the pixel size) and extending to 10°. Errors on the angular correlation functions using this method are also estimated via jackknife resampling. In this case, the region is split into $N = 25$ regions, and for a given iteration in the covariance matrix calculation (using Equation 4) any pixel falling within this subregion is excluded. The square root of the diagonal elements of the covariance matrix are adopted as the 1σ errors.
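For reference, a schematic flat-sky version of this pixel-pair estimator is sketched below; it uses brute-force pair enumeration for clarity rather than speed, ignores the per-pixel available-area correction described above, and all names are illustrative:

```python
import numpy as np

def relative_density(values):
    """delta_i = (x_i - mean) / mean for any per-pixel quantity."""
    xbar = np.mean(values)
    return (values - xbar) / xbar

def pixel_cross_corr(delta_q, delta_s, pix_xy, theta_edges):
    """w(theta) = sum_ij dq_i ds_j Theta_ij / sum_ij Theta_ij.
    pix_xy: (npix, 2) pixel-center coordinates in degrees (flat-sky)."""
    ii, jj = np.triu_indices(len(delta_q), k=1)
    sep = np.hypot(*(pix_xy[ii] - pix_xy[jj]).T)
    # Symmetrize the product so the cross-correlation is order-independent.
    prod = 0.5 * (delta_q[ii] * delta_s[jj] + delta_q[jj] * delta_s[ii])
    which = np.digitize(sep, theta_edges) - 1
    w = np.zeros(len(theta_edges) - 1)
    for b in range(len(w)):
        sel = which == b
        if sel.any():
            w[b] = prod[sel].mean()
    return w
```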
We also use Equation 6 to rapidly calculate the quasar autocorrelation on large scales (>1 • ), where the pair counting method becomes computationally expensive.
CMB Lensing Correlations
All of our code to work with the CMB lensing maps and perform the CMB cross-correlations with HEALPIX maps can be found at https://github.com/mdipompe/lensing_xcorr.
CMB lensing-matter cross-correlation theory
The quasar bias $b_q$ can also be measured by comparing the cross-correlation of the quasar density with the CMB lensing convergence ($C_l^{\kappa q}$) with theoretical predictions for a given matter distribution. This formalism is detailed fully elsewhere (Bleem et al. 2012; Sherwin et al. 2012), and we provide a brief summary here.
The lensing convergence $\kappa$ in comoving coordinates $\chi$ along a line of sight $\hat{n}$ is the integral over the relative over-density of matter $\delta(\chi\hat{n}, z)$ multiplied by the lensing kernel $W^\kappa$:

$$\kappa(\hat{n}) = \int_0^{\chi_{\rm CMB}} W^\kappa(\chi)\, \delta(\chi\hat{n}, z(\chi))\, d\chi.$$

The lensing kernel (Cooray & Hu 2000; Song et al. 2003) is

$$W^\kappa(\chi) = \frac{3}{2}\,\Omega_m \left(\frac{H_0}{c}\right)^2 \frac{\chi}{a(\chi)}\, \frac{\chi_{\rm CMB} - \chi}{\chi_{\rm CMB}},$$

where $a(\chi) = (1 + z(\chi))^{-1}$ is the scale factor, and $\chi_{\rm CMB}$ is the comoving distance to the CMB. Fluctuations in the quasar density are given by

$$q(\hat{n}) = \int W^q(\chi)\, \delta(\chi\hat{n}, z(\chi))\, d\chi,$$

where $W^q(\chi)$ is the quasar host distribution kernel:

$$W^q(\chi) = b_q\, \frac{dz}{d\chi}\, \frac{dN}{dz}.$$

Here, $dN/dz$ is the normalized redshift distribution of the quasar population (again estimated using a spline interpolation; see section 5.2), which has bias $b_q$, assumed here to be independent of redshift. The cross-power at a Fourier mode $l$ is

$$C_l^{\kappa q} = \int \frac{d\chi}{\chi^2}\, W^\kappa(\chi)\, W^q(\chi)\, P\!\left(k = \frac{l}{\chi},\, z(\chi)\right),$$

where $P(k = l/\chi, z)$ is the matter power spectrum (e.g. Eisenstein & Hu 1999), again generated using CAMB. Equation 11 gives us the model cross-power spectrum for the underlying distribution of matter when the effective bias is unity (again, we will explore other models of the bias in the discussion section). We note that the current defaults of CAMB have been slightly updated since D15, and these result in a matter power spectrum with slightly higher amplitude that propagates to the final model, resulting in a lower bias measurement. A CAMB parameter file is included with our supplied code to aid with consistent future measurements.
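A minimal numerical sketch of this Limber integral (again, not the released code linked above) might look as follows; `z_of_chi`, `dndz_func`, `dz_dchi_func`, and `pk_interp` stand in for precomputed, vectorized interpolators, and the elementwise evaluation of the power spectrum is an assumption about how `pk_interp` behaves:

```python
import numpy as np
from scipy.integrate import simpson

def lensing_kernel(chi, z_of_chi, chi_cmb, H0=70.0, Om=0.3, c=2.99792458e5):
    """W_kappa(chi) = (3/2) Om (H0/c)^2 (chi/a) (chi_cmb - chi) / chi_cmb."""
    a = 1.0 / (1.0 + z_of_chi(chi))
    return 1.5 * Om * (H0 / c) ** 2 * chi / a * (chi_cmb - chi) / chi_cmb

def quasar_kernel(chi, z_of_chi, dndz_func, dz_dchi_func, bq=1.0):
    """W_q(chi) = b_q (dz/dchi) dN/dz, with dN/dz normalized to unit area."""
    z = z_of_chi(chi)
    return bq * dz_dchi_func(z) * dndz_func(z)

def cl_kappa_q(ells, chi_grid, z_of_chi, chi_cmb, dndz_func, dz_dchi_func,
               pk_interp, bq=1.0):
    """Limber approximation: C_l = int dchi W_k W_q P(k=l/chi, z) / chi^2."""
    wk = lensing_kernel(chi_grid, z_of_chi, chi_cmb)
    wq = quasar_kernel(chi_grid, z_of_chi, dndz_func, dz_dchi_func, bq)
    cls = np.empty(len(ells))
    for i, ell in enumerate(ells):
        # pk_interp is assumed to evaluate elementwise over (k, z) pairs.
        pk = pk_interp(ell / chi_grid, z_of_chi(chi_grid))
        cls[i] = simpson(wk * wq * pk / chi_grid ** 2, chi_grid)
    return cls
```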
Measuring CMB lensing auto and cross-correlations
Leveraging the HEALPIX format of the Planck maps, and the speed with which cross-correlations can be performed using spherical harmonic transforms, we employ routines in the HEALPIX package to measure auto and cross-correlations with Planck data. The cross-power $C_l^{\kappa X}$ of a $\kappa$ map ($M_\kappa$) with map $X$ ($M_X$, which could be another $\kappa$ map or a quasar density map as described below) is measured by taking the spherical harmonic transform of each map and multiplying them:

$$C_l^{\kappa X} = \frac{1}{2l+1} \sum_m \tilde{M}_\kappa(l, m)\, \tilde{M}_X^*(l, m),$$

averaged over the multipoles $l$ within each bin. We present all results calculated this way with 5 bins in $l$ per dex, beginning at $l = 10$.
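In Python, healpy's `anafast` implements exactly this spherical-harmonic cross-spectrum; the logarithmic binning helper below is a hypothetical sketch of the bandpower averaging described above:

```python
import numpy as np
import healpy as hp

def binned_cross_power(map_kappa, map_x, lmin=10, bins_per_dex=5, lmax=2000):
    """Cross-power C_l via spherical harmonics, averaged in log-l bins."""
    cl = hp.anafast(map_kappa, map_x, lmax=lmax)   # cross-spectrum, length lmax+1
    ells = np.arange(lmax + 1)
    edges = 10.0 ** np.arange(np.log10(lmin), np.log10(lmax) + 1e-9,
                              1.0 / bins_per_dex)
    centers, binned = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (ells >= lo) & (ells < hi)
        if sel.any():
            centers.append(ells[sel].mean())
            binned.append(cl[sel].mean())
    return np.array(centers), np.array(binned)
```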
To cross-correlate the quasar density with the CMB lensing convergence (C κq l ), we first generate a HEALPIX map of the relative quasar density δ at the same resolution as the Planck lensing convergence map (nside = 2048). This requires an estimate of the used area of each pixel, which we find by populating our footprint with 150 million points (for an average of ∼30 per pixel), applying our mask, and using the ratio of points inside and outside the mask per pixel. This area estimation is of course subject to its own errors, and as discussed in D15 can impact the final measurement in several ways. We therefore discard any pixel that overlaps a mask component, which generally removes less than 100 deg 2 (see Table 1). The relative quasar density is then calculated with respect to the total number of quasars and area in the remaining complete pixels.
Uncertainties in $C_l^{\kappa X}$ are estimated by repeating the measurement with several rotations of the $\kappa$ map. D15 illustrated the consistency of this method with others, such as substituting simulated noise maps. We use 34 rotations: 17 in increments of 20° in Galactic longitude, and another 17 with an additional reflection in latitude about the Galactic equator. We derive covariance matrices ($C(l_i, l_j) = C_{ij}$):

$$C_{ij} = \frac{1}{N} \sum_{k=1}^{N} \left[C^{\kappa X}_{l_i,k} - \overline{C^{\kappa X}_{l_i}}\right]\left[C^{\kappa X}_{l_j,k} - \overline{C^{\kappa X}_{l_j}}\right],$$

where $N$ is the number of rotated cross-correlations, $C^{\kappa X}_{l,k}$ is the cross-correlation from each rotation $k$, and the bar denotes the mean over rotations. We adopt the square root of the diagonal elements of $C_{ij}$ as the 1σ errors on these correlations.
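A schematic version of this rotation/reflection procedure with healpy might look as follows; `bin_func` is assumed to return the binned cross-power vector (e.g., a thin wrapper around the sketch above), and the exact rotation conventions of the analysis are not reproduced:

```python
import numpy as np
import healpy as hp

def reflect_lat(m):
    """Reflect a HEALPix map in latitude about the equator."""
    nside = hp.get_nside(m)
    theta, phi = hp.pix2ang(nside, np.arange(m.size))
    return m[hp.ang2pix(nside, np.pi - theta, phi)]

def rotation_covariance(map_kappa, map_q, bin_func):
    """Cov(C_l) from cross-correlations of map_q with rotated kappa maps:
    17 longitude rotations x 2 parities = 34 null samples."""
    samples = []
    for reflected in (False, True):
        base = reflect_lat(map_kappa) if reflected else map_kappa
        for lon in np.arange(20.0, 360.0, 20.0):   # 17 steps of 20 degrees
            rotated = hp.Rotator(rot=(lon, 0.0)).rotate_map_pixel(base)
            samples.append(bin_func(rotated, map_q))
    samples = np.asarray(samples)                  # (34, nbin)
    resid = samples - samples.mean(axis=0)
    cov = resid.T @ resid / len(samples)
    return cov, np.sqrt(np.diag(cov))              # matrix and 1-sigma errors
```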
Fitting procedures
To measure the quasar bias, we fit the models from Equations 2 and 11 to the measured autocorrelations and CMB lensing cross-correlations. We use the full covariance matrix to scale our given model $f_m(x)$ to the data $f(x)$ with a χ² minimization:

$$\chi^2 = \sum_{i,j} \left[f(x_i) - f_m(x_i)\right] C^{-1}_{ij} \left[f(x_j) - f_m(x_j)\right].$$

Here, $f(x)$ may be $\omega(\theta)$ or $C_l^{\kappa q}$ and the sums are over bins of $\theta$ or $l$. Both models are only a function of one parameter, the bias $b_q$, and so the errors on the fits are determined by the range of bias values for which Δχ² = 1.
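A minimal sketch of this one-parameter fit, with the Δχ² = 1 error estimate, is given below; the data, unit-bias model, and covariance arrays are hypothetical inputs, and the bias scaling ($b^2$ for clustering, $b$ for lensing) is passed in explicitly:

```python
import numpy as np

def fit_bias(data, model_unit_bias, cov, scaling):
    """Fit a single bias by chi^2 minimization with the full covariance.
    scaling(b) returns the multiplier applied to the unit-bias model:
    lambda b: b**2 for the autocorrelation, lambda b: b for lensing."""
    icov = np.linalg.inv(cov)

    def chi2(b):
        r = data - scaling(b) * model_unit_bias
        return r @ icov @ r

    bs = np.linspace(0.1, 5.0, 2000)               # brute-force 1D scan
    chis = np.array([chi2(b) for b in bs])
    i = np.argmin(chis)
    within = bs[chis <= chis[i] + 1.0]             # Delta chi^2 = 1 interval
    return bs[i], 0.5 * (within.max() - within.min())

# e.g. fit_bias(w, w_dm, cov, scaling=lambda b: b**2) for clustering,
#      fit_bias(clkq, clkq_model, cov, scaling=lambda b: b) for lensing.
```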
On scales larger than ∼1 Mpc/h, the clustering of quasars and their parent haloes is sensitive only to the underlying density field and can be fit by the simple models described in the previous sections, while on smaller scales the halo occupation distribution (HOD) and the physics of galaxy formation and evolution become important (e.g. Berlind & Weinberg 2002; Berlind et al. 2003; Richardson et al. 2012; Krumpe, Miyaji & Coil 2013; Eftekharzadeh et al. 2015). At z = 1 (the approximate mean for all of our samples) this linear scale corresponds to ∼0.04°. Therefore, we restrict our fits to the autocorrelation over 0.04° < θ < 0.4°. For the CMB lensing cross-correlations we restrict our fits to 40 < l < 400, which is above the peak of the model cross-correlation and below a possible correlated feature in the Planck DR2 lensing map (which lies in the range 638 < l < 732; Planck Collaboration et al. 2015b). This is also the range with the smallest errors on the cross-correlation (section 4.2).
Dark matter halo masses
All of the code used to convert biases to halo masses is provided at https://github.com/mdipompe/halomasses.
Once the quasar bias is determined, it can be converted into a typical dark matter halo mass ($M_h$) for a given sample. This is done using the model fits to cosmological simulations of Tinker et al. (2010):

$$b(\nu) = 1 - A\,\frac{\nu^a}{\nu^a + \delta_c^a} + B\,\nu^b + C\,\nu^c,$$

where $\nu = \delta_c/\sigma(M)$. The numerator, $\delta_c$, is the critical density for collapse of a dark matter halo, and is defined for a Universe containing matter and a cosmological constant Λ by

$$\delta_c = 0.15\,(12\pi)^{2/3}\,\Omega_{m,z}^{0.0055},$$

where $\Omega_{m,z}$ is the density parameter for matter at the redshift under consideration (Navarro, Frenk & White 1997). The denominator, $\sigma(M)$, is the linear matter variance at the size scale of the halo ($R_h = (3M_h/4\pi\bar{\rho}_m)^{1/3}$, with $\bar{\rho}_m$ the mean density of matter), calculated by

$$\sigma^2(M) = \frac{1}{2\pi^2} \int P(k, z)\, \hat{W}^2(k, R_h)\, k^2\, dk.$$

We again use CAMB to obtain the linear $P(k, z)$, and $\hat{W}(k, R_h)$ is the Fourier transform of a top-hat window function of radius $R_h$:

$$\hat{W}(k, R_h) = \frac{3\left[\sin(kR_h) - kR_h\cos(kR_h)\right]}{(kR_h)^3}.$$

The constants $A$, $a$, $B$, $b$, $C$, and $c$ for Equation 15 are taken from Table 2 of Tinker et al. (2010) and defined for the overdensity parameter Δ = 200 ($y \equiv \log_{10}\Delta$):

$$A = 1.0 + 0.24\,y\,e^{-(4/y)^4}, \quad a = 0.44\,y - 0.88, \quad B = 0.183,$$
$$b = 1.5, \quad C = 0.019 + 0.107\,y + 0.19\,e^{-(4/y)^4}, \quad c = 2.4.$$
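The following Python sketch (not the released halo-mass code linked above) illustrates the bias-to-mass inversion; for simplicity it fixes $\delta_c = 1.686$ rather than using the redshift-dependent expression above, and `pk`, `k`, and `rho_m` (the comoving mean matter density, in the same units as the masses) are assumed precomputed:

```python
import numpy as np
from scipy.integrate import simpson
from scipy.optimize import brentq

def tinker10_bias(nu, Delta=200.0, delta_c=1.686):
    """Large-scale bias fit of Tinker et al. (2010), Table 2 constants."""
    y = np.log10(Delta)
    A = 1.0 + 0.24 * y * np.exp(-(4.0 / y) ** 4)
    a = 0.44 * y - 0.88
    B, b, c = 0.183, 1.5, 2.4
    C = 0.019 + 0.107 * y + 0.19 * np.exp(-(4.0 / y) ** 4)
    return 1.0 - A * nu ** a / (nu ** a + delta_c ** a) + B * nu ** b + C * nu ** c

def sigma_M(M, pk, k, rho_m):
    """Linear variance smoothed with a top-hat of radius R = (3M/4 pi rho_m)^(1/3)."""
    R = (3.0 * M / (4.0 * np.pi * rho_m)) ** (1.0 / 3.0)
    x = k * R
    W = 3.0 * (np.sin(x) - x * np.cos(x)) / x ** 3
    return np.sqrt(simpson(pk * W ** 2 * k ** 2, k) / (2.0 * np.pi ** 2))

def halo_mass_from_bias(b_target, pk, k, rho_m, delta_c=1.686):
    """Invert b(nu(M)) = b_target for M; the bracket must contain a sign change."""
    f = lambda logM: tinker10_bias(delta_c / sigma_M(10.0 ** logM, pk, k, rho_m),
                                   delta_c=delta_c) - b_target
    return 10.0 ** brentq(f, 10.0, 15.0)   # search 1e10 - 1e15 in mass units
```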
The quasar autocorrelation function
In Figure 4 we show the quasar angular autocorrelation ω(θ) for various combinations of samples and masks. As a reminder, samples marked with an asterisk have both the AW and AS generated masks applied. The final panel of this plot shows our fiducial model, as generated with Equation 2 for each of the sample redshift distributions (note that the subtle differences in dN/dz do not strongly affect the models). The final panel also shows the fitting ranges for the bias measurement.
In each panel, the models rescaled by the measured bias are shown as dashed lines. The panels below each plot show the autocorrelations divided by the models, to highlight any scale dependencies of the bias. Bias measurements for these samples are given in the top half of Table 2, and shown in the right side of Figure 6.
In all cases, over the chosen fitting range the bias is independent of scale. Below ∼0.04°, for the obscured sample particularly, clustering amplitudes anti-correlate with angular scale. This is true for the unobscured sample as well, but generally only below ∼0.01°. The bias of the unobscured sample is very consistent across all samples and mask combinations (Figure 6). On the other hand, there is a marked decrease in the obscured bias when switching to AW-selected sources, even when the AS mask is applied. (Note that here we use the more recent Tinker et al. (2010) model, whereas in D14/D15 we used the Sheth, Mo & Tormen (2001) model for more direct comparison with previous results. In the context of searching for differences in obscured and unobscured halo masses (the main goal of D14/D15), the choice of model is less critical; however, moving forward we prefer the most precise halo masses for future modeling purposes.)
The difference in bias between obscured and unobscured quasars identified by D14 remains present when using the AS data with the additional AW mask components (note that there is a decrease in the measurements presented here compared to D14/D15, due both to our updated model procedure and to the more restricted fitting range). It is also still marginally present in the AW-selected samples, as well as in the sample selected from both catalogues; however, in these cases the error bars overlap significantly, reducing the difference to at best 1σ.

Quasar CMB lensing cross-correlation

Figure 5 shows the quasar-CMB lensing cross-correlations for various combinations of the data, including the AW and AS samples as well as the Planck DR1 and DR2 lensing maps. The bottom-center panel shows the model generated from Equation 11 for each dN/dz, and highlights the bias fitting region. Again, the model does not depend strongly on the subtle differences in redshift distribution for the three samples. Dotted lines in each panel show the models rescaled by the bias values, and the small panels below each show the measurement divided by the model to highlight scale dependencies. Bias measurements are summarized in the lower half of Table 2, and shown in the left panel of Figure 6.
As seen in the autocorrelation measurements, the lensing cross-correlations are generally scale-independent over the chosen fitting range. However, beyond this range toward smaller scales (larger l), there is a clear increase in cross-correlation power that is particularly strong for the obscured sample, though more prevalent for both samples when Planck DR1 is used. While the measurement is noisy at the largest l, with the Planck DR2 lensing map the unobscured cross-correlation stays consistent with flat out to the smallest scales (∼0.1 • , consistent with the clustering measurement). The scale-dependence for the obscured sample is reduced somewhat with DR2, but is still present. The obscured sample cross-correlation with DR2 also shows a prominent bump around l ∼ 600, about the scales for which there may be an unknown correlated feature in the DR2 lensing map (Planck Collaboration et al. 2015b).
The bias values for the unobscured sample are quite consistent across all sample and mask combinations, and with the autocorrelation measurements (Figure 6). Overall, the use of Planck DR2 reduces the measured bias in both samples (compare, for example, the measurements for DR1+AS and DR2+AS). For the obscured sample, the use of Planck DR2 and the AW-selected sample reduces the bias somewhat, until the AS mask is also applied, at which point the bias is raised again. Overall, the obscured sample tends to have a higher bias, though at a lower significance compared to D15 (at most ∼2σ).
Planck DR1 vs. DR2 lensing convergence
Now that we have seen that the updated Planck CMB lensing map tends to reduce the measured bias, particularly for the obscured sample, we briefly compare the two maps directly to investigate quantitatively whether one map is superior to the other. Of course, the fact that the signature of lensing, at least as averaged over the whole sky, is so much stronger in DR2 is already indicative of the superiority of the newest data. Figure 3 illustrates the improved behavior of the DR2 data as well. Finally, DR2 has a lensing potential power spectrum in better agreement with theoretical predictions (see e.g. Figure 6 of Planck Collaboration et al. 2015b).
We also correlate the DR1 and DR2 κ maps with themselves and each other (using the method described in section 3.3.2), to highlight potential differences. If the maps have similar properties, these auto and cross-correlations should all be quite similar. The results are shown in Figure 7. While the DR2-DR2 auto-correlation and DR2-DR1 cross-correlation are nearly identical, the DR1-DR1 autocorrelation has significantly more power (by ∼45%), while having the same shape at all scales. This suggests that the DR2 map contains significantly less correlated noise, while still preserving real features in the data. However, it is not clear how this might affect the obscured sample cross-correlation more than that of the unobscured sample.
WISE Allsky vs. Allwise-selected quasars
In section 2, many similarities between the AS and AW-selected quasar samples were discussed. The number densities are similar (three fewer objects per square degree in the AW-selected sample), the obscured and unobscured fractions are indistinguishable (Table 1), the redshift distributions are consistent, and the optical morphological properties are very similar (with a slight increase in unresolved sources in AW). Despite these similarities, the difference in bias measurements when changing from AS to AW-selected samples (or changing the masks) suggests that there may be some fundamental difference in samples selected from the two catalogues. To help inform the discussion of the bias measurements, we explore the properties of these samples in more detail here.

Table 2. Bias measurements for the various sample and mask combinations, using the quasar autocorrelation (top half) and CMB lensing cross-correlations (bottom half). Measurements are made by fitting the model to the data over the range 40 < l < 400 or 0.04° < θ < 0.4°. Samples with an asterisk have had the masks developed with both Allsky and Allwise samples applied. Values in parentheses in the bottom half are from measurements with the Planck DR1 data; all others use the DR2 data. These results are shown in Figure 6.
Photometric properties of Allsky and Allwise selected quasars
The top row of Figure 8 shows distributions of W1, W2, and r for the AW and AS selected quasars (the full IR-selected samples; these comparisons are very similar for the obscured and unobscured subsamples), as well as the difference between the AW and AS W1 and W2 magnitudes for objects that are selected from both catalogues. These comparisons are made after applying both masks to the samples (i.e. Allwise* and Allsky*), so any differences are not due to masking (see below). Of the common objects in AS and AW, the vast majority (>99.9%) are matched to the same optical counterpart, so differences between r magnitudes for the common sample are not shown, as they are generally null. The AW sample shows a subtle shift toward brighter W1 and W2 fluxes, and as noted the r distributions are nearly identical. The reason for this is clear in the top right panel showing the differences for the common sample. The W1 difference distribution is strongly asymmetric, with an updated AW flux more likely to be brighter, while the W2 difference distribution is much more symmetric about zero and smaller in magnitude. These effects are noted in the AW explanatory supplement, which states that AW photometry is known to be increasingly brighter at magnitudes fainter than W1 ∼ 14 and W2 ∼ 13 (the majority of our sources) due to correction of a faint source underestimation bias in AS. These effects are particularly important for color-selected samples such as ours.
The second row of Figure 8 illustrates how these changes propagate through to our color selection. We see that the larger brightening in W1 compared to W2 leads to an IR-redder sample in AS than in AW. The similarity in the distributions of r − W2 illustrates that most of the difference in W1 − W2 colors is driven by the shift in W1.
We also compared the photometric properties of AW-selected quasars that fall within the AS mask (see the middle panel of Figure 1) to the sources outside of the mask. There are no clear differences that indicate serious problems with the AW objects that fall within the AS mask, and so we do not show them here. Additionally, the obscured and unobscured fractions of these masked sources are consistent with the overall fractions. The same is true for AS-selected quasars that are within the AW mask. These similarities in photometric properties suggest that the fact that the Boötes field is not strongly affected by differences in masking does not have an important impact on our estimates of the redshift distribution.

Figure 6. A comparison of the measured bias via CMB lensing cross-correlations (left, fit over 40 < l < 400) and the quasar autocorrelation (right, fit over 0.04° < θ < 0.4°), using various data sets. In the CMB lensing panel, "DR1" and "DR2" refer to the Planck data release. An asterisk on the WISE catalogue indicates that the masks derived from both catalogues have been applied, and "Both" is the measurement for objects satisfying our criteria in both catalogues. The numbers under each measurement indicate the total area used, in deg².

Figure 8. Top row: comparison of W1, W2, and r distributions for the Allwise (solid) and Allsky (dashed) selected samples, after applying both masks to both samples (so differences are not due to masking). The top-right panel shows the difference between the W1 (solid) and W2 (dashed) magnitudes for those objects that are common between the samples (r-band magnitudes are excluded since the majority of Allwise and Allsky sources are matched to the same optical counterpart, and these differences are null). Bottom row: the same, but for colors.
Allsky and Allwise-only sources
There are 12,544/5,776/6,632 (total/obscured/unobscured) objects that are only selected by AW and 19,777/9,019/10,523 (total/obscured/unobscured) objects that are only selected in AS. These sources are not generally missing from the opposite catalogue; rather, their photometry has been updated such that they no longer meet our selection criteria. Figure 9 compares the AW and AS photometry and colors for sources that are selected as quasars in only one catalogue (again, only the full samples are shown, as the obscured and unobscured trends are similar). In general, objects that do not make the cut in one or the other catalogue are borderline objects, near W2 ∼ 15 or W1 − W2 ∼ 0.8. Most of the difference in the two samples is thus due to the small adjustments to W1 and W2 in AW. The W1 distributions of these objects tend to have a spike around W1 ∼ 15.8, reflective of the sharp W2 cut. On average, however, the AW-only sources are somewhat fainter in W1 than the AS-only sources (compare the peaks of either the dashed or solid lines in the top left and top right panels). This is also true for W2, as the tail to brighter magnitudes is smaller in the AW-only sources.

Figure 9. A comparison of WISE W1 (top), W2 (middle), and W1 − W2 (bottom) from the Allwise and Allsky catalogues for objects that satisfy our selection criteria in only one catalogue (right: Allwise only, left: Allsky only). In the W2 and color panels the distributions of objects that would not satisfy the other cut (W1 − W2 < 0.8 or W2 > 15.05) are shown in magenta. Clearly, most objects satisfying a cut in one catalogue but not the other are primarily borderline objects in terms of W2, which causes them to also appear generally fainter in W1.
Considering that the sources that only meet our selection in AW or AS tend to have borderline photometric properties, can we argue that one catalogue is truly eliminating more contamination, or is this simply noise near the cuts? In Figure 10, we plot the relative densities (δ, see section 3.2) of AS and AW-only sources as a function of position on the sky, using nside = 2048 HEALPIX pixels smoothed with a 1° Gaussian. We of course expect that, intrinsically, quasars are uniformly distributed across the field. However, the AS-only selected sources are heavily biased in position, with their density generally increasing toward the Ecliptic plane (the strips masked for Moon contamination are perpendicular to the Ecliptic). AW-only sources are more evenly distributed, though the distribution is still not completely uniform. This suggests that objects selected by AS only may indeed be artifacts, or that their selection is biased by a position-dependent factor more so than in AW (see section 4.4.3).
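For visualization purposes, a relative-density map of this kind can be sketched with healpy as below; the source pixel indices `ipix_sources` are a hypothetical input, and this simplified version ignores the footprint mask and per-pixel available area discussed earlier:

```python
import numpy as np
import healpy as hp

def smoothed_density_map(ipix_sources, nside=2048, fwhm_deg=1.0):
    """Relative-density HEALPix map from source pixel indices, smoothed
    with a Gaussian (hp.smoothing) for display."""
    counts = np.bincount(ipix_sources,
                         minlength=hp.nside2npix(nside)).astype(float)
    delta = counts / counts.mean() - 1.0
    return hp.smoothing(delta, fwhm=np.radians(fwhm_deg))
```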
The clustering and lensing cross-correlation properties of these AS and AW-only samples may shed some light on their nature, but their position-dependent density complicates this because of the need for a random sample that mimics their distribution. Instead, we measure cross-correlations with the full samples from each catalogue, which are more uniformly distributed and can be normalized with our uniform random sample. This measurement is done via (e.g. Croft, Dalton & Efstathiou 1999)

$$\omega_{FO}(\theta) = \frac{D_F D_O}{D_O R} - 1,$$

where the 'F' and 'O' subscripts indicate the full and only-in-one-catalogue samples, respectively, $D_F D_O$ is the normalized count of full-only pairs, and $D_O R$ is the normalized count of only-random pairs. The results are shown in Figure 11. For comparison, the autocorrelation measurements for the full AS and AW samples are shown in green in each panel. The AS-only sources clearly show a stronger clustering signal relative to the full sample, increasingly so going from unobscured to obscured objects. There is some indication that the AW-only obscured sources cluster more strongly than the full AW sample, but at much lower significance.
To quantify this, we fit a simple power law to the data of the form $\omega_{qq}(\theta) = A\theta^{-1}$ (note that we do not fit our model DM autocorrelation to these measurements, as the redshift distributions for the AS and AW-only samples are not well constrained). The power-law slope of −1 is a typical value for quasar autocorrelations in angular projection (Myers et al. 2006; Shen et al. 2007, 2009; Ross et al. 2009; White et al. 2012; D14), and fits the full sample results here well. The amplitudes of these fits for each sample are given in the figure legend, and highlight the qualitative impression discussed above.
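A fixed-slope amplitude fit of this kind reduces to a one-parameter weighted least squares, sketched below with illustrative names:

```python
import numpy as np

def fit_amplitude(theta, w, sigma, slope=-1.0):
    """Weighted least-squares amplitude A for w(theta) = A * theta**slope."""
    basis = theta ** slope
    wts = 1.0 / sigma ** 2
    return np.sum(wts * basis * w) / np.sum(wts * basis ** 2)
```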
Since objects only selected by one catalogue tend to be faint (see section 4.4.1) it is likely that at least some of these sources represent the higher redshift end of the distribution, on average, partially explaining their larger clustering signal. However, it isn't clear why this would affect one sample more than the other, which suggests that there is some additional signal from contamination present in the AS-only sources that isn't present in the AW-only objects. The fact that this is stronger in the obscured subsample is also reflective of why the bias of the obscured sample is affected more significantly by changing samples and masks.
We also cross-correlate the AS and AW-only samples with the CMB lensing maps, and the results are shown in Figure 12. The green points in each panel show the result from the full AS and AW samples. To zeroth order, the fact that there is any cross-power here confirms what we found above: many of these objects are indeed extragalactic, probably at the high-redshift end of our sample. We again fit fixed-slope power laws to these results for more direct comparisons, and the amplitudes are given in the legend. Given the amount of noise in these measurements, it is difficult to draw conclusions stronger than that the AS and AW-only samples have a similar cross-power to the full samples.
Cross-correlations with systematics
To further compare the samples selected from AS and AW, we cross-correlate the quasar density with several systematics, using the method described in section 3.2. We do this first for the full AW and AS samples, with both masks applied, cross-correlating with W 1 and W 2 magnitude, W 1 − W 2, Galactic reddening Ag, and Moon level. The results are shown in Figure 13.
The first panel shows the quasar autocorrelation using the pixelization method (Equation 6) as compared to the estimator in Equation 3. They agree quite well on scales where they overlap, and the pixelization method allows us to probe larger scales (above ∼2°). On these scales (>50 Mpc/h at z = 1), the quasar autocorrelation should approach zero (e.g. Myers et al. 2006; Krumpe, Miyaji & Coil 2013). This is true of the AW-selected sample, but the AS sample retains a significant signal even out to ∼7°.
In the absence of systematic observational effects, W1 and W2 should be uncorrelated with the quasars as a function of scale. The next two panels of Figure 13 show that this is generally true of AW, within errors, but not for AS. This is also true, though to a lesser degree, for W1 − W2.

Figure 10. The relative densities (δ; see section 3.2) of Allsky-only (right) and Allwise-only (left) selected quasars as a function of position. Blue indicates an under-density relative to the mean; red is over-dense. It is clear that the objects selected only by Allsky are less evenly distributed, suggesting that many are in fact artifacts or that there is a position-dependent bias affecting their selection. This is greatly reduced, but still present, in Allwise.
The next panel shows the cross-correlation of the quasar density with Galactic reddening in the g-band, Ag. Again, this is consistent with zero for AW (though there is a slight systematic shift from zero), but not AS. Note that the reddening component of the mask did not change from AS to AW, and so this reflects a change in the data itself. It is also interesting to note the flatness of these cross-correlations, which is most likely a consequence of the fact that the Galactic dust density varies slowly with scale, on average.
Finally, the last panel shows the cross-correlation with the Moon level. Recall that regions with moon_lev > 1 in W 4 were masked, which does leave some minor Moon contamination possible. In this case, the results are reversed -the AS data do not correlate with the Moon level at all, but AW is anti-correlated. This seems to imply that there may be problems with WISE data calibration due to the Moon present in AW that were not present in AS.
Because the change from AS to AW seems to affect the obscured sample more than the unobscured, we focus in Figure 14 on cross-correlations of these subsamples of the AW-selected quasars with systematics. The first panel shows agreement with the pair-counting method for calculating the autocorrelation, as well as the fact that both autocorrelations approach zero on larger scales. We omit the cross-correlations with W1, W2, and W1 − W2, as these are null for both subsets of data. However, as seen in the center panel, obscured sources do correlate with reddening in this sample, while unobscured sources do not. In the final panel, we see that both obscured and unobscured sources correlate in a similar way with the Moon level.
Which measurement is the most reliable?
Based on the analysis of Planck DR1/DR2 and the quasars selected from the WISE Allsky and Allwise catalogues, it is clear that the latter in both cases are the superior and more reliable data sets. The apparent reduction in correlated noise in Planck DR2, along with the increased lensing signal, are obvious improvements. For the WISE-selected quasars, the lack of correlations with W1 and W2 magnitudes and Ag, as well as the more uniform distribution of AW-only sources, indicates that this catalogue has certainly reduced the systematic effects that impact selection and contamination by artifacts. However, Figure 10 illustrates that this is not completely eliminated with AW, and there is still a slight correlation of obscured quasars with Galactic reddening in AW.
Despite these overall improvements, the correlation of AW quasars with the Moon level is concerning. In order to address this, we perform one final check by repeating our clustering and CMB lensing cross-correlation analysis with any region with moon_lev > 0, in any WISE band, masked out. This removes an additional ∼1000 deg 2 of area, and the reduction in sample sizes inflates the uncertainties accordingly. However, we find that the results do not change significantly, though the bias is slightly reduced for both samples (by a few per cent). While it appears that the Moon may have an adverse effect on our selection, we are unable to remove it from our measurement with the current information from WISE.
Given the lack of understanding on how the Moon level is affecting the AW-selected quasars, and the fact that the objects selected only in AW still show a position-dependent relative density, our most conservative approach currently is to use objects selected as quasars by both AW and AS. This naturally includes the mask components of both data sets as well. We adopt the bias and halo masses from this sample as our best current constraints, and these values are summarized in Table 3 for convenience.
The redshift distribution and evolution of the bias
There are two approaches we have adopted that can impact the models from Equations 2 and 11, and thus the final inferred bias values. The first is in the way we handle the empirical redshift distributions. While the spline interpolation of dN/dz preserves the features seen in Figure 2, it is possible that some of these are artificial and do not fully reflect the true intrinsic distribution (though the application of our selection criteria to the Boötes field should mitigate some of these issues). Therefore, we also test fitting a smoother function to these distributions (a Gaussian with an exponential tail), which results in a smoother overall dN/dz. Propagating these fits through to the models, we find that they shift by at most a few percent over the scales of interest. The effect is most dramatic for the unobscured sample, due to the spike at low z, but still only shifts the model by ∼4%, and only at larger scales. Re-fitting the effective bias, we find that the results shift by ∼1%. Our choice of model for dN/dz therefore does not impact our results significantly. We have also adopted a simple bias model that does not account for evolution with redshift, so we are really measuring the effective bias, or the bias averaged over our redshift range. Since we lack individual source redshifts, and our error bars are still sufficiently large, we are unable to accurately generate our own empirical b(z) model directly from our data. However, we explore the role of such evolution in our fits by including in our model calculations the b(z) of Croom et al. (2005): $b(z) = 0.53 + 0.289(1+z)^2$. We then fit these new models, with an additional scale factor $b_0$, to our data.
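For concreteness, the evolving-bias rescaling used here amounts to the following trivial sketch (the numerical values are those quoted in the surrounding text):

```python
def croom05_bias(z):
    """Empirical quasar bias evolution of Croom et al. (2005)."""
    return 0.53 + 0.289 * (1.0 + z) ** 2

def evolving_bias_at_mean_z(b0, z_mean):
    """Rescaled evolving-bias value quoted at the sample's mean redshift,
    b = b0 * [0.53 + 0.289 (1 + <z>)^2]."""
    return b0 * croom05_bias(z_mean)

# e.g. b0 = 1.34 with <z> just below 1 reproduces b_obsc ~ 2.23 quoted below.
```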
In general, a model of the bias including redshift evolution does not significantly change the quality of our fits (based on the values of χ²). In the case of the angular autocorrelation, both obscured and unobscured populations require an additional rescaling, with $b_{0,\rm obsc} = 1.34$ and $b_{0,\rm unob} = 1.26$. Including these factors and inserting the mean redshifts of each sample into the model (i.e. $b = b_0[0.53 + 0.289(1 + \langle z\rangle)^2]$), we derive bias values of $b_{\rm obsc} = 2.23 \pm 0.18$ and $b_{\rm unob} = 2.21 \pm 0.13$. These are both increased relative to, but roughly consistent with, our simpler effective bias measurement, and the values for the two populations are consistent with each other. For the CMB lensing cross-correlations, we measure $b_{0,\rm obsc} = 1.28$ and $b_{0,\rm unob} = 1.04$. In this case, the unobscured sample is consistent with the fiducial Croom et al. (2005) model, while the obscured sample requires additional rescaling. With this model we find $b_{\rm obsc} = 2.13 \pm 0.14$ and $b_{\rm unob} = 1.82 \pm 0.11$, fully consistent with our values assuming a constant bias.

Figure 11. The cross-correlation of objects selected by Allsky only (top) and Allwise only (bottom) with their respective full samples, after applying both masks to each sample (so differences are not due to masking). In each panel, the green points show the autocorrelation for the complete sample selected from each catalogue, and the amplitudes of power-law fits to the data are listed in the legend (using a fixed slope of −1). The Allsky-only sources clearly cluster more strongly than the full AS sample, and more so for the obscured sample. This is only weakly seen in the AW-only obscured sample. Some of this effect might be explained by the faint objects only selected by one catalogue lying at the higher redshift end of the full z distribution, but the fact that the behavior is not the same in both cases suggests additional contamination in the AS sample.
Given that we are unable to currently determine b(z) for these samples directly, the fact that such a model does not improve our fits, and that overall the results are consistent between a constant and an evolving bias model, we prefer to adopt the constant bias measurements here, as they involve fewer assumptions. Future work will revisit this issue in more detail.

Figure 12. A comparison of the CMB lensing cross-correlation for objects selected by Allsky only (top) and Allwise only (bottom), after applying both masks to each sample (so differences are not due to masking). In each panel, the green points show the results for the complete sample selected from each catalogue (legend amplitudes from one panel: Total A = 38.1, Obsc A = 49.8, Unob A = 32.7). There is significant noise in all cases due to the small sample sizes, but there is a signal present, indicating that these objects are not just artifacts and are indeed extragalactic. However, there is no evidence that the cross-correlation signal is different from that of the full samples. Without understanding the dN/dz of these samples, these results are difficult to interpret.

Table 3. The adopted bias and halo masses from the angular autocorrelation (top) and CMB lensing cross-correlations (bottom), using the most conservative sample: those objects satisfying the selection criteria in both the Allsky and Allwise catalogues.
Comparison with previous results
There have been several measurements of the obscured and unobscured quasar bias and halo masses in the last several years, as it is only recently that samples have grown large enough to do so precisely.
There is yet to be convergence on a result. Our values here are lower than those of D14 and D15, due both to the subtle differences in the selected samples discussed above and to some updates in our procedures. First, the use of CAMB with its most recent default parameters to produce the model power spectra in a consistent fashion for both the clustering and lensing cross-correlation measurements results in models with slightly larger amplitudes, which naturally decreases the inferred bias. This is seen in the "DR1/Allsky" points of Figure 6, as compared to e.g. Figure 7 of D15. This is a systematic effect that impacts both the obscured and unobscured samples, but not their relative values. However, in Figure 6 we also illustrate that using the most recent data results in a decrease of the bias as well, for both unobscured and obscured quasars, and more significantly for the clustering measurement. As we argued in the previous section, the new data are quantifiably superior, and these new values should be considered more reliable.
It is difficult, however, to pinpoint the exact reason for this reduction in the bias. The fact that it is present in both the clustering and CMB lensing cross-correlations, which depend on different systematics, indicates that it is a real effect: AW is picking out objects with lower clustering and lensing cross-correlation amplitudes, especially for the obscured sample. Since the AW W1 − W2 color distribution is slightly bluer, there may be a relationship between IR color and bias (as opposed to just optical-IR color, or obscured versus unobscured, and bias). However, exploring such a trend will require even larger samples, over extended areas, in order to have sub-samples large enough to explore with sufficiently small statistical errors. We will revisit this in future work. The systematic shift in color in AW may also point to the need for updates to the various WISE color selection techniques for quasars.
While D14 showed that the factor of 10 larger halo masses for obscured quasars of Donoso et al. (2014) was due to insufficient masking of the WISE data, and found instead a factor of ∼3 difference, that is further reduced here to roughly a factor of two. However, the significance of this difference is now only ∼2σ.
In Figure 15 we compare our updated measurements with several from the recent literature. Note that all halo masses in the right panel are calculated using our prescription based on the bias values taken from the respective studies shown on the left (i.e. the halo masses shown may not be the same as those calculated in the original reference). The grey band gives the approximate range of values for optically-selected unobscured quasars, primarily from the works referenced in section 1. We first note that both obscured and unobscured quasars are now roughly consistent with this range, despite the difference in the samples here. While the measurements of Hickox et al. (2011) are at slightly higher redshift, our results remain consistent with theirs. Though our error bars are significantly smaller due to the dramatic increase in area and sample size, the significance of the difference in bias/halo masses is roughly the same once the reduction in the magnitude of the difference is considered.
We also show the recent results of Mendez et al. (2015), at slightly lower redshift. Their sample used the IR selection technique of Assef et al. (2013), which extends deeper to W2 < 17.11 with a magnitude-dependent color criterion. They also used samples with individual redshifts from several fields, and we show their results with the COSMOS field excluded, due to this region containing structures that are particularly overdense compared to the cosmic average (e.g. Lilly et al. 2007). The deeper W2 limit likely results in a slightly lower average luminosity, though the requirement of spectroscopic redshifts somewhat offsets this effect. Note that the Mendez et al. (2015) halo masses are quite low (the lowest of all the samples they considered) compared to ours and those of other groups: more than two orders of magnitude lower in the obscured quasar case. Aside from the small halo masses in general, the sense of the difference between obscured and unobscured samples is the opposite of what we find here and what was found in Hickox et al. (2011).
The individual redshifts of Mendez et al. (2015) should reduce systematic errors due to differences in the redshift distributions that could be present in our measurements, despite the improved statistical errors we achieve with a larger sample. However, our bias values for the obscured and unobscured samples differ by ∼10%. As shown in Figure 8 of D15, our effective redshift estimates would need to be systematically offset by ∼0.25 to fully account for this difference. It is unlikely that the Boötes field is this poorly representative of the full population. On the other hand, considering that the bias is not weighted equally at each redshift, the presence of a high-z tail in the obscured sample could skew the bias significantly. Such a tail would most likely be present in the sources lacking SDSS counterparts. As a check, we repeat our clustering and CMB lensing cross-correlation measurements for only the obscured sources with r-band detections. These objects do not have a significantly different redshift distribution in Boötes, and as in D14/D15, the measurement is completely consistent with that of the entire obscured sample. While incorporating full redshift information into our measurements would certainly be an improvement, it is difficult to see how redshift distribution errors would dramatically impact our findings.
Interpretation and future work
While the magnitude and significance of the difference in bias and halo mass between obscured and unobscured quasars is reduced in this work, it is still present. The simplest interpretation of this is that obscured quasars reside in higher mass haloes, and are not simple analogues of unobscured quasars seen through a dusty torus. They could instead represent distinct phases within the quasar stage of black hole growth.
It is also possible that our obscured sample is a mix of genuinely obscured objects and lower luminosity unobscured AGN. This "contamination" could imply either that lower luminosity AGN are more biased (which is unexpected, and previous results regarding trends of bias with luminosity have been weak at best; Shen et al. 2009, 2013; Krolewski & Eisenstein 2015; Eftekharzadeh et al. 2015), or that the signal from the true obscured population is being diluted. In addition, it is likely that some fraction of quasars are in fact obscured only due to our particular line of sight, and are intrinsically the same as the unobscured population if seen from a different angle. This would also serve to dilute the bias measurement of sources obscured by other means. Both of these factors imply that the difference we are finding between obscured and unobscured quasars may be a lower limit.
We again use abundance matching techniques (e.g. Colín et al. 1999; Vale & Ostriker 2004; Shankar et al. 2006; Guo et al. 2010) to estimate the implied lifetimes of obscured and unobscured phases, assuming there is an evolutionary trend. The median $L_{\rm bol} \sim 10^{46}$ erg/s (Hickox et al. ...

Figure 15. The adopted bias (left) and halo mass (right) measurements from Table 3 compared to other recent results (comparison with the D14/D15 results can be seen in Figure 6). Points have been shifted slightly in redshift where necessary for clarity. The grey band represents the range of results typical for optically-selected unobscured quasars, largely from the SDSS. The Mendez et al. (2015) results utilize individual redshifts for a projected clustering measurement; note that their halo mass measurements fall outside of the plot range (though the obscured halo mass error bar can be seen), so the actual values are listed. While the Hickox et al. (2011) results agree well with what we find here, the Mendez et al. (2015) estimates are significantly lower. Obscured quasars have halo masses ∼2 times larger than unobscured quasars, with a significance of ∼2σ.
Given the reduced significance of the difference in halo masses found here, it is possible that the large-scale bias of obscured and unobscured quasars is consistent with a pure orientation model. However, there are other indications that even if these sources occupy similar mass haloes overall, other differences remain. For example, the smaller scale signal (below 1 Mpc/h, or 0.04° at our redshifts) appears to differ between the samples in both the clustering and CMB lensing cross-correlations. This could imply a difference in the HODs, and in particular the satellite fractions, of each population (Zheng et al. 2005; Zheng & Weinberg 2007; White et al. 2012; Chatterjee et al. 2013). However, studying the HOD of obscured quasars in detail is difficult without full redshift information; such additional information could provide higher-order correlation functions or a direct measurement of the mean occupation function of obscured quasars (Chatterjee et al. 2013).
Our next steps are to leverage the all-sky nature of WISE along with the full footprint of SDSS and other optical surveys (e.g. the Dark Energy Survey) to build the largest IR-selected obscured and unobscured quasar samples possible. This will provide dramatically improved statistical error bars on these bias measurements, to further constrain the potential difference in halo masses. We will also use photometric redshift estimations from multiwavelength photometry and SED fitting (e.g. Hainline et al. 2014; DiPompeo et al. 2015a; Carroll & Hickox 2015) to explore the redshift evolution of obscured quasars.
SUMMARY
Using the most recent Allwise source catalogue from WISE, along with new products from Planck, we present updated measurements of the obscured and unobscured quasar bias via angular autocorrelations and cross-correlations with CMB lensing. We find, as in our previous work, that obscured quasars have a larger bias, and therefore a larger halo mass and longer lifetime. However, this difference is reduced with respect to our results using previous data products from WISE and Planck. The inferred typical halo mass of obscured quasars is roughly a factor of two larger than that of unobscured quasars, at a significance of ∼1-2σ.
In order to explain this change, we have carefully compared the properties of quasars selected from both WISE catalogues using standard color cuts. The general properties of these sources (morphology, redshift distribution, unobscured/obscured fractions) are indistinguishable. However, the photometric properties do differ slightly. In particular, the more recent WISE catalogue tends to select brighter and bluer quasars, which could contribute to the change in our results.
We have also carefully explored systematic effects in WISE-selected quasars from both catalogues, and find that the updated version is clearly superior. In particular, the quasar selection in the Allwise catalogue appears less biased with respect to flux measurements in W1 and W2, as well as with Galactic extinction. However, there may be some complications with respect to Moon contamination in Allwise, which leads us to rely on the sample that is selected from both catalogues for our current measurement.
Finally, we make available with this paper several sets of codes for making these measurements and generating consistent models. | 2015-11-13T21:51:24.000Z | 2015-11-13T00:00:00.000 | {
"year": 2016,
"sha1": "7d1f36835081515c40aa11fe72262e16c82195ab",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1511.04469",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7d1f36835081515c40aa11fe72262e16c82195ab",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
247353177 | pes2o/s2orc | v3-fos-license | Quantitative proteomic dataset of mouse caput epididymal epithelial cells exposed to acrylamide in vivo
This article reports the proteomic legacy of in vivo exposure to the xenobiotic, acrylamide, on the epithelial cell population of the proximal segments of the mouse epididymis. Specifically, adult male mice were administered acrylamide (25 mg/kg bw/day) or vehicle control for five consecutive days before dissection of the epididymis. Epididymal epithelial cells were isolated from the proximal (caput) epididymal segment and subjected to quantitative proteomic analysis using multiplexed tandem mass tag (TMT) labeling coupled to mass spectrometry. Here, we report the data generated by this strategy, including the identification of 4405 caput epididymal epithelial cell proteins, approximately 6.8% of which displayed altered expression in response to acrylamide challenge. Our interpretation and discussion of these data features in the article “Acrylamide modulates the mouse epididymal proteome to drive alterations in the sperm small non-coding RNA profile and dysregulate embryo development”
Value of the Data
• These data provide valuable information on the complexity of the mouse proximal epididymal epithelial cell proteome and the adaptive proteomic response these cells mount following in vivo acrylamide exposure.
• These data will benefit researchers investigating the physiological impacts of acrylamide and other compounds (e.g., ethanol or glycidamide) on the male reproductive tract.
• Equally, these data represent a comprehensive catalogue of mouse epididymal epithelial cell proteins, thus forming a highly valuable molecular resource for researchers examining epididymal function, molecular pathways involved in promoting sperm maturation, and/or the identification of protein biomarkers associated with stressors of the male reproductive tract.
• These data will also be of benefit in the identification of acrylamide-responsive proteins in tissues beyond those of the male reproductive tract.
Data Description
The files in this article comprise raw and processed data from a comparative proteomics experiment of caput epididymal epithelial cells isolated from male mice exposed to either acrylamide (25 mg/kg bw/day) or vehicle control. The workflow applied for cell isolation, sample preparation, and data acquisition and analysis is depicted in Fig. 1 and outlined in the experimental design, materials and methods section below. This proteomic analysis led to the identification of 4405 proteins (Supplementary Table S1), including proteins known to be enriched in the proximal epididymis (DEFB41, SPAG11A/B and LCN8) [1]. Of the total identified epithelial cell proteins, an average of 12.1 peptides (10.8 unique peptides) were identified per protein, with an average peptide coverage of 26.9% per protein. Using a fold-change threshold of ±1.5 and p-value ≤ 0.05 revealed that 6.8% of proteins (302 proteins) displayed altered expression in epididymal epithelial cells from mice exposed to acrylamide. Specifically, the majority of differentially expressed proteins (240 proteins) were increased in abundance, equating to 5.4% of all identified proteins, while 1.4% of proteins (62 proteins) were decreased in expression following acrylamide exposure (Supplementary Table S1). Among the most significantly altered proteins were SERP1, KIF3A and NR3C1, which displayed increased expression following acrylamide exposure, and GPX6, RER1 and TSPAN31, which were downregulated in acrylamide-exposed epithelial cells. Processed data listing the 4405 proteins identified in caput epididymal epithelial cells and the differential protein expression associated with in vivo acrylamide exposure (abundance ratio) are included in this article. Supplementary Table S1 also contains the number of peptides (overall and unique), the percentage of peptide coverage, any detected peptide modifications, and the abundance of each identified protein across three biological replicates. These data have been used to explore the proteomic impact of acrylamide exposure on the caput epididymis and interpret the mechanism by which acrylamide exposure leads to an altered microRNA profile of epididymal spermatozoa [2].

Fig. 1. Epididymal epithelial cells were isolated from the proximal epididymis of mice administered either vehicle control or acrylamide (25 mg/kg bw/day). Cells were lysed before proteins were extracted, denatured, reduced, alkylated and digested. Peptides were labeled using isobaric TMT labels and samples were mixed 1:1. Peptides were fractionated into 11 fractions and analyzed by nano-LC MS/MS. Data were processed in Proteome Discoverer 2.4 to identify and quantify proteins. Subsequently, differential expression analysis was performed on the refined protein list.
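For orientation, the following hypothetical pandas snippet reproduces the kind of filtering described above on a table shaped like Supplementary Table S1; the file name and column names are assumptions, not the actual headers of the supplementary file, and the abundance ratio is assumed to be expressed as an acrylamide/control fold change:

```python
import pandas as pd

# Hypothetical file and column names standing in for Supplementary Table S1.
df = pd.read_excel("supplementary_table_S1.xlsx")

fc, p = 1.5, 0.05
quantified = df[df["unique_peptides"] >= 2]   # minimum-peptide criterion
up = quantified[(quantified["abundance_ratio"] >= fc)
                & (quantified["p_value"] <= p)]
down = quantified[(quantified["abundance_ratio"] <= 1 / fc)
                  & (quantified["p_value"] <= p)]
print(f"{len(up)} proteins up-regulated, {len(down)} down-regulated")
```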
Animals
Adult (8-12 weeks of age) male Swiss mice were obtained from the University of Newcastle's Central Animal House and were housed under a controlled lighting regimen (12 h light, 12 h dark) at 21-22 °C and supplied food ad libitum .
Acrylamide exposure regimen
After an acclimatization period of 7 days, male mice received a daily intraperitoneal injection (100 μl) of acrylamide (25 mg/kg bw/day) or vehicle alone (phosphate buffered saline, PBS) for five consecutive days. Mice were euthanized via CO2 inhalation on the fifth day, 2-3 h following the final acrylamide injection. Prior to dissection, the mice were perfused with pre-warmed Tris-buffered saline (TBS) to eliminate blood contamination from their vasculature. The caput epididymis was dissected, cleaned of fat, and prepared for isolation of epididymal epithelial cells.
Purification of caput epididymal epithelial cells
Dissected caput epididymides were placed in a droplet of Biggers, Whitten, and Whittingham (BWW) [3] media composed of 91.5 mM NaCl, 4.6 mM KCl, 1.7 mM CaCl2·2H2O, 1.2 mM KH2PO4, 1.2 mM MgSO4·7H2O, 25 mM NaHCO3, 5.6 mM D-glucose, 0.27 mM sodium pyruvate, 44 mM sodium lactate, 5 U/mL penicillin, 5 μg/mL streptomycin, 20 mM HEPES buffer, and 3.0 mg/mL bovine serum albumin [BSA] (pH 7.4; osmolarity 300 mOsm/kg), and multiple incisions were made in the tissue with a razor blade to allow spermatozoa harbored in the epididymal lumen to disperse. The epididymal tissue was then washed to remove all spermatozoa by subjecting it to agitation and washing three times with sterile Tris-buffered saline (TBS). Tissue was subsequently digested with 100 μg/mL trypsin in TBS at 37 °C for 30 min with vigorous shaking (1000 rpm). Clumped tissue sections were collected by centrifugation (800 × g for 5 min) and resuspended in collagenase type II (1.0 mg/mL) in TBS. After incubation for 30 min with shaking at 37 °C, the resulting suspension was pipetted up and down to further homogenize it and ensure no observable tissue pieces remained. Once cell disaggregation was completed, the suspension was pelleted via centrifugation and resuspended in Dulbecco's Modified Eagle Medium (DMEM) culture medium containing sodium pyruvate (1 mM), 10% fetal bovine serum, 100 IU/mL penicillin, and 100 μg/mL streptomycin. The cell suspension was incubated in a 6-well plate at 32 °C for 4 h to allow non-epithelial cells to attach to the plate, leaving epididymal epithelial cell aggregates in suspension. Following this incubation, the supernatant was collected, filtered through a 70 μm membrane, washed, and resuspended in lysis buffer (100 μL of ice-cold 0.1 M Na2CO3; pH 11.3) supplemented with protease and phosphatase inhibitors (Complete EDTA-free; Roche, Basel, Switzerland) in preparation for proteomic processing. A single biological replicate was generated by pooling epithelial cells from five to six animals, and three such replicates were analyzed. An aliquot of each epithelial cell suspension was assessed by immunocytochemistry using the nuclear stain 4′,6-diamidino-2-phenylindole (DAPI) to ensure that each sample was free of spermatozoa contamination.
Protein digestion and labeling for comparative and quantitative proteomic analysis
Epididymal epithelial cell pellets were prepared for proteomic analysis as previously described [4]. Briefly, thawed cell suspensions were sonicated at 4 °C for 3 × 10 s intervals (100% output power) and subsequently incubated at 4 °C for 1 h. The protein concentration of each sample was determined using a bicinchoninic acid assay (Thermo Fisher Scientific) prior to dilution in a urea solution (6 M urea, 2 M thiourea). Before digestion, each protein sample was reduced and alkylated using 10 mM dithiothreitol (30 min, room temperature) and 20 mM iodoacetamide (30 min, room temperature, in the dark), respectively. Protein samples were digested with a 1:30 Lys-C/trypsin mix for 3 h at room temperature. The urea concentration was then reduced to below 1 M by addition of 50 mM triethylammonium bicarbonate (TEAB; pH 7.8), and samples were digested overnight at 37 °C. Lipids were precipitated using formic acid (2% v/v final concentration) and the resulting peptides were purified using desalting columns (Oasis PRiME HLB; Waters, Rydalmere, NSW, Australia). After digestion and desalting, 100 μg of peptides were labeled using tandem mass tag (TMT) reagents according to the manufacturer's protocol (TMT 10plex labels; control 1 = 126, control 2 = 127N, control 3 = 127C, acrylamide 1 = 129N, acrylamide 2 = 129C, acrylamide 3 = 130N; Thermo Fisher Scientific).
Proteomic data processing
Raw files were searched against the Mus musculus database downloaded from UniProt (25,260 sequences, downloaded 12th November 2019, including reviewed entries and their canonical and isoform sequences) using Proteome Discoverer (PD) software (version 2.4, Thermo Fisher Scientific) and the SEQUEST HT search algorithm. Searches were performed using the following parameters: trypsin was specified as the cleavage enzyme with a maximum of two missed cleavages, and precursor and fragment mass tolerances were set to 10 ppm and 0.02 Da, respectively. To evaluate the false discovery rate of peptide identification, the corresponding reversed database was interrogated using Percolator. For this, the target-decoy search approach was used to obtain q-values. The PD normalization node was utilized prior to statistical testing; it sums the peptide group abundances for each TMT label, determines the maximum sum across all labels, and takes the ratio between that maximum sum and each sample's sum as the normalization factor [4,5]. Fold changes between control and acrylamide sample groups were determined by PD, whereby the program calculates the protein ratios (abundance ratio) as the geometric median of all peptide group ratios. Statistical analyses were completed using a Student's t-test, with p ≤ 0.05 considered significant. The protein lists were exported from PD as Excel files, and the final list of proteins was refined to include proteins with a quantitative value in all three biological replicates and a minimum of 2 unique peptides.
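For readers reproducing the refinement step outside PD, the following sketch (Python, pandas/SciPy) filters an exported protein table and performs the per-protein significance test described above. It is a simplified stand-in under stated assumptions: the column names (ctrl_1, ..., acr_3, unique_peptides) are hypothetical placeholders for the exported Excel columns, and a plain mean-based log2 fold change replaces PD's geometric-median abundance ratio.

```python
# Hedged sketch: keep proteins quantified in all three replicates of both
# groups with >= 2 unique peptides, then test acrylamide vs. control at
# p <= 0.05. All column names are assumptions, not the PD export schema.
import numpy as np
import pandas as pd
from scipy import stats

def refine_protein_list(df: pd.DataFrame) -> pd.DataFrame:
    ctrl = ["ctrl_1", "ctrl_2", "ctrl_3"]   # hypothetical TMT channel columns
    acr = ["acr_1", "acr_2", "acr_3"]
    df = df.dropna(subset=ctrl + acr)       # quantified in all replicates
    df = df[df["unique_peptides"] >= 2].copy()
    # Simplified fold change; PD itself reports the geometric median
    # of peptide group ratios as the abundance ratio.
    df["log2_fc"] = np.log2(df[acr].mean(axis=1) / df[ctrl].mean(axis=1))
    df["p_value"] = stats.ttest_ind(df[acr], df[ctrl], axis=1).pvalue
    return df[df["p_value"] <= 0.05]
```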
Ethics Statements
All experiments were conducted with the approval of The University of Newcastle's Animal Care and Ethics Committee (ACEC, approval number A-2017-726). | 2022-03-10T16:30:09.786Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "9f1cd6fa5fcb60d4787b945e148a30637d3d65f1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.dib.2022.108032",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5f2749215fed53dc7fde27785af827abc34123ab",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257039083 | pes2o/s2orc | v3-fos-license | The Fréchet derivative of the tensor t-function
The tensor t-function, a formalism that generalizes the well-known concept of matrix functions to third-order tensors, is introduced in [K. Lund, The tensor t-function: a definition for functions of third-order tensors, Numer. Linear Algebra Appl. 27 (3), e2288]. In this work, we investigate properties of the Fréchet derivative of the tensor t-function and derive algorithms for its efficient numerical computation. Applications in condition number estimation and nuclear norm minimization are explored. Numerical experiments implemented by the t-Frechet toolbox hosted at https://gitlab.com/katlund/t-frechet illustrate properties of the t-function Fréchet derivative, as well as the efficiency and accuracy of the proposed algorithms.
Introduction
Functions of matrices play an important role in many areas of applied mathematics and scientific computing, e.g., in network analysis [9], exponential integrators [14], physical simulations [32] and statistical sampling [17]. This concept was generalized to functions of third-order tensors in [29], based on the tensor t-product formalism [5,21,22]; see also [31] for a further extension to so-called generalized tensor functions, which are functions of tensors with non-square faces. Functions (and generalized functions) of tensors have applications in deblurring of color images [34], tensor neural networks [30,33], multilinear dynamical systems [15], and the computation of the tensor nuclear norm [4].
For functions of matrices, the Fréchet derivative is a well-established object with applications in, e.g., condition number estimation [1], analysis of complex networks [8,36], and the solution of matrix optimization problems [37]. In this work, we consider the Fréchet derivative of functions of tensors, in order to generalize the above techniques to the tensor setting.
In addition to condition number estimation, the tensor Fréchet derivative has a number of potential applications, most notably in gradient descent procedures for nuclear norm minimization [3,16,23,25,27,28,38,39]. Thanks to close connections with bivariate functions (see, e.g., [24] for the matrix function case), computational approaches for the tensor Fréchet derivative are a stepping stone towards solutions of tensor Lyapunov and Sylvester equations [26]. Furthermore, a generalization of the network sensitivity measures discussed in [8,36] to multilayer networks (which can be represented as tensors) will also require a tensor Fréchet derivative. This paper is organized as follows. In Section 2, we collect several important definitions and results concerning matrix functions, the Fréchet derivative, and the tensor t-product. Section 3 summarizes key results on the tensor t-function and introduces definitions and properties of its Fréchet derivative L_f(A, C), including explicit Kronecker forms. In Section 4 we discuss a number of methods for computing L_f(A, C), drawing on well understood techniques such as Krylov subspace methods for matrix functions and fast Fourier transforms. We examine applications such as the condition number of t-functions and the gradient of the tensor nuclear norm in Section 5. Finally, in Section 6 we compare the performance of different algorithms for small- and medium-scale problems, and we summarize our findings in Section 7.
Foundations
We recall important concepts from matrix function theory, Fréchet derivatives, and the t-product formalism that form the basis of this work.
Functions of matrices
Functions of matrices can be defined in many different ways, the three most popular of which are based on the Jordan canonical form, Hermite interpolation polynomials, and the Cauchy integral formula; see [13, Section 1.2] for a thorough treatment. We recall two of the definitions that are particularly important for our work.
Let A ∈ C^{n×n} be a matrix with spectrum spec(A) := {λ_j}_{j=1,...,N}, where N ≤ n and the λ_j are distinct. Suppose that A has Jordan canonical form A = Z J Z^{-1} with J = diag(J_{m_1}(λ_{j_1}), . . . , J_{m_p}(λ_{j_ℓ})), (1) where J_m(λ_j) is an m × m Jordan block for an eigenvalue λ_j. Denote by n_j the index of λ_j, i.e., the size of the largest Jordan block associated to λ_j. (Note that eigenvalues may be repeated in the sequence {λ_{j_k}}_{k=1}^ℓ.) We then say that a function is defined on the spectrum of A if all the values f^{(k)}(λ_j) for k = 0, . . . , n_j − 1 and j = 1, . . . , N exist.
If f is defined on the spectrum of A with Jordan form (1), then f(A) := Z f(J) Z^{-1}, where f(J) := diag(f(J_{m_1}(λ_{j_1})), . . . , f(J_{m_p}(λ_{j_ℓ}))), and each f(J_m(λ_j)) is the m × m upper triangular Toeplitz matrix with f^{(k)}(λ_j)/k! on its kth superdiagonal, k = 0, . . . , m − 1. When A is diagonalizable with spec(A) = {λ_j}_{j=1,...,n} (possibly no longer distinct), i.e., A = X diag(λ_1, . . . , λ_n) X^{-1}, the Jordan form definition greatly simplifies to f(A) = X diag(f(λ_1), . . . , f(λ_n)) X^{-1}, where diag is the operator that maps an n-vector to its corresponding n × n diagonal matrix. When f is analytic on a region that contains spec(A), we can alternatively define f(A) via the Cauchy integral formula f(A) := (1/(2πi)) ∮_Γ f(ζ)(ζI − A)^{-1} dζ, where Γ is a path that winds around spec(A) exactly once. When f is analytic, so that both of the above definitions can be applied, the two definitions are equivalent and yield the same result; see [13, Theorem 1.12].
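As a quick numerical illustration of the diagonalizable case, the following minimal sketch (NumPy/SciPy, f = exp, assuming the randomly generated A happens to be diagonalizable, which holds here with probability one) evaluates f(A) through the spectrum and compares it against SciPy's expm:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
lam, X = np.linalg.eig(A)                         # A = X diag(lam) X^{-1}
fA = X @ np.diag(np.exp(lam)) @ np.linalg.inv(X)  # f(A) via the eigenvalues
print(np.allclose(fA, expm(A)))                   # True up to rounding
```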
The Fréchet derivative
In the most general case, the Fréchet derivative is defined for functions between normed vector spaces V, W (with respective norms ‖·‖_V, ‖·‖_W). Let U ⊂ V be an open subset and let f : U → W. The Fréchet derivative of f at A ∈ U, if it exists, is the bounded linear operator L : V → W such that f(A + E) − f(A) − L(E) = o(‖E‖_V). (2) When f : C^{n×n} → C^{n×n} is a function of a matrix, one usually denotes the Fréchet derivative of f at the matrix A as L_f(A, ·) (see, e.g., [13, Chapter 3]) and rephrases the condition (2) using the matrix two-norm and Landau notation as f(A + E) − f(A) − L_f(A, E) = o(‖E‖) (3) for an appropriate matrix norm ‖·‖. A sufficient condition for L_f(A, ·) to exist is that f is 2n − 1 times continuously differentiable on a region containing spec(A) (see [13, Theorem 3.8]). If the Fréchet derivative exists, it is unique.
In particular, the Fréchet derivative of a matrix function is guaranteed to exist if f is analytic on a region containing spec(A), and in this case L_f(A, E) has the integral representation L_f(A, E) = (1/(2πi)) ∮_Γ f(ζ)(ζI − A)^{-1} E (ζI − A)^{-1} dζ, (4) where Γ is again a path that winds around spec(A) exactly once; see, e.g., [13,19]. In addition to being of theoretical interest, the integral representation also forms the basis of efficient computational methods for approximating L_f(A, E), in particular when E is of low rank; see [18,19,24], as well as [35] for an extension to higher-order Fréchet derivatives. Related is the Gâteaux (or directional) derivative of f at A, defined as G_f(A, E) := lim_{t→0} (f(A + tE) − f(A))/t. If f is Fréchet-differentiable at A, all its directional derivatives exist and we have G_f(A, E) = L_f(A, E) for all E ∈ C^{n×n}. The converse is not necessarily true: even when all directional derivatives of f at A exist, f need not be Fréchet-differentiable at A.
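For f = exp, both objects are directly available numerically: SciPy ships expm_frechet, and the Gâteaux quotient above gives a cheap first-order consistency check. A minimal sketch:

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
E = rng.standard_normal((6, 6))
fA, L = expm_frechet(A, E)                  # returns (exp(A), L_exp(A, E))
for t in (1e-2, 1e-4, 1e-6):
    G = (expm(A + t * E) - expm(A)) / t     # Gateaux difference quotient
    print(t, np.linalg.norm(G - L) / np.linalg.norm(L))
```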
Tensors and the t-product
In the context of this work, a tensor is viewed as a multidimensional array, i.e., a generalization of the concept of vectors and matrices to higher dimensions. We restrict ourselves to third-order tensors, i.e., arrays in C^{n×m×p}, as the t-product introduced in [5,21,22] is only defined in this case. Figure 1 depicts the different "views" of a third-order tensor, which are useful for visualizing the forthcoming concepts. We define the (Frobenius) norm of a tensor A ∈ C^{n×m×p}, with A(i, j, k) denoting the ijkth entry, as ‖A‖ := (Σ_{i=1}^n Σ_{j=1}^m Σ_{k=1}^p |A(i, j, k)|²)^{1/2}, (5) which can be seen as an analogue of the matrix Frobenius norm ‖·‖_F. As the t-product formalism makes extensive use of block matrices, we introduce basic notations for these. Define the standard block unit vectors E_k^{np×n} := e_k^p ⊗ I_n, where e_k^p ∈ C^p is the kth canonical unit vector in C^p, and I_n is the n × n identity matrix. When the dimensions are clear from context, we drop the sub- or superscripts.
The tensor t-product [5,21,22] defines a way to multiply third-order tensors, based on viewing them as stacks of frontal slices (as in Figure 1(d)). Let A ∈ C^{n×m×p}, B ∈ C^{m×s×p} and denote their frontal faces, respectively, as A^{(k)} and B^{(k)}, k = 1, . . . , p. The operations unfold and fold transform the tensor A into a block vector of size np × m and vice versa, i.e., unfold(A) := [A^{(1)}; A^{(2)}; . . . ; A^{(p)}] ∈ C^{np×m} and fold(unfold(A)) := A.
Additionally, bcirc turns A into a block-circulant matrix of size np × mp, whose first block column is unfold(A) and whose subsequent block columns cyclically shift the blocks downwards, bcirc(A) := [A^{(1)} A^{(p)} · · · A^{(2)}; A^{(2)} A^{(1)} · · · A^{(3)}; . . . ; A^{(p)} A^{(p−1)} · · · A^{(1)}]. Note that the operators fold, unfold, and bcirc are linear. As a shorthand, we use the term n-block circulant matrix for a block circulant matrix with n × n blocks. Using the above operators, the t-product of the tensors A and B is given as A * B := fold(bcirc(A)unfold(B)).
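These operators translate directly into code. The sketch below (NumPy, with frontal faces stored along the last axis so that A[:, :, k] is the (k+1)st slice) uses local helper names that are our own, not the authors' toolbox API:

```python
import numpy as np

def unfold(A):                        # stack frontal faces: np x m
    n, m, p = A.shape
    return A.transpose(2, 0, 1).reshape(p * n, m)

def fold(M, shape):                   # inverse of unfold
    n, m, p = shape
    return M.reshape(p, n, m).transpose(1, 2, 0)

def bcirc(A):                         # block circulant matrix, np x mp
    n, m, p = A.shape
    return np.block([[A[:, :, (i - j) % p] for j in range(p)]
                     for i in range(p)])

def t_product(A, B):                  # A * B = fold(bcirc(A) unfold(B))
    n, _, p = A.shape
    return fold(bcirc(A) @ unfold(B), (n, B.shape[1], p))
```

With these helpers, identities such as A * I = A can be checked directly; they are reused in the sketches below.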
Many important concepts well-known for matrices, such as an identity element, inverses, transposition, and eigendecomposition, can also be defined for third-order tensors within the t-product framework; see [5,21,22]. Transposition of tensors is defined face-wise, i.e., A^H is the m × n × p tensor obtained by taking the conjugate transpose of each frontal slice of A and then reversing the order of the second through pth transposed slices. For tensors with n × n square faces, there is an identity tensor I_{n×n×p} ∈ C^{n×n×p}, whose first frontal slice is the n × n identity matrix I_n and whose remaining frontal slices are all zero, which fulfills A * I = I * A = A. We drop the subscript on I when the dimensions are clear from context. When n = m, a unique inverse tensor A^{-1} can be defined as expected: if there exists B ∈ C^{n×n×p} such that A * B = B * A = I, (6) then A^{-1} := B.
If A ∈ C^{n×n×p} has diagonalizable faces, i.e., A^{(k)} = X^{(k)} D^{(k)} (X^{(k)})^{-1} for all k = 1, . . . , p, a tensor eigendecomposition can be defined via A = X * D * X^{-1}, (7) where X and D are the tensors whose faces are X^{(k)} and D^{(k)}, respectively; vec(X)_i are the n × 1 × p lateral slices of X (see Figure 1(e)); and d_j are the 1 × 1 × p tube fibers of D (see Figure 1(a)).
Block circulant matrices and the discrete Fourier transform
It is well established that the discrete Fourier transform (DFT) unitarily diagonalizes circulant matrices [7], and in [21,22] a block version of this result is shown to hold. Namely, letting F_p denote the p × p DFT and ⊗ the Kronecker product, it follows for A ∈ C^{n×n×p} that (F_p ⊗ I_n) bcirc(A) (F_p ⊗ I_n)^H = blkdiag(D_1, . . . , D_p), (8) where each D_i, i = 1, . . . , p, is an n × n matrix, and blkdiag works similarly to diag, but instead places matrices on the diagonal. Another useful tool when working with block circulant matrices is the block circulant shift operator S_{n,p} := P_p ⊗ I_n, (9) where P_p is the p × p cyclic permutation matrix, which is clearly unitary. Using S_{n,p}, define the transformation S_{n,p}(M) := S_{n,p} M S_{n,p}^H. (10) A matrix M ∈ C^{np×np} is block circulant if and only if S_{n,p}(M) = M. In the following sections, when dimensions and block sizes are clear from the context, we omit the corresponding indices and just write S and S(·).
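Relation (8) is easy to verify numerically: the Fourier transform of A along its tubes produces exactly the diagonal blocks D_i. A small check (with a local bcirc helper as in the sketch above; the unitary DFT matrix is assembled from NumPy's fft):

```python
import numpy as np

def bcirc(A):
    n, m, p = A.shape
    return np.block([[A[:, :, (i - j) % p] for j in range(p)]
                     for i in range(p)])

n, p = 3, 4
A = np.random.default_rng(2).standard_normal((n, n, p))
F = np.fft.fft(np.eye(p)) / np.sqrt(p)            # unitary p x p DFT matrix
Fn = np.kron(F, np.eye(n))
M = Fn @ bcirc(A) @ Fn.conj().T                   # should be block diagonal
D = np.fft.fft(A, axis=2)                         # D_i = D[:, :, i]
for i in range(p):
    assert np.allclose(M[i*n:(i+1)*n, i*n:(i+1)*n], D[:, :, i])
```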
The tensor t-function
In [29], a definition for functions of third-order tensors based on the t-product is given, generalizing the usual concept of matrix functions discussed in Section 2.1. Precisely, the action of the tensor t-function f of A ∈ C^{n×n×p} on another tensor B ∈ C^{n×s×p} is defined as f(A) * B := fold(f(bcirc(A)) · unfold(B)). (11)
By taking B to be the identity tensor, B = I_{n×n×p}, one obtains the t-function f(A) via f(A) := fold(f(bcirc(A)) · unfold(I_{n×n×p})). (12) Note in particular that when f(z) = z^{-1}, we recover the definition of the tensor inverse (6); see [29, Theorem 5(iv)]. The definitions (11) and (12) boil down to evaluating the action of a matrix function (in the usual sense) on a block vector. The t-function therefore inherits many useful properties from matrix functions.
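Definition (12) is a one-liner once bcirc and fold are in place; the sketch below instantiates it for f = exp with SciPy's dense expm (a reference implementation for small np, not an efficient method):

```python
import numpy as np
from scipy.linalg import expm

def bcirc(A):
    n, m, p = A.shape
    return np.block([[A[:, :, (i - j) % p] for j in range(p)]
                     for i in range(p)])

def t_exp(A):
    n, _, p = A.shape
    E1 = np.kron(np.eye(p)[:, :1], np.eye(n))     # unfold of identity tensor
    V = expm(bcirc(A)) @ E1                       # f(bcirc(A)) unfold(I)
    return V.reshape(p, n, n).transpose(1, 2, 0)  # fold back to n x n x p
```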
Theorem 1 (Theorem 6 in [29]). Let A ∈ C^{n×n×p}, and let f : C → C be defined on a region in the complex plane containing the spectrum of bcirc(A). For part (iv), assume that A has an eigendecomposition as in equation (7), with A * vec(X)_i = D * vec(X)_i = vec(X)_i * d_i, i = 1, . . . , n. Then it holds that f(A) * vec(X)_i = vec(X)_i * f(d_i) for all i = 1, . . . , n.
The derivative of the tensor t-function
In view of (12), which defines the tensor t-function in terms of a matrix function of a block-circulant matrix, it appears natural to define its Fréchet derivative accordingly. Lemma 1. Let A ∈ C^{n×n×p} and let f be 2np − 1 times continuously differentiable on a region containing spec(bcirc(A)). Then the Fréchet derivative of f at A exists, and for any C ∈ C^{n×n×p}, L_f(A, C) = fold(L_f(bcirc(A), bcirc(C)) · unfold(I)). (13) Proof. The operator L_f(bcirc(A), ·) is the Fréchet derivative of f at a matrix of size np × np, so its existence is guaranteed by [13, Theorem 3.8] under the assumptions of the lemma. Now consider the difference f(A + C) − f(A). (14) Using linearity of bcirc, fold, and matrix multiplication, we can rewrite (14) as f(A + C) − f(A) = fold((f(bcirc(A) + bcirc(C)) − f(bcirc(A))) · unfold(I)) = fold((L_f(bcirc(A), bcirc(C)) + o(‖bcirc(C)‖)) · unfold(I)), where we have used definition (3) in the second-to-last equality. Due to the special structure of bcirc(C), each of its np × n block-columns contains precisely the p frontal faces of C, so that ‖bcirc(C)‖_F = √p ‖C‖ and the remainder term is also o(‖C‖). If the assumptions of Lemma 1 are fulfilled, we also say that f is t-Fréchet differentiable at A. A similar relation holds for the Gâteaux derivative.
Proof. The proof follows directly from the definition of the Gâteaux derivative, by inserting the definition (12) of the tensor t-function and again exploiting the linearity of fold and bcirc.
Consequently, we find G_f(A, C) = fold(G_f(bcirc(A), bcirc(C)) · unfold(I)), which is exactly (16).
Remark 1.
As in the matrix case, when f is Fréchet-differentiable at A, then its Fréchet and Gâteaux derivatives coincide: L_f(A, C) = G_f(A, C) for all C ∈ C^{n×n×p}.
Remark 2.
In the derivation of the Gâteaux derivative, one can observe that when A, C ∈ C^{np×np} are both n-block circulant matrices, then the limit of the difference quotient, and hence L_f(A, C), is again an n-block circulant matrix.
Properties of the t-Fréchet derivative
As it is defined in terms of the Fréchet derivative of a matrix function, the t-Fréchet derivative (13) also inherits many of the properties of the matrix function derivative, which we collect in the following lemma.
Lemma 2.
Let A ∈ C^{n×n×p} and let g_1 and g_2 be t-Fréchet differentiable at A. Then (i) f_1 = αg_1 + βg_2 is t-Fréchet differentiable at A for all α, β ∈ C, and L_{f_1}(A, C) = αL_{g_1}(A, C) + βL_{g_2}(A, C); (ii) f_2 = g_1 g_2 is t-Fréchet differentiable at A, and L_{f_2}(A, C) = L_{g_1}(A, C) * g_2(A) + g_1(A) * L_{g_2}(A, C); (iii) f_3 = g_1 ∘ g_2 is t-Fréchet differentiable at A, and L_{f_3}(A, C) = L_{g_1}(g_2(A), L_{g_2}(A, C)). Proof. Let A, C denote bcirc(A), bcirc(C), respectively. For part (i), observe that by (13), we have L_{f_1}(A, C) = fold(L_{f_1}(A, C) unfold(I)) = fold((αL_{g_1}(A, C) + βL_{g_2}(A, C)) unfold(I)) = αL_{g_1}(A, C) + βL_{g_2}(A, C), where the second equality follows from [13, Theorem 3.2] and the third equality follows from the linearity of fold. In a completely analogous fashion, parts (ii) and (iii) follow from their respective matrix function counterparts [13, Theorem 3.3 & Theorem 3.4].
We also have an analogous relation to the integral representation (4).
Lemma 3. Let f be analytic on a region containing spec(bcirc(A)). Then L_f(A, C) = (1/(2πi)) ∮_Γ f(ζ) (ζI − A)^{-1} * C * (ζI − A)^{-1} dζ, where the inverse is defined as in (6).
Proof. Let A, C denote bcirc(A), bcirc(C), respectively. By (4) applied to L_f(A, C) and the linearity of fold, it follows that L_f(A, C) = fold((1/(2πi)) ∮_Γ f(ζ) A_ζ^{-1} C A_ζ^{-1} dζ · unfold(I)), (17) where A_ζ := bcirc(ζI − A). Noting that A_ζ^{-1} = bcirc((ζI − A)^{-1}), (17) becomes L_f(A, C) = (1/(2πi)) ∮_Γ f(ζ) fold(bcirc((ζI − A)^{-1}) bcirc(C) bcirc((ζI − A)^{-1}) · unfold(I)) dζ = (1/(2πi)) ∮_Γ f(ζ) (ζI − A)^{-1} * C * (ζI − A)^{-1} dζ.
Explicit representation of the t-Fréchet derivative
An intuitive way to compute L_f(A, C) for a particular direction tensor C is based on a well known relation for the matrix Fréchet derivative. For matrices A, C ∈ C^{np×np}, if f is 2np − 1 times continuously differentiable on a region containing spec(A), we have f([A C; O_{np×np} A]) = [f(A) L_f(A, C); O_{np×np} f(A)], (19) where O_{np×np} denotes an np × np matrix of zeros; see [13, eq. (3.16)]. Thus, L_f(A, C) can be found by first evaluating f at a 2np × 2np block upper triangular matrix and then extracting the top-right block. In the context of the Fréchet derivative of the t-function, (19) turns into L_f(A, C) = fold((f([A C; O A]))_{1:np, np+1:2np} · unfold(I)), (20) where A = bcirc(A), C = bcirc(C), and we have used (13) together with (19). We can thus explicitly write the Fréchet derivative of the t-function f(A) in the direction C in terms of the product of a matrix function acting on a block vector: applying f([A C; O A]) to the block vector [O; unfold(I)] yields a 2np × n block vector wherein the upper half is exactly unfold(L_f(A, C)). [Algorithm 1, the pseudocode for assembling the Kronecker form K_f(A) column by column from such evaluations, appears here.]
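A direct numerical sketch of (19)-(20) for f = exp follows (dense and O((np)³), so intended only for small problems; the helper names are our local assumptions, not the authors' toolbox API):

```python
import numpy as np
from scipy.linalg import expm

def bcirc(A):
    n, m, p = A.shape
    return np.block([[A[:, :, (i - j) % p] for j in range(p)]
                     for i in range(p)])

def t_frechet_exp(A, C):
    """L_exp(A, C) via the 2np x 2np block-triangular trick (19)-(20)."""
    n, _, p = A.shape
    Ab, Cb = bcirc(A), bcirc(C)
    M = np.block([[Ab, Cb], [np.zeros_like(Ab), Ab]])
    L = expm(M)[: n * p, n * p :]                 # L_exp(bcirc(A), bcirc(C))
    E1 = np.kron(np.eye(p)[:, :1], np.eye(n))     # unfold of identity tensor
    return (L @ E1).reshape(p, n, n).transpose(1, 2, 0)
```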
Kronecker forms of the t-Fréchet derivative
The Fréchet derivative induces a linear mapping K_f(A) ∈ C^{n²p×n²p} with vec(L_f(A, C)) = K_f(A) vec(C), (21) where vec(·) stacks the entries of a tensor into a column vector. The matrix K_f(A) is also called the Kronecker form of the Fréchet derivative. (See, e.g., [13, Section 3.2] for the matrix function case.) For computing the Kronecker form, one can simply evaluate the Fréchet derivative L_f(A, ·) on all tensors of the canonical basis {E_ijk : i, j = 1, . . . , n, k = 1, . . . , p} of C^{n×n×p} (i.e., E_ijk is a tensor with entry one at position (i, j, k) and all other entries zero). We summarize this discussion in the following definition.
A simple computational procedure for forming the Kronecker form is outlined in Algorithm 1, where we use MATLAB-style colon notation, i.e., a : b means all indices between (and including) a and b.
Remark 3. We note that the computational cost of Algorithm 1 is extremely high, making it infeasible even for medium scale problems (a situation that is similar already for matrix functions): computing a single Fréchet derivative L_f(A, E_ijk) using the relation (20) and a dense matrix function algorithm for evaluating f has a cost of O(n³p³) for most practically relevant functions f. Then, forming K_f(A) via Algorithm 1 costs O(n⁵p⁴) flops and requires O(n⁴p²) storage. Thus, the Kronecker form can typically not be used in actual computations, but it is a useful theoretical tool, e.g., for defining condition numbers; see Section 5.1.
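For completeness, the column-by-column assembly takes only a few lines once a t-Fréchet routine (such as the block-triangular sketch above) is available. The loop order below follows the vectorization index c = i + (k − 1)n + (j − 1)np used in Lemma 4 below; this vec convention is our assumption about the intended ordering.

```python
import numpy as np

def vec(T):
    """vec(T) := vec(unfold(T)) in column-major order."""
    n, _, p = T.shape
    return T.transpose(2, 0, 1).reshape(p * n, n).flatten(order="F")

def kron_form(A, t_frechet):
    """Assemble K_f(A) by applying L_f(A, .) to every unit tensor E_ijk.
    Cost grows like O(n^5 p^4); for tiny illustrative problems only."""
    n, _, p = A.shape
    K = np.zeros((n * n * p, n * n * p))
    col = 0
    for j in range(n):
        for k in range(p):
            for i in range(n):
                E = np.zeros((n, n, p))
                E[i, j, k] = 1.0
                K[:, col] = vec(t_frechet(A, E))
                col += 1
    return K
```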
The tensor t-function is intimately related to matrix functions of block-circulant matrices. It is therefore interesting to examine the relationship between the Kronecker form K_f(A) of the t-Fréchet derivative and the Kronecker form K_f(bcirc(A)) of the Fréchet derivative of the matrix function f(bcirc(A)). Note that K_f(bcirc(A)) ∈ C^{n²p²×n²p²}, so that both matrices cannot coincide, but it turns out that they are still highly related. To make the connection precise, we first need the following auxiliary result.
Proposition 2. Let E_ijk be the unit tensor with a 1 only in position (i, j, k) and zeroes everywhere else. Then, with E_IJ ∈ C^{np×np} as the matrix that is zero everywhere except for a 1 at position (I, J) = (i + (k − 1)n, j),¹ it holds that bcirc(E_ijk) = Σ_{ℓ=0}^{p−1} S^ℓ(E_IJ). (22) Proof. The result immediately follows by noting that (I, J) as defined above is one particular nonzero entry of bcirc(E_ijk), and, by the definition of S, the sequence of matrices S^ℓ(E_IJ) cyclically moves through all other of its nonzero entries.² Due to the linearity of the Fréchet derivative in its second argument, we thus have that L_f(bcirc(A), bcirc(E_ijk)) = Σ_{ℓ=0}^{p−1} L_f(bcirc(A), S^ℓ(E_IJ)), (23) with E_IJ as defined in Proposition 2. The Fréchet derivatives on the right-hand side of (23), when vectorized, correspond to p columns of the Kronecker form K_f(bcirc(A)). Further, by (13) and (22), the first n²p entries of the left-hand side of (23) correspond to a column of K_f(A). Thus, each column of K_f(A) equals the sum of (the first n²p entries of) p columns of K_f(bcirc(A)), and each column of K_f(bcirc(A)) appears in exactly one of those sums. The indices of the columns of K_f(bcirc(A)) that contribute to a particular column of K_f(A) can be obtained by carefully inspecting how the index (I, J) is moved around under the cyclical shifts S^ℓ. Lemma 4. Let A ∈ C^{n×n×p}, let f be analytic on a region containing the spectrum of bcirc(A), and let K_1 := K_f(A) and K_2 := K_f(bcirc(A)) denote the Kronecker forms of the Fréchet derivatives of the t-function f(A) and the matrix function f(bcirc(A)), respectively. Then, for c := i + (k − 1)n + (j − 1)np, column c of K_1 is obtained by summing the first n²p entries of the p columns of K_2 that correspond to the shifted unit matrices S^ℓ(E_IJ), ℓ = 0, . . . , p − 1, and no other columns of K_2 contribute otherwise.
Proof. The result follows from Proposition 2 by observing how S acts on a unit matrix E_IJ. The application of S cyclically shifts each block of the matrix one block column to the right and one block row down. Thus, as all blocks are n × n, as long as the single nonzero entry of E_IJ is not in the last block row or column, it is moved by exactly n entries to the right and n entries down, corresponding to n²p + n entries when vectorizing. Due to our choice of E_IJ in Proposition 2, its nonzero entry lies in the kth block of the first block column. Therefore, this nonzero entry reaches the last block row after p − k applications of S and then moves to the first block row with the (p − k + 1)st application. Thus, it moves n positions to the right and n(p − 1) positions up. This corresponds to n²p − np + 1 entries after vectorization.
To verify that Lemma 4 is indeed true and to get a better handle on the rather unintuitive indexing scheme, the reader is encouraged to run and examine the script test t func cond.m in the t-frechet code repository described in Section 6.
¹ In other words, E_IJ = e_1^T ⊗ unfold(E_ijk) with e_1 ∈ C^p, i.e., E_IJ is the matrix that is zero everywhere except its first np × n block column, which is unfold(E_ijk). ² As S^p(E_IJ) = E_IJ, one could also start with (I, J) corresponding to any other particular nonzero entry of bcirc(E_ijk), not necessarily the one given in the assertion.
A further interesting observation is obtained by viewing the relations we have derived so far "in the opposite direction." It then turns out that it is sufficient to compute n² Fréchet derivatives in order to obtain all columns of the n²p² × n²p² matrix K_f(bcirc(A)) (and thus, in light of Lemma 4, all columns of K_f(A) as well). This is due to the following result.
Proposition 3. Let A ∈ C^{n×n×p} and let f be analytic on a region containing spec(bcirc(A)). Further, let S denote the shift matrix defined in (9) and let E_IJ ∈ C^{np×np} be a matrix with 1 only in position (I, J) and 0 everywhere else. Then, for any integers ℓ_1, ℓ_2 ≥ 0, L_f(bcirc(A), S^{ℓ_1} E_IJ (S^T)^{ℓ_2}) = S^{ℓ_1} L_f(bcirc(A), E_IJ) (S^T)^{ℓ_2}. Proof. By [13, Eq. (3.24)], for any C ∈ C^{np×np} we have the relation L_f(bcirc(A), C) = Σ_{α=1}^∞ a_α Σ_{j=1}^α bcirc(A)^{j−1} C bcirc(A)^{α−j}, (24) using the power series representation f(z) = Σ_{α=0}^∞ a_α z^α. Inserting S^{ℓ_1} E_IJ (S^T)^{ℓ_2} instead of C in relation (24), we find that L_f(bcirc(A), S^{ℓ_1} E_IJ (S^T)^{ℓ_2}) = Σ_{α=1}^∞ a_α Σ_{j=1}^α bcirc(A)^{j−1} S^{ℓ_1} E_IJ (S^T)^{ℓ_2} bcirc(A)^{α−j} = S^{ℓ_1} (Σ_{α=1}^∞ a_α Σ_{j=1}^α bcirc(A)^{j−1} E_IJ bcirc(A)^{α−j}) (S^T)^{ℓ_2}, where we have used the fact that powers of block circulant matrices are block circulant (and thus invariant under S), and the fact that S is unitary, so that these powers commute with S^{ℓ_1} and (S^T)^{ℓ_2}.
As a special case, by choosing ℓ_1 = ℓ_2 =: ℓ, Proposition 3 states that the shift operator S defined in (10) can be "pulled out" of the Fréchet derivative, L_f(bcirc(A), S^ℓ(E_IJ)) = S^ℓ(L_f(bcirc(A), E_IJ)). In particular, choosing ℓ_1 = 0 or ℓ_2 = 0 (and denoting the other one simply by ℓ), Proposition 3 reveals that all Fréchet derivatives L_f(bcirc(A), S^ℓ E_IJ) and L_f(bcirc(A), E_IJ (S^T)^ℓ) have exactly the same entries for any ℓ = 0, . . . , p − 1, just shifted. It thus suffices to compute one of these Fréchet derivatives and then obtain the others essentially for free by applying S and/or S^T. In total, it is enough to compute L_f(bcirc(A), E_IJ) for I, J = 1, . . . , n, as all other canonical basis matrices E_IJ can be generated by appropriate shifts.
Remark 4. For "tubal vectors" A ∈ C 1×1×p , as they appear in certain tensor neural networks [30,33], the preceding discussion implies that all columns of K f (A) ∈ C p×p are shifted copies of the same vector. Thus, in this case, K f (A) is a circulant matrix.
Computing the t-Fréchet derivative
The primary challenge in computing with tensors is the so-called "curse of dimensionality," to which the t-product formalism is not immune. At the same time, due to the equivalence with functions of block circulant matrices, the tools at our disposal are largely limited by what has been developed for matrix functions in general. We discuss viable approaches, along with potential tricks for reducing the overall complexity of computing the t-Fréchet derivative.
A basic block Krylov subspace method
We recall from (17) in the proof of Lemma 3 that L_f(A, C) = fold((1/(2πi)) ∮_Γ f(ζ) A_ζ^{-1} C A_ζ^{-1} dζ · unfold(I)), (25) where A_ζ := bcirc(ζI − A) and C := bcirc(C). The integral term appearing in (25) can be approximated by a block Krylov algorithm when the direction term C is of low rank and can thus be written in the form C = C_1 C_2^H with C_1, C_2 ∈ C^{np×r}, r ≪ np. Remark 5. As an illustration, let us focus on the special case that C is a rank-one tensor in the sense of the CP tensor format, i.e., that each entry fulfills C(i, j, k) = u(i) v(j) w(k) for vectors u, v ∈ C^n and w ∈ C^p. In this case, the kth frontal face of C is of the form C^{(k)} = w(k) u v^T and thus bcirc(C) = circ(w) ⊗ (u v^T), (26) where circ(w) denotes the p × p circulant matrix with first column w. The matrix (26) has rank at most p, and the low rank factors can be given explicitly in terms of u, v, w.
Of particular interest is the case in which all three vectors u, v, w are canonical unit vectors, which arises, e.g., when measuring the sensitivity of f (A) with respect to changes in one specific entry of A [8,36]. Also interesting is when just two of the three vectors are unit vectors, which would occur when measuring the sensitivity with respect to changes in the same entry across all frontal, horizontal, or lateral slices of A.
We define a block Krylov subspace as the block span K_d(A, C_1) := span{C_1, A C_1, A² C_1, . . . , A^{d−1} C_1}, (27) where d is a small positive integer denoting the iteration index. For more details on the theory and implementation of block Krylov subspaces, see, e.g., [10,12].
The Krylov subspace algorithm from [18,24] for approximating (25) now proceeds by building orthonormal bases V_d, W_d ∈ C^{np×dr} of the two block Krylov subspaces K_d(A, C_1) and K_d(A^H, C_2), with A := bcirc(A), yielding block Arnoldi decompositions of the form A V_d = V_{d+1} H̲_d and A^H W_d = W_{d+1} G̲_d with block upper Hessenberg matrices H̲_d, G̲_d. In light of (25), the final approximation for the Fréchet derivative is then obtained by projecting onto these two subspaces, i.e., by evaluating f on a small block triangular matrix assembled from the Hessenberg matrices and the projected direction term V_d^H C_1 C_2^H W_d, in analogy to (19).
Using the DFT to improve parallelism
Consider again (20), specifically the argument of f. Thanks to (8) and Theorem 1(iii), we can write (F_p ⊗ I_n) bcirc(A) (F_p ⊗ I_n)^H = D_A and (F_p ⊗ I_n) bcirc(C) (F_p ⊗ I_n)^H = D_C, (28) with D_A = blkdiag(D_1^A, . . . , D_p^A), D_C = blkdiag(D_1^C, . . . , D_p^C), and F := F_p ⊗ I_n. Using (18), we can rewrite (28) as L_f(bcirc(A), bcirc(C)) = F^H L_f(D_A, D_C) F. (29) The following theorem, which can be seen as a Daleckiĭ-Kreĭn-type result for block diagonal matrices, will be helpful. Theorem 2. Let A = blkdiag(A_1, . . . , A_p) and C = blkdiag(C_1, . . . , C_p) with A_i, C_i ∈ C^{n×n}, and let f be analytic on a region containing spec(A). Then L_f(A, C) = blkdiag(L_f(A_1, C_1), . . . , L_f(A_p, C_p)).
Proof. When A and C are block diagonal, then for any k ≥ 1, we have Σ_{j=1}^k A^{j−1} C A^{k−j} = blkdiag(M_1^{(k)}, . . . , M_p^{(k)}), (31) where M_i^{(k)} := Σ_{j=1}^k A_i^{j−1} C_i A_i^{k−j}. (32) Let f(z) = Σ_k a_k z^k be the power series representation of the analytic function f. Then, by (31)-(32), we have L_f(A, C) = Σ_k a_k Σ_{j=1}^k A^{j−1} C A^{k−j} = L, (33) where L = blkdiag(L_1, . . . , L_p) and L_i = Σ_k a_k Σ_{j=1}^k A_i^{j−1} C_i A_i^{k−j}. (34) By [13, Eq. (3.24)], the right-hand side of (34) coincides with L_f(A_i, C_i) and by (18), the matrix L in (33) equals L_f(A, C), thus completing the proof.
Corollary 1. Let A, C ∈ C^{n×n×p} and let f be 2np − 1 times continuously differentiable on a region containing spec(bcirc(A)). Further, let D_A, D_C, and F be as in (28). Then L_f(A, C) = fold((1/√p) F^H [L_1; L_2; . . . ; L_p]), (35) where the diagonal blocks L_i, i = 1, . . . , p, are given by L_i = L_f(D_i^A, D_i^C). (36) Proof. Under the assumptions of the theorem, the existence of the Fréchet derivative is guaranteed by Lemma 1. By combining (20) with (29), we have L_f(A, C) = fold(F^H L_f(D_A, D_C) F · unfold(I)). (37) According to Theorem 2, we have L_f(D_A, D_C) = blkdiag(L_1, . . . , L_p), where the diagonal blocks are given by (36). Further, by the definition of F, it holds that F · unfold(I) = (F_p e_1^p) ⊗ I_n. (38) Due to the block diagonal structure of L_f(D_A, D_C), we have L_f(D_A, D_C) · F · unfold(I) = (1/√p) [L_1; L_2; . . . ; L_p], (39) where we have used that the DFT matrix fulfills F_p e_1^p = (1/√p) 1. Inserting (38) and (39) into (37) completes the proof. Corollary 1 shows that by applying a DFT, the computation of the t-Fréchet derivative can be decoupled into the evaluation of p Fréchet derivatives of n × n matrices that are completely independent of one another, thus giving rise to an embarrassingly parallel method. However, as the matrices D_i^A, D_i^C occurring in (36) are in general dense and unstructured, computing these Fréchet derivatives is only feasible for moderate values of n (but possibly large p).
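A sketch of (35)-(36) for f = exp: transform the faces of A and C with the FFT, solve p independent n × n Fréchet problems, and transform back. It assumes that SciPy's expm_frechet accepts the complex Fourier-face matrices and that A and C are real, so that the back-transform is real; results can be cross-checked against the block-triangular sketch above.

```python
import numpy as np
from scipy.linalg import expm_frechet

def t_frechet_exp_dft(A, C):
    n, _, p = A.shape
    DA = np.fft.fft(A, axis=2)           # faces of blkdiag(D_1, ..., D_p)
    DC = np.fft.fft(C, axis=2)
    L = np.empty((n, n, p), dtype=complex)
    for i in range(p):                   # embarrassingly parallel subproblems
        L[:, :, i] = expm_frechet(DA[:, :, i], DC[:, :, i])[1]
    return np.fft.ifft(L, axis=2).real   # real for real input tensors
```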
Applications of the t-Fréchet derivative
In this section, we briefly discuss two applications of the t-Fréchet formalism, namely condition number estimation for tensor functions and the gradient of the tensor nuclear norm.
The condition number of the t-function
In practical applications, one often works with noisy or uncertain data, and additionally any computation in floating point arithmetic introduces rounding errors. Therefore, when working with the tensor t-function in practice, it is very important to understand how sensitive it is to perturbations in the data. This is measured by condition numbers.
The (absolute) condition number of the t-function can be defined by simply extending the well-known concept of condition number of scalar and matrix functions (see, e.g., [13, Chapter 3]), yielding cond_abs(f, A) := lim_{ε→0} sup_{‖C‖≤ε} ‖f(A + C) − f(A)‖ / ε, where for our setting, ‖·‖ denotes the norm (5), but can in principle also be any other tensor norm. A relative condition number can be readily defined as cond_rel(f, A) := cond_abs(f, A) ‖A‖ / ‖f(A)‖.
Completely analogously to the matrix function case, the condition number of the t-function can be related to the norm of its Fréchet derivative.
Lemma 5. Let f and A be such that L_f(A, ·) exists and denote L_f(A) := max_{C≠0} ‖L_f(A, C)‖ / ‖C‖. (40) Then the absolute and relative condition numbers of f(A) are given by cond_abs(f, A) = L_f(A) and cond_rel(f, A) = L_f(A) ‖A‖ / ‖f(A)‖. Proof. The proof follows by using exactly the same line of argument as in the proof of [13, Theorem 3.1] for the matrix function case, which only requires linearity of the Fréchet derivative and working in a finite-dimensional space and thus holds verbatim in our setting.
Lemma 5 relates the condition number of the t-Fréchet derivative to the tensor-operator norm L_f(A), the computation of which might not be immediately clear (as the quantities on the right-hand side of (40) are third-order tensors). The next result relates it to the spectral norm ‖K_f(A)‖₂ of the Kronecker form.
Lemma 6. Let f and A be such that L_f(A, ·) exists and denote by K_f(A) the Kronecker form of the Fréchet derivative, as defined in (21). Then L_f(A) = ‖K_f(A)‖₂. (41) Proof. By the definition of the tensor norm (5), it is clear that ‖B‖ = ‖vec(B)‖₂ for any tensor B. Thus L_f(A) = max_{C≠0} ‖L_f(A, C)‖ / ‖C‖ = max_{C≠0} ‖K_f(A) vec(C)‖₂ / ‖vec(C)‖₂ = ‖K_f(A)‖₂. For realistic problem sizes, it will typically not be feasible to compute the condition number of f(A) via (41). This is already the case for functions of n × n matrices, and it becomes even more prohibitive in the tensor setting. As outlined at the end of Section 3.4, simply forming the Kronecker form K_f(A) has cost O(n⁵p⁴) and requires O(n⁴p²) storage. Even for moderate values of n and p, this is typically not possible.
Instead, we need to approximate the condition number. As a rough estimate is usually sufficient, a few steps of power iteration typically give a satisfactory result, as one is mainly interested in the order of magnitude of the condition number, so that more than one significant digit is seldom needed. Algorithm 2 is a straightforward adaptation of [13, Algorithm 3.20], which computes an estimate of ‖K_f(A)‖₂ by applying power iteration to the Hermitian matrix K_f(A)^H K_f(A), exploiting that a matrix vector multiplication K_f(A)v is equivalent to the evaluation of L_f(A, unvec(v)), where unvec(v) maps the vector v to an unstacked tensor of the same size as A. In line 4, the function f̄ is defined via f̄(z) := conj(f(conj(z))). Remark 6. As Algorithm 2 boils down to a matrix power iteration, its asymptotic convergence rate is linear and depends on the magnitude of the ratio between the eigenvalue of largest and second largest magnitude of the Hermitian matrix K_f(A)^H K_f(A); see e.g., [11, Eq. (7.3.5)]. It is quite difficult, however, to give meaningful a priori bounds on this ratio, as we do not have explicit formulas for the eigenvalues or singular values of K_f(A) available (in terms of spectral quantities related to A), and deriving such relations is well beyond the scope of this work.
Also, note that typically only O(1) iterations of Algorithm 2 are sufficient due to the rather low accuracy requirements in condition number estimation; see our experiments reported in Section 6.3 as well as, e.g., [13,20] for the matrix function case. In these early iterations, the asymptotic convergence rate will likely not be descriptive concerning the actual behavior of the method, as it does not capture the fast reduction of contributions from eigenvectors corresponding to small eigenvalues.
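In code, the power iteration needs nothing beyond repeated t-Fréchet evaluations. The sketch below assumes, as Algorithm 2 does, that the adjoint of C ↦ L_f(A, C) can be applied by evaluating the derivative at the t-transpose of A (for f = exp, the auxiliary function f̄ equals f, since exp has real Taylor coefficients); it is a rough order-of-magnitude estimator only.

```python
import numpy as np

def t_transpose(A):
    """Tensor t-transpose: conjugate-transpose each face, then reverse
    the order of faces 2..p."""
    p = A.shape[2]
    order = [0] + list(range(p - 1, 0, -1))
    return A.conj().transpose(1, 0, 2)[:, :, order]

def norm_Kf_estimate(A, t_frechet, iters=4, seed=0):
    """Power iteration on K_f(A)^H K_f(A); t_frechet(A, C) must evaluate
    L_f(A, C). Returns an estimate of ||K_f(A)||_2 = L_f(A)."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(A.shape)
    Z /= np.linalg.norm(Z)
    for _ in range(iters):
        W = t_frechet(A, Z)                  # apply K_f(A)
        Z = t_frechet(t_transpose(A), W)     # apply K_f(A)^H (f real)
        Z /= np.linalg.norm(Z)
    return np.linalg.norm(t_frechet(A, Z))
```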
Algorithm 2 is necessarily sequential with respect to calls of L_f(A, ·). An alternative algorithm that would lend itself naturally to parallelization (especially in the case that n ≪ p) stems from Lemma 4 and Proposition 3, and is a variant implementation of Algorithm 1. In the first phase, K_f(bcirc(A)) is computed, but in a reduced fashion, whereby only n² applications of L_f(bcirc(A), ·) are required, thanks to the shift relation proven in Proposition 2. This first step can be trivially parallelized, as it is known a priori exactly on which unit matrices L_f(bcirc(A), ·) must be evaluated. [Algorithm 3, the pseudocode for this efficient assembly of the Kronecker form, appears here.] Comparing the structured and unstructured condition numbers yields cond_abs(f, A) ≤ cond_abs(f, bcirc(A)), (42) where cond_abs(f, bcirc(A)) denotes the matrix function condition number in the Frobenius norm: the left-hand side of (42), when interpreted in terms of the underlying matrix function, only allows structured, block-circulant perturbations, while the right-hand side measures conditioning with respect to any perturbation. Often, such structured condition numbers can be significantly lower than unstructured condition numbers; see, e.g., [2,6]. In our experiments, we have actually observed equality in (42) in most test cases, at least up to machine precision, but it is also possible to construct examples in which the two condition numbers disagree by a large margin; see, e.g., the test script test cond counter ex.m in our code suite. It might be an interesting question for further research to find out whether there are conditions on f and/or A that guarantee that equality holds in (42).
The gradient of the tensor nuclear norm
In this section, we highlight an example application of how our framework for the t-Fréchet derivative can be useful for deriving certain theoretical results in a rather straightforward fashion. The nuclear norm of a tensor is typically defined in terms of a tensor singular value decomposition (see, e.g., [27]), but it was recently shown that it can also be computed in terms of the t-square root as ‖A‖_⋆ = trace^{(1)}((A^H * A)^{1/2}), where trace^{(1)} denotes the trace of the first frontal slice; see [4, Lemma 6]. Tensor nuclear norm minimization is an important tool in image completion, low-rank tensor completion, denoising, seismic data reconstruction, and principal component analysis; see, e.g., [3,16,23,25,27,28,38,39]. In these applications, it can be of interest to compute the gradient of the tensor nuclear norm for a gradient descent scheme. We will now derive an explicit formula for the gradient of ‖A‖_⋆ in terms of t-functions, which is reminiscent of similar results in the matrix case.
To do so, we first collect some auxiliary results on the trace^{(1)} operator. Clearly, trace^{(1)} is linear, and by direct computation, it is easy to verify that trace^{(1)}(A^H * A) = ‖A‖², where ‖·‖ is the tensor norm defined in (5), and that ⟨A, B⟩ := trace^{(1)}(B^H * A) (43) defines an inner product on C^{n×n×p} (which corresponds to the standard inner product on C^{n²p} for the vectorized tensors). Further, the trace^{(1)} operator inherits the cyclic property of the trace with respect to the t-product. Lemma 7. For A ∈ C^{n×m×p} and B ∈ C^{m×n×p}, it holds that trace^{(1)}(A * B) = trace^{(1)}(B * A).
Proof. By the definition of the t-product, the first face of A * B is (A * B)^{(1)} = A^{(1)} B^{(1)} + Σ_{k=2}^p A^{(k)} B^{(p−k+2)}. (44) Similarly, the first face of B * A is (B * A)^{(1)} = B^{(1)} A^{(1)} + Σ_{k=2}^p B^{(k)} A^{(p−k+2)}. (45) Using the linearity and the cyclic property of the trace, it is clear that the traces of (44) and (45) agree, thus proving the result of the lemma.
Lemma 7 together with Lemma 3 leads to a useful representation for the derivative of trace^{(1)}(f(A)) when f is analytic, involving the derivative of the scalar function f. By a slight abuse of notation, we write the Fréchet derivative (in the sense of the general definition (2)) of trace^{(1)} at a tensor M as L_{trace^{(1)}}(M, ·), although it is clearly not a t-function. Lemma 8. Let A ∈ C^{n×n×p} and let f be analytic on a region containing the spectrum of bcirc(A). Then, for all C ∈ C^{n×n×p}, L_{trace^{(1)}(f(·))}(A, C) = trace^{(1)}(f′(A) * C).
Proof. By the linearity of trace^{(1)}, we directly obtain L_{trace^{(1)}}(M, C) = trace^{(1)}(C). As the chain rule, Lemma 2(iii), also holds more generally for any Fréchet differentiable functions, not necessarily t-functions, we have L_{trace^{(1)}(f(·))}(A, C) = trace^{(1)}(L_f(A, C)). (46) By Lemma 3, we can further rewrite (46) as trace^{(1)}((1/(2πi)) ∮_Γ f(ζ) (ζI − A)^{-1} * C * (ζI − A)^{-1} dζ) = trace^{(1)}(((1/(2πi)) ∮_Γ f(ζ) (ζI − A)^{-2} dζ) * C), (47) where we have used the cyclic property of trace^{(1)} with respect to the t-product from Lemma 7 for the second equality. The integral in (47) is the Cauchy integral representation of f′(A), thus completing the proof.
We are now in a position to state the main result of this section. Note that using the inner product (43), the gradient of the nuclear norm can be characterized by imposing the condition ⟨grad ‖A‖_⋆, C⟩ = L_{‖·‖_⋆}(A, C) (48) for all C ∈ C^{n×n×p}. Theorem 3. Let A ∈ C^{n×n×p} be such that A^T * A is invertible. Then grad ‖A‖_⋆ = A * (A^T * A)^{-1/2}.
Proof. Write ‖A‖_⋆ = trace^{(1)}(g(f(A))) with g(z) = z^{1/2} and f(M) := M^T * M, where f is not a tensor t-function in the usual sense. As before, with slight abuse of notation, we write L_f(M, ·) for its Fréchet derivative. From the definition of the t-product, it is straightforward to verify that L_f(A, C) = C^T * A + A^T * C. (49) Using the chain rule and Lemma 8, we have L_{‖·‖_⋆}(A, C) = trace^{(1)}(g′(f(A)) * L_f(A, C)). (50) As g is the square root, we have g′(f(A)) = (1/2) (A^T * A)^{-1/2}, so that by combining (49) and (50), we find L_{‖·‖_⋆}(A, C) = (1/2) trace^{(1)}((A^T * A)^{-1/2} * (C^T * A + A^T * C)) = trace^{(1)}((A^T * A)^{-1/2} * A^T * C) = ⟨A * (A^T * A)^{-1/2}, C⟩, (51) where we have used the cyclic property of trace^{(1)} for the second equality and the fact that trace^{(1)}(M^T) = trace^{(1)}(M), which directly follows from the definition of tensor t-transposition, together with the linearity of trace^{(1)}, for the third equality. Comparing (51) and (48) shows that grad ‖A‖_⋆ = A * (A^T * A)^{-1/2}, (52) thus concluding the proof.
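Numerically, Theorem 3 is convenient because, face by face in Fourier space, A * (A^T * A)^{-1/2} is simply the polar factor U_i V_i^H of each transformed face. The sketch below (our own helpers; real tensors, all Fourier faces assumed full rank, and the 1/p scaling of the nuclear norm following from trace^{(1)} picking one of the p identical diagonal blocks of the block circulant matrix, a convention we infer from [4]) provides the two ingredients of a gradient descent scheme:

```python
import numpy as np

def t_nuclear_norm(A):
    """||A||_* = trace^{(1)}((A^H * A)^{1/2}) = (1/p) * sum of singular
    values of the Fourier faces (assumed convention, cf. [4])."""
    D = np.fft.fft(A, axis=2)
    s = np.linalg.svd(D.transpose(2, 0, 1), compute_uv=False)
    return s.sum() / A.shape[2]

def t_nuclear_grad(A):
    """grad ||A||_* = A * (A^T * A)^{-1/2}: the face-wise polar factor
    in Fourier space (full-rank faces assumed)."""
    D = np.fft.fft(A, axis=2)
    G = np.empty_like(D)
    for i in range(D.shape[2]):
        U, _, Vh = np.linalg.svd(D[:, :, i], full_matrices=False)
        G[:, :, i] = U @ Vh
    return np.fft.ifft(G, axis=2).real
```

A descent step for a nuclear-norm-penalized objective would then update A ← A − η · t_nuclear_grad(A), with step size η chosen, e.g., by backtracking, as in the script mentioned below.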
To illustrate the theory, the script test t nuclear norm.m in our code suite implements a simple gradient descent scheme with backtracking line search for nuclear norm minimization, based on Theorem 3.
Numerical experiments
In this section, we detail a software framework for studying the performance of the proposed algorithms and present numerical results from several small-to medium-scale experiments.
Implementation details
We have developed our own modular toolbox, t-Frechet, hosted at https://gitlab.com/katlund/t-frechet. The basic syntax is derived from bfomfom and LowSyncBlockArnoldi. We note that in contrast to an existing t-product toolbox, Tensor-tensor-product-toolbox, a tensor A in t-Frechet is encoded as a MATLAB struct with fields mat and dim, which store unfold(A) and A's dimensions as a vector [n m p], respectively. Such tensor structs allow us to work with sparse tensors via built-in MATLAB functions and compute the actions of block circulant matrices without ever explicitly forming the full np × mp matrix. Our toolbox has been tested in MATLAB 2019b, 2022a, and 2023a on Ubuntu and Windows machines. Table 1 summarizes features of the three methods for approximating L_f(A, C) that we have derived throughout the text. Regarding the dft approach, note that equation (35) can be trivially implemented on (dense) third-order arrays in MATLAB, thanks to fft and ifft; see comments in [21] as well as our test script test dft. A number of additional test scripts are included in t-Frechet that we do not discuss here; we have, however, kept them public to encourage further engagement with the community. [Table 1: overview of the bcirc (equation (20)), low-rank, and dft approaches for approximating L_f(A, C).]
Comparing performance of t-Fréchet implementations
We consider a simple example for examining the performance of the proposed solvers by taking f (z) = exp(z) and A ∈ C n×n×p such that each face of A is a finite differences stencil for the spatial components of the two-dimensional convection-diffusion equation with the convection parameter ν drawn p times uniformly from the interval [0, 200]. We restrict both spatial variables to the unit square and take √ n points in each direction, where n ∈ {36, 144, 576}. The direction tensor C is dense and its entries are randomly drawn from the normal distribution.
All scripts are executed in MATLAB R2022a on 16 threads of a single, standard node of the Linux Cluster Mechthild at the Max Planck Institute for Dynamics of Complex Technical Systems in Magdeburg, Germany. We report the total run time to reach a tolerance of 10^{-6}, percentage speed-up, number of times the operator (see Table 1) is called, and the final error for all three approaches. Each approach is run 10 times, and the reported times are an average over these runs. Unless otherwise mentioned, B(FOM)² [10] with the classical inner product and block modified Gram-Schmidt was employed to compute the matrix functions. Note that aside from node-level multithreading, all algorithms are run in serial. The performance is similar for all algorithms for this small problem size, which leads to matrix function problems of size 360 × 360 for bcirc and low-rank, and 36 × 36 for dft. However, both low-rank and dft converge very quickly, in 1 and 2 iterations, respectively, and achieve high accuracy. Recall that both the low-rank and dft approaches rely on multiple operators per iteration. Accuracy for dft is measured as an average across all subproblems; see the corresponding results table. [Figure: error versus iteration index for the three approaches.] With a larger problem size we begin to see clear performance differences among the three methods. Matrix function problems are now 1440 × 1440 for bcirc and low-rank, and 144 × 144 for dft. Both bcirc and low-rank struggle to compete with dft, which is an order of magnitude faster, due to computing with much smaller matrices. Furthermore, dft has no apparent accuracy issues, achieving near machine precision in 2 iterations, while low-rank achieves a similar accuracy in 1 iteration and bcirc just passes the desired tolerance after 14 iterations; see the corresponding results table. As we quadruple the problem size, the situation remains nearly identical to when n = 144. The dft approach remains significantly faster than either bcirc, which still struggles to achieve better accuracy, or low-rank, which despite requiring only 1 iteration is overall as slow as bcirc; see the corresponding results table.
Accuracy and effort of t-condition number solvers
For testing condition number algorithms, we fix the t-Fréchet solver to be an "exact" (non-iterative) method. We then study how different approaches fare with respect to the number of times they invoke a t-Fréchet solver, simply denoted as t frechet. We take f(z) = exp(z) and A a dense n × n × p tensor, whose entries are drawn randomly from the normal distribution. We set a tolerance of 10^{-2} for the power iteration, and we compare it with the "full" Kronecker form approach (Algorithm 1), which we also treat as ground truth, and the "efficient" Kronecker form approach (Algorithm 3). For all the tests in this section, we only look at a single run, as computing the full Kronecker form is time-consuming. For the first example, we consider the case where n > p; results are summarized in Table 6.3.1. The power iteration is clearly the winning method here, with only 8 calls to t frechet necessary to achieve the desired tolerance. While the efficient Kronecker approach does reduce the overall time in comparison to the full Kronecker approach, it is not competitive with the power iteration. We next examine the scenario where n = p; results are found in the corresponding table. We finally consider n ≪ p; see Table 6.3.3 for the results. The power iteration remains overwhelmingly faster than the efficient Kronecker approach, and still achieves the desired tolerance. A clear drawback of the analysis in this section is that, in practice, one will not be able to compute Fréchet derivatives with high accuracy. However, in most applications that require a condition number, accuracy is unimportant; in that case, it is sufficient to replace the inner t frechet solves of the power iteration with, for example, the dft approach from Corollary 1.
When accuracy is important, however, the efficient Kronecker approach may be a viable competitor to the power iteration. In all examples, we see that the time per t frechet evaluation is roughly the same per method. Because all the t frechet problems are known a priori and they are far fewer than in the full Kronecker approach, the efficient Kronecker procedure is trivially parallelizable, unlike the power iteration, which is necessarily serial. In the case with many faces (i.e., n < p), where relatively few t frechet calls overall are necessary, a simple parallelization could easily give the efficient Kronecker approach an edge.
Conclusions
Thanks to the block circulant structure imposed by the t-product formalism, we have been able to take advantage of a rich mathematical framework not only in the definition of the Fréchet derivative of the tensor t-function but also in the development of efficient and accurate algorithms for its numerical approximation. We have proven a number of useful properties of the t-Fréchet derivative, including a Daleckiȋ-Kreȋn-type result. An expression for the gradient of the nuclear norm has also been derived and its utility demonstrated in a gradient descent scheme for nuclear norm minimization. We have affirmed the indispensability of the discrete Fourier transform (DFT) in accelerating the computation of the t-Fréchet derivative itself, as the DFT decouples the problem into p smaller problems that each converge in few iterations. We have further shown the utility of the t-Fréchet derivative in t-function condition number estimation. A tailored power iteration algorithm has proven efficient for reliably computing the condition number at a high tolerance. We have also demonstrated that the full Kronecker form of the t-Fréchet derivative can be computed in p times less work than a direct approach thanks to symmetries evoked by the block circulant structure. Finally, we have developed and made public a modular t-product toolbox that will prove foundational in exploring further, more challenging applications. | 2023-02-21T02:15:59.206Z | 2023-02-19T00:00:00.000 | {
"year": 2023,
"sha1": "83919eb911db0aba79ecc07568303c0228d122c9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "83919eb911db0aba79ecc07568303c0228d122c9",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
252707467 | pes2o/s2orc | v3-fos-license | Structural modeling of Sama Bajo fishers social resilience in a marine national park
This article describes and compares three models of the relationship between the social capital linking Sama Bajo boat-dwellers and Bagai land-dwellers and the social resilience of the Sama Bajo in three local social contexts of land-dwellers in Wakatobi National Park (WNP). The research was conducted from May 2018 until June 2019 in Mantigola Sama Bajo on Kaledupa Island, Lamanggau Sama Bajo on Tomia Island, and Mola Sama Bajo on Wangi-wangi Island. Information was collected from 240 respondents who were selected by a spatial sampling technique. Using Structural Equation Modeling (SEM) analysis, we found that the structural model is effective for evaluating social resilience, particularly for the Mantigola and Lamanggau Sama Bajo, who interact with homogeneous land-dwellers, namely the Kaledupa and Tomia land-dwellers; it also serves as a stepping stone for strengthening their social resilience capacity by taking into account the social relations, livelihoods, and human and financial capital of the land-dwellers in the marine preserve area. Despite this success, a key constraint arises from inadequacies when the structural model is applied to the urban local social environment of the Sama Bajo, as reported by the Mola Sama Bajo, who have established bridging capital with heterogeneous land-dwellers. Future research should take this limitation into account by identifying the various land-dwellers who develop social ties with the boat-dwellers. Similar research should also be undertaken to validate the model in Sama Bajo populations that live in open-access areas. This is crucial to determine whether other characteristics of Sama Bajo social resilience appear in the social settings of different kinds of marine preserve areas.
Introduction
For the last decade, social resilience in island communities has been intensively studied due to the high level of vulnerability of small islands. Much of the early work centers on the study of this concept. The resilience perspective first appeared in ecology in the 1960s and early 1970s through investigations of interacting populations, such as predators and prey, and their functional responses in relation to ecological stability theory (Holling 1961; Lewontin 1969; Rosenzweig 1971; May 1972, cited by Folke 2006). Janssen et al. (2006) point out that the idea of resilience was presented by Holling (1973) in the area of ecology. According to this founding work, resilience determines the persistence of relationships within a system and is a measure of the ability of the system to absorb changes in driving variables, state variables, and parameters and still persist. From the beginning, the concept of resilience was used in the study of managing ecosystems and in population ecology. Glaser et al. (2015) argue that islanders face increased challenges in achieving integrated social-ecological management for a sustainable future of island systems. Furthermore, questions on women's roles on the islands, social vulnerability and resilience, and the marine resource-related attitudes and future visions of islanders in this quickly changing environment have emerged as significant further themes.
As shown above, the concept of social resilience is widely used in ecology; nevertheless, its definition and measurement are contested (Adger 2000). Adger (2000) argued that it is essential to learn from this debate and to explore social resilience, both as an analogy of how societies work, drawing on the ecological concept, and through observing the direct relationship between the two phenomena of socio-ecological resilience. Aldrich (2012), cited by Pfefferbaum et al. (2017), elucidates that social capital, more than socio-economic conditions, population density, degree of damage, or amount of aid, is the "core engine of recovery" post-disaster. Interestingly, survivors with connections to strong social networks have access to necessary information and recover faster than those without. Thereby, communities lacking strong social networks are likely to experience loss (Aldrich 2012, cited by Pfefferbaum et al. 2017).
More recently, researchers have started to point to social capital as perhaps the most significant driver of recovery and resilience (Kerr 2018). Resilient communities can strategically utilize their social relations to obtain access to capital beyond the community (Bakker et al. 2019). For fisher communities, this means using different forms of social capital to gain power in marine spatial planning negotiations, and to exercise influence in favor of community goals (Grafton 2005;Bakker et al. 2019). Later, social capital, in the context of community resilience, refers to the interconnectedness of community members and their willingness and ability to support too many activities that advance the community's goals (Pfefferbaum et al. 2017). It is interesting to note that Rosado et al. (2022) discovered that a variety of economic opportunities, social networks, and strong cultural community-level governance contributed to the resilience of the fishing community, with the villagers changing their social, economic, and religious behaviors to slow the spread of COVID-19. These research results represent an increased recognition that more attention needs to be paid to social capital related to the social resilience capacity, particularly for the small-scale fisheries communities.
This research focuses on the Sama Bajo, a famous seafaring ethnic group in Eastern Indonesia. A great deal of scholarly attention surrounds the Sama Bajo maritime communities, who mostly live on the coasts and small islands of Eastern Indonesia. From Stacey et al. (2018), we find that recent estimates put the total Sama Bajo population at approximately 1.1 million: nearly 347,000 in Malaysia (Sabah), 564,000 in the Philippines, and 200,000 living in areas of high biodiversity in the islands of eastern Indonesia. In Southeast Sulawesi in particular, the vast majority of the Sama Bajo inhabit the coastal islets of Wakatobi and Tiworo. In Wakatobi, they are essential actors who utilize the littoral areas of the Kapota, Kaledupa, and Tomia reefs as gleaners and demersal fishers for groupers and other reef fish, alongside the Lia and Tomia land-dweller fishers. They are also tuna fishers who fish in the Banda Sea. In Tiworo, the Sama Bajo are the main producers of blue swimming crab. Historically, the Wakatobi (Stacey 2007), the Maginti Tiworo, and the Bahari Sampolawa Sama Bajo boat-dwellers were traditional shark fishers in the Timor Sea.
Despite their significant role in supplying protein for the land-dwellers and the global market, these seafaring groups have been negatively labeled by the land-dwellers as rule breakers. For example, in Sama Bajo Saponda, land people regard them as a center of blast fishing. In the Tiworo strait, the Sama Bajo have been considered dynamite fishers, mini bottom trawlers, and illegal miners of coastal sand. This stigmatization has led to their marginalization and left them vulnerable to being trapped in chronic poverty. Notably, recent work (McWilliam et al. 2021) generally supports this picture of the increasingly difficult situation of the Sama Bajo fishers. McWilliam et al. (2021) argued that the Sama Bajo way of life is in danger because of competition from overfishing and increased hazards from marine pollution (especially poorly regulated purse seine fisheries and illegal trawl fishing). Additionally, they must contend with the strong seasonality of Indonesia's tropical monsoonal climate, which has an adverse seasonal impact on their ability to catch fish. In the southern seas, the southeast monsoon, which blows out of Australia from July to September, is accompanied by strong winds and high seas. Fishing is severely restricted by the weather for all households with older boats (non-motorized, smaller [6-7 m], single-engine craft with rudimentary equipment).
The pivotal role of the Sama Bajo as one of the main groups utilizing the coastal zone and the coral triangle has been attracting social researchers from all over the world. The literature on the Sama Bajo ranges from sociological studies of social transformation to ethnographic studies of their tenurial history (Stacey 2007; Chou 1997; Chia 2019), women and gender studies (Pauwelussen 2015), Sama Bajau local knowledge (Awang-Kanak et al. 2018; Yakin 2013), language (Donohue 1996), social transformations (Hoogervorst 2012; Wianti et al. 2012), Sama Bajo livelihood institutions and food systems (Gibson et al. 2018; McWilliam et al. 2021), and Sama Bajo identity and securitization (Acciaoli 2006; Stacey et al. 2018; Madlan 2014; Acciaioli et al. 2017).
Although extensive research has been carried out on social resilience in such communities, for example Pauwelussen (2016) among the Sama Bajo of Berau, no single study has addressed Sama Bajo social resilience through modeling analysis. This confirms a lack of quantitative research on the identities of Sama Bajo boat-dwellers and Bagai land-dwellers, and on the bridging social capital between the two groups, who live side by side, in coping with risk in different and unique socio-ecological landscapes. Brown et al. (2003) and Stedman (2003), cited by Khakzad and Griffith (2016), argued that place attachments are connections to particular social and physical backdrops that provide various types of psychological and social benefits, as well as the range of human activities and social processes that are carried out there. Therefore, to address this research lacuna, in this paper we assess the variety of social fabrics connecting land-dwellers and Sama Bajo boat-dwellers in WNP that shape marginal Sama Bajo fishers' social resilience, using Partial Least Squares (PLS) Structural Equation Modeling (SEM) analysis.
Methods
In this investigation, data were gathered through structured interviews with 240 respondents at various sites over two years, from 2018 to 2019. In the first year, we collected data on the objective well-being of both the boat-dwellers and the land-dwellers on the three islands; we also conducted observations to address the subjective well-being of the Sama Bajo. In the second year, we carried out research examining the social capital of Sama-Bagai relations at all field sites.
Respondents were chosen using a spatial sampling technique. For each field site, we surveyed 40 respondents representing the particular Bagai land-dwellers and 40 representing the Sama Bajo. Besides geographical indicators, respondents were selected from both land-dwellers and boat-dwellers. Care was taken to ensure that, in both communities living side by side, we selected respondents who had carried out social interactions with each other at least within the month in which data collection took place. Every research site was represented by 80 respondents; for example, on Kaledupa Island, 40 respondents represent the Bagai Horuo land-dwellers and the rest correspond to the Sama Bajo Mantigola. In sum, the total number of respondents was 240 heads of household at three research field sites in Wakatobi National Park (WNP): (1) Wangi-wangi Island; (2) Kaledupa Island; and (3) Tomia Island. These islands were selected because they host Sama Bajo villages: (1) the Sama Bajo Lamanggau village on Tomia Island; (2) Sama Bajo Mantigola on Kaledupa Island; and (3) Sama Bajo Mola on Wangi-wangi Island (Fig. 1).
We selected these field sites in consideration of the existence of mutual bridging social relations between the land-dwellers and the Sama Bajo. We did not select Binongko Island because it has no Sama Bajo boat-dweller village; in other words, the Binongko people are not intimately engaged with the Sama Bajo in their daily life.
We designed a model for assessing the role of Sama-Bagai social relations in Sama Bajo social resilience in WNP (Fig. 2).
The research framework (Fig. 2) was then translated into a structural model describing the social resilience of the Sama Bajo communities in relation to their social ties with the various land-dwellers (Fig. 3 and Table 1).
As seen in Figs. 2 and 3, the endogenous latent variable is Sama Bajo social resilience (Z). The Z factor is computed from two indicators: (1) the subjective well-being of the Sama Bajo (X3); and (2) the objective well-being of the boat-dwellers (X2). The boat-dwellers' subjective well-being is built from Sama-Bagai social identity and migration sub-indicators. These second-order factors were adapted from four main social value attributes, namely resource use, spatial mobility, autonomy and identity, and kinship and relational ties. Additionally, the boat-dwellers' well-being is manifest across different spatial scales, such as the local or household level, subregional membership of language groups, the regional level, and transboundary ties across national states. Meanwhile, the exogenous latent variables are: (1) the objective well-being of the land-dwellers (X1); (2) the social relation of Sama Bajo boat-dwellers to Bagai land-dwellers (Y1); and (3) the social relation of Bagai land-dwellers to Sama Bajo boat-dwellers (Y2). The variables Y1 and Y2 were also inspired by Stacey et al. (2018) as well as our previous work in WNP, whereas the second-order factors of Bagai land-dwellers objective well-being (X1) were adapted, with selection and modification, from Khomsan et al. (2015). In short, all of the latent variables and their factors are presented in Table 2. This social resilience model was created by taking into account the reciprocal social relations between the Sama Bajo and the land-dwellers who live side by side in an archipelagic area. Our comparative study proceeds from the premise that the local socio-economic context, expressed through the social capital between Bagai land-dwellers and Sama Bajo boat-dwellers on each island, will differ and will play a critical role in the social resilience of the Sama Bajo as a marginal community in WNP. To test this assumption, we formulated five hypotheses. The first (H1) states that there is a significant and positive correlation between Bagai land-dwellers objective well-being (X1) and Sama Bajo social resilience (Z) (X1 -> Z). The second (H2) affirms a significant and positive correlation between Sama Bajo social relation to Bagai land-dwellers (Y1) and Sama Bajo social resilience (Z) (Y1 -> Z). The third (H3) presumes a linear and significant relationship between Bagai land-dwellers social relation to Sama Bajo (Y2) and Sama Bajo social resilience (Z) (Y2 -> Z). The fourth (H4) surmises a positive and significant relationship between Sama Bajo social relation to Bagai land-dwellers (Y1) and Bagai land-dwellers social relation to Sama Bajo (Y2) (Y1 -> Y2). Lastly, the fifth (H5) expects a significant and positive correlation between Bagai land-dwellers objective well-being (X1) and Sama Bajo social relation to Bagai land-dwellers (Y1) (X1 -> Y1) (see Table 2).
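To make the structural part of the model concrete, the sketch below illustrates, on synthetic data, how the hypothesized paths H1-H5 can be estimated once latent variable scores are available: in PLS-SEM, the inner-model coefficients are obtained by ordinary least squares regressions of each endogenous score on its predictors. All data and effect sizes here are invented for illustration and are not the study's estimates.

```python
# Minimal sketch of PLS-SEM inner-model estimation, assuming latent scores
# (X1, Y1, Y2, Z) have already been computed from the measurement model.
import numpy as np

def ols_paths(y, X):
    """Return OLS path coefficients of y on the columns of X (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

rng = np.random.default_rng(0)
n = 80                                           # respondents at one site
X1 = rng.standard_normal(n)                      # Bagai objective well-being
Y1 = 0.5 * X1 + rng.standard_normal(n)           # Sama -> Bagai relation (H5)
Y2 = 0.7 * Y1 + rng.standard_normal(n)           # Bagai -> Sama relation (H4)
Z = 0.4 * Y1 + 0.3 * Y2 + rng.standard_normal(n) # social resilience (H1-H3)

print("X1 -> Y1 (H5):", ols_paths(Y1, np.column_stack([X1])))
print("Y1 -> Y2 (H4):", ols_paths(Y2, np.column_stack([Y1])))
print("paths -> Z   :", ols_paths(Z, np.column_stack([X1, Y1, Y2])))
```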
Verification of the Mola, Mantigola, and Lamanggau Sama Bajo social resilience measurement models
Several authors have noted, regarding SEM model validity, that a researcher using SEM-PLS must first assess the measurement models by examining the average variance extracted (AVE), the loading factors, and the composite reliability for every construct; these analyses establish the acceptability of the measurement model (Rela et al. 2020). Furthermore, the composite reliability criterion must be applied to verify internal consistency (Oliveira et al. 2020). In this section, particularly in Tables 3 and 4, we provide full details of the loading factor for every item construct, the AVE and composite reliability, and the discriminant validity for every construct of the research models. AVE is used to measure convergent validity and must exceed the standard minimum level of 0.5 (Fornell and Larcker 1981, cited by Navimipour et al. 2018); in other words, a model has good convergent validity if each latent variable with reflective indicators has an AVE above 0.5. It is apparent from Table 3 that the AVE of each latent variable in all social resilience models exceeds 0.5, so the PLS research models meet the requirement of good convergent validity.
The next measurement is the reliability test of the models, which establishes the consistency and accuracy of the instrument in measuring the constructs. Reliability is assessed through composite reliability: a latent variable with a value greater than 0.7 is considered reliable. Rahman et al. (2013), drawing on Hulland (1999), note that discriminant validity designates the extent to which a given construct differs from other constructs; it is verified through the average variance extracted, applying the criterion that a construct should share more variance with its own measures than with other constructs in the model (Rahman et al. 2013). The details of the reliability measurements are reported in Table 4. The results show that all of the latent constructs have good, consistent, and accurate reliability, as the composite reliability value for every latent construct exceeds 0.7.
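The convergent validity and reliability checks described above reduce to two standard formulas (Fornell and Larcker 1981): AVE is the mean of the squared standardized loadings, and composite reliability is (Σλ)² / ((Σλ)² + Σ(1 − λ²)). The sketch below applies the 0.5 and 0.7 cut-offs to hypothetical loadings; the loading values are illustrative, not those reported in Tables 3 and 4.

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings)
    return np.mean(lam ** 2)

def composite_reliability(loadings):
    """CR = (sum lam)^2 / ((sum lam)^2 + sum(1 - lam^2))."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1 - lam ** 2))

# Hypothetical loadings per construct (not the paper's values).
loadings = {"X1": [0.72, 0.81, 0.68], "Y1": [0.85, 0.79], "Z": [0.90, 0.76, 0.83]}
for construct, lam in loadings.items():
    a, cr = ave(lam), composite_reliability(lam)
    ok = "pass" if a > 0.5 and cr > 0.7 else "fail"
    # Fornell-Larcker check: sqrt(AVE) should exceed the construct's
    # correlations with every other construct in the model.
    print(f"{construct}: AVE={a:.3f}, CR={cr:.3f} ({ok}), sqrt(AVE)={np.sqrt(a):.3f}")
```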
Evaluation of Mola, Mantigola and Lamanggau social resilience structural models
In this study, we postulate that each group of land-dwellers' local social context, through their bridging social capital and economic sphere of life, has a significant impact on the Sama Bajo community's social resilience on the small islands of WNP, in different ways. This section outlines the results for each site.
Mola Sama Bajo structural model evaluation
For the Mola Sama Bajo social resilience model, Fig. 3 and Table 4 show that X1 has a direct and significant relationship to Y1 (t-score = 3.583 > t-table 1.96). From Fig. 4, it is apparent that X1 correlates directly but negatively with Y1 (β = −0.421). This means that the higher X1 is, as measured by high values of X1.6, X1.8, X1.12, and X1.17, the weaker, interestingly, the bridging social relation of Mola Sama Bajo to the Bagai Mandati land-dwellers (Y1) becomes. Besides this, the correlation of Y1 to Y2 (β = 0.797) means that a greater bridging relation of Mola Sama Bajo to the Bagai Mandati land-dwellers (Y1) improves the bridging relation of Mandati Bagai land-dwellers to Mola Sama Bajo (Y2). Y1 likewise relates to Z (β = 0.645); in a similar vein, a significant increase in Y1 improves Mola Sama Bajo social resilience (Z).
PLS-SEM does not have a standard goodness-of-fit statistic, and efforts to establish one have proven highly problematic (Sarstedt et al. 2014). The quality of the model is therefore assessed on its ability to predict the endogenous constructs. While the structural model of Y1 has an R-square of only 25.1%, the structural models of Y2 and Z have R-squares of 62.9% and 65%, respectively. The Y2 R-square indicates that 62.9% of the variety in Y2 can be explained by the Y2 structural model, while the remaining 37.1% is attributable to factors outside the model. Equally, the other endogenous factor, Z, has an R-square of 65%, with the remaining 35% explained by factors outside the model (Table 5).
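As a brief illustration of how these R-square figures are read, the snippet below computes the explained-variance share for an endogenous construct from its observed and fitted scores; the data are synthetic.

```python
import numpy as np

def r_squared(y, y_hat):
    """Share of variance in an endogenous construct explained by the model."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(3)
y1 = rng.standard_normal(80)
z = 0.9 * y1 + 0.6 * rng.standard_normal(80)  # toy endogenous construct
z_hat = 0.9 * y1                              # fitted values from the structural model
print(f"R^2 = {r_squared(z, z_hat):.3f}")     # e.g., 0.65 means 65% explained
```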
Mantigola Sama Bajo structural model evaluation
Strong evidence of a relationship between Sama-Bagai social capital and Sama Bajo social resilience was found in the Mantigola Sama Bajo community, which relates to the Bagai Horuo land-dwellers of Kaledupa. Overall, based on the analysis in Fig. 5 and Table 6, all of the hypotheses are significant with positive correlations. The bootstrapping results show that X1 has a direct and significant relation to Y1, with a t-test value (9.22) > t-table (1.96), or p < 0.05, at α = 5%. Likewise, Y1 and X1 have significant associations with Y2, with t-scores (38.33 and 5.44, respectively) > t-table (1.96) at α = 5%. The t-score of Y1 to Y2 is the highest among Mola, Mantigola, and Lamanggau. Y1 and Y2 also correlate strongly with Z (t-scores 6.87 and 11.72, respectively) > t-table (1.96).
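The bootstrapped t-scores reported here follow the usual PLS-SEM resampling logic: re-estimate the path coefficient on resampled respondents and divide the original estimate by the bootstrap standard error. A minimal sketch on synthetic data, using a simple bivariate coefficient as a stand-in for the full model path, is shown below.

```python
import numpy as np

def path_coef(y, x):
    """Bivariate standardized path (correlation), as a simple stand-in."""
    return np.corrcoef(x, y)[0, 1]

def bootstrap_t(y, x, n_boot=5000, seed=1):
    """Bootstrap the path coefficient and return (estimate, t = est / boot SE)."""
    rng = np.random.default_rng(seed)
    est = path_coef(y, x)
    n = len(y)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample respondents with replacement
        boots.append(path_coef(y[idx], x[idx]))
    se = np.std(boots, ddof=1)
    return est, est / se                     # significant at 5% if |t| > 1.96

rng = np.random.default_rng(2)
x = rng.standard_normal(80)
y = 0.6 * x + rng.standard_normal(80)
est, t = bootstrap_t(y, x)
print(f"path = {est:.3f}, t = {t:.2f}, significant = {abs(t) > 1.96}")
```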
Another important finding was that Mantigola Sama Bajo social resilience (Z) has a strong R-square of nearly 69.1%, indicating that the model explains almost 69.1% of the variety in Z, with the remaining 30.9% explained by factors outside the model. In addition, the structural model of Y2 produced an R-square of 52.2%. In contrast, the model of Y1 yielded an R-square of only 15.2%, with 84.8% of the diversity in Y1 explained by factors beyond the structural model.
Lamanggau Sama Bajo structural model evaluation
In this section, we explain the findings from Lamanggau on Tomia Island, which further reinforce the previous results that the land-dwellers, through bridging relations, are essential factors in shaping the bonding social capital and resilience of the marginal Sama Bajo in WNP. The results (Table 7 and Fig. 6) show that strong correlations existed from constructs X1, Y1, and Y2 to Z (t-scores 5.150, 4.529, and 2.264 > t-table 1.96, respectively). Moreover, a strong and direct correlation was observed between X1 and Y1, with a t-score (10.164) > t-table (1.96) at α = 5%. The highest correlation was between Y1 and Y2 (t-score 11.161 > t-table 1.96).
Turning now to the statistical evidence on the Lamanggau R-squares (Table 6), the structural model of Y1 produced an R-square of approximately 36.8%, while the R-square of Y2 was nearly 41.9%. Much as in Mantigola, the structural model of Z produced an R-square of nearly 65.7%, meaning the model could account for 65.7% of the diversity in the Sama Bajo social resilience (Z) construct; the remaining 34.3% of the data diversity is explained by factors outside the model.
Discussion
Several landmark studies have observed that community social capital dimensions are an important force in community-level resilience capacities (Kerr 2018). In our research, after rigorous SEM examination of the Sama Bajo communities in WNP, we found that social resilience is shaped by several latent variables. In general, the objective well-being of the boat-dwellers (X3) is most strongly shaped by the Sama Bajo asset score for fishing-catch technology (X3-6), the respondent's formal education (X3-9), the informal education level of the respondent's family members (X3-11), and the respondent's household debt (X3-12). One of the most important findings of this paper relates to X3-12. Many authors have described how the boat-dwellers have become entangled in complicated indebtedness. Through Punggawa-Sawi patron-client relationships, the Sama fishers treat their patrons as a crucial source of assistance or, when necessary, informal social protection, with a significant impact on dependent households' ability to access and maintain the abundant benefits they can derive from the sea (McWilliam et al. 2021). Nevertheless, while the crew (Sawi) labor directly for their patrons (Punggawa), semi-independent fishers are somewhat better off in terms of income, autonomy, and sense of well-being than the Sawi (McWilliam et al. 2021; Stacey et al. 2018). Indebtedness is also found in Sama Bajo livelihoods tied to aquaculture, for instance in the social relations of seaweed cultivation (Aslan et al. 2022). These results are interesting and help to show how to measure, and improve, the social resilience capacity of boat-dweller communities in a marine preserve area. Beyond the objective welfare dimension, Sama Bajo social resilience capacities are also formed by the communities' subjective well-being, namely their identity as part of the Sea Nomads tribe (X2-1) and their migration frequency (X2-2). Both of these variables are core cultural values of the boat-dweller communities. The influential work of Stacey et al. (2018) gave rise to renewed interest in the identity of the boat-dwellers, a strongly subjective dimension of their autonomy. Following that work, we postulate that maintaining an identity as a Sama community is a source of the community's social resilience; conversely, we assume that either an excessively strong Sama identity or abandoning the Sama identity to become Bagai can reduce the resilience capacity of the boat-dwellers. In light of this, human and financial assets, together with cultural dimensions, create the social resilience capacity of the boat-dweller community in WNP.
Beyond the aforementioned, a closer look at the latent variable of Sama Bajo social relation to the land-dwellers (Y1) across all three islands, in relation to Sama social resilience (Z), shows that this relation plays a substantial role. In Mola (Fig. 4), although no direct significant association is found between the objective economic well-being of the land-dwellers (X1) and the social resilience dimension of the boat-dwellers (Z), Y1 has been a catalyst through which X1 indirectly supports the social resilience of the Sama Bajo community (Z). In the structural model analysis, X1 correlates with Y1, albeit negatively (t-score = 3.583; β = −0.421). This is because it is mostly the lower class of Mandati Bagai land-dwellers, staple-food farmers and wholesalers in the central Mola market, who have intimate relationships with the Mola Sama Bajo, rather than the upper class of Mandati land-dwellers. In the other two cases, by contrast, X1 showed a positive significant association with Y1. Take the Lamanggau Sama Bajo, who live together with Tomia land-dwellers who are high-skill demersal fishers as well as the main producers of staple food for the boat-dwellers on the island: there, the t-score of X1 to Y1 indicates a strong association with a positive path coefficient of 0.560. These results agree with our expectation that the land-dwellers play a significant role, through the socio-economic relations between the Sama and the Bagai, in creating the social resilience of the boat-dweller communities. Ultimately, the relations among X1, Y1, and Z confirm what was recently suggested by Richmond and Casali (2022): several scholars have found that strong social capital, in many different forms, is essential for a community to achieve sustainable livelihoods, well-being, and economic growth.
Subsequently, only on Tomia Island does the land-dwellers' occupation as fishers have a positive relationship with the Lamanggau Sama Bajo community. The technology of Sama Bajo fishing activity (X1-9), as a second-order construct of X1 (objective well-being), has a positive and strong correlation (0.909) with the economics of the Bagai Tomia land-dwellers (X1). This relation is a sign that an identical livelihood can facilitate mutual interaction. This similarity has fostered the Sama Bajo's social relation to the land-dwellers (Y1) and their social resilience (Z), encouraging an equal position, collective action between them, and improved sharing of knowledge for sustainable fishing practices like those of the Tomia fishers. This finding resembles that of Bakker et al. (2019) in the Orkney Islands: what ties the fishermen together is their shared passion for the sea and a shared understanding of the adversity of their occupation, the challenges they cope with daily, and the working mentality necessary to thrive in a fishing livelihood. Through this shared understanding, fishers achieve a sense of belonging to the community. Furthermore, Bakker et al. (2019) state that this sense of belonging is reinforced by marking the boundaries of the fisher community: those who do not support the same norms and values, or who are unable to cope with the severe working conditions at sea, are outsiders and will not become a genuine part of the community.
In contrast to Lamanggau, the Mola loading factor values show a negative correlation between the objective well-being of the Mandati land-dwellers (X1) and the Mola Sama Bajo's social relation to the land-dwellers (Y1), and likewise for Mola Sama Bajo social resilience (Z). Another intriguing finding is that in the Mola structural model (Fig. 3 and Table 4), which represents an urban Sama Bajo local social context, the results do not demonstrate a direct correlation between the social relation of the Bagai community to the Sama (Y2) and the endogenous latent variable Z (t-score = 0.457; β = 0.067). This means that in the Mola case, the bridging capital created by the Mandati land-dwellers does not contribute substantially to strengthening Mola Sama Bajo social resilience. In the Mantigola and Lamanggau structural models, by comparison, Y2 is significantly associated with Z (t-score = 11.72, β = 0.506; and t-score = 2.264, β = 0.165, respectively).
These findings demonstrate that structural models can be useful tools for assessing the social resilience of the Sama Bajo and the social relations between boat-dwellers and homogeneous land-dweller communities, as exemplified by the Kaledupa and Tomia land-dwellers. Despite successfully demonstrating the complex relationships among the exogenous and endogenous latent variables, a significant limitation of the structural models arises when they describe the urban local social context of the Sama Bajo, as revealed in Mola, where the Sama Bajo build their bridging capital with heterogeneous land-dwellers. In particular, we focused only on the Mandati land-dwellers, who live close to the Mola Sama Bajo and form the majority of land-dwellers on Wangi-wangi Island, and we overlooked other land-dweller communities such as the Lia and Waha. This contributed to our failure to confirm the role of the land-dwellers in the causal pathways from their objective well-being (X1), and from their social cohesion with the boat-dwellers (Y2), to strengthened boat-dweller social resilience (Z). This limitation must therefore be considered in future studies. It will also be important for future research to compare these structural models of Sama Bajo social resilience in a marine protected area with those for Sama Bajo who live in open-resource areas.
Despite this limitation, the findings of this study are important because the structural models attempt to quantify the Sama Bajo subjective well-being dimensions (X2-3) constructed by Stacey et al. (2018) together with Sama Bajo objective well-being (X2-2). Importantly, this research is a stepping stone toward a more profound understanding of the role of land-dweller capital, not only their objective well-being but also their social capital toward the boat-dwellers, in forming Sama Bajo social resilience. Moreover, this research identifies an innovative pathway for the Sama Bajo to improve their social and livelihood resilience on small islands by drawing on pre-existing bridging social capital between land-dwellers and the Sama Bajo and by supporting collaborative management in a marine national park.
Conclusion
In summary, this study contributes by revealing the critical significance of interdependency analysis for measuring the social resilience of Sama Bajo communities. Using the three structural models of Sama Bajo social resilience, built on a community social capital perspective across different levels of relationship grounded in the Bagai land-dwellers' distinct social contexts (X1, Y1, Y2), we have shown that the structural model works for examining social resilience (Z), specifically for the Mantigola and Lamanggau Sama Bajo, who interact with homogeneous land-dwellers (the Kaledupa and Tomia land-dwellers, respectively). The models also serve as a stepping stone for strengthening resilience capacity by taking into account the social relations, livelihoods, and human and financial capital of the land-dwellers in the marine preserve area. The main limitation, however, is the model's inaccuracy in describing the urban local social context of the Sama Bajo, as indicated in Mola, where bridging capital is built with heterogeneous land-dwellers. Future research should address this limitation, particularly for urban Sama Bajo, by identifying the varied land-dwellers who build social relations with urban boat-dwellers. Equally important, similar work should validate the models in Sama Bajo communities living in open-access areas, to examine whether other aspects of Sama Bajo social resilience emerge in a social context opposite to that of a marine preserve.
"year": 2022,
"sha1": "0197723eda0ede64ef1637bd32733e46fec03004",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40808-022-01526-z.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "a5c6d7e56e54c6afd978c756b5b443768e72f6e5",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
Sensory preconditioning has been used to implicate midbrain dopamine in model-based learning, contradicting the view that dopamine transients reflect model-free value. However, it has been suggested that model-free value might accrue directly to the preconditioned cue through mediated learning. Here, building on previous work (Sadacca et al., 2016), we address this question by testing whether a preconditioned cue will support conditioned reinforcement in rats. We found that while both directly conditioned and second-order conditioned cues supported robust conditioned reinforcement, a preconditioned cue did not. These data show that the preconditioned cue in our procedure does not directly accrue model-free value and further suggest that the cue may not necessarily access value even indirectly in a model-based manner. If so, then phasic response of dopamine neurons to cues in this setting cannot be described as signaling errors in predicting value.
Introduction
Behaviour is often divided into two broad categories. One, termed goal-directed or model-based, utilizes an associative map of the task at hand, which can be navigated to anticipate likely outcomes and their desirability. Maps acquired separately can be linked and the value of outcomes updated on-the-fly to allow flexible responding. The other, contrasting category of behaviour, termed model-free or habitual, reflects simpler associations linking cues to the responses that have been reinforced in their presence. Behaviours in both categories are typically described as reflecting value; however, in the former category, the value is inferred and reflects value stored downstream, whereas in the latter, the value is directly attached or 'cached' in the antecedent cue.
Our lab has recently used sensory preconditioning to identify neural systems critical for model-based behaviour (Jones et al., 2012; Wied et al., 2013; Sadacca et al., 2016; Sharpe et al., 2017). These data include the demonstration that midbrain dopamine neurons exhibit error-like activity to preconditioned cues. Our use of this task is based on the belief that the design is particularly effective in isolating model-based behaviour from behaviour reflecting model-free value. In sensory preconditioning, two neutral cues are paired together in close succession such that a relationship can form between them (e.g., A→B). While there are no observable changes to behaviour during this phase, the existence of this association can be revealed if cue B is paired with reward, which causes subjects to start responding to A as if they expect reward to be delivered. Indeed, responding to cue A is sensitive to the current desire for the food reward at the time of the probe test (Blundell et al., 2003). From data such as these, it is thought that subjects respond to the preconditioned cue either because A evokes a representation of B and B leads to thoughts of reward during the test phase, or because B evokes a representation of A during conditioning that allows A to become directly associated with reward (Jones et al., 2012; Wimmer and Shohamy, 2012; Gershman, 2017). Thus, sensory preconditioning seems to be an iconic example of a model-based behaviour.
However, while it is clear that sensory preconditioning utilizes model-based associations, this procedure may also permit the preconditioned cue to directly accrue value. Specifically, if presentation of cue B were to evoke a representation of cue A during conditioning, then the value of the food might become directly associated with A (Wimmer and Shohamy, 2012;Doll and Daw, 2016). Importantly, this question is not resolved by the effect of food devaluation on responding to the preconditioned cue, since the cue could maintain any such model-free value subsequently, independent of the new value of the food as the association between cue A and the devalued food has not been directly experienced. If this were occurring in our procedure, it would introduce difficulties in its use to strictly isolate model-based neural processing. For example, the ability of a preconditioned cue to evoke phasic activity in a dopamine neuron could be easily explained by existing proposals that dopaminergic transients reflect errors in predicting model-free value (Schultz et al., 1997).
Here we directly addressed this question by assessing the ability of a preconditioned cue trained in our task to support conditioned reinforcement. For comparison, we also assessed conditioned reinforcement supported by cues trained to predict reward directly or through second-order conditioning. Conditioned reinforcement -or the ability of a cue to support acquisition of an instrumental response in the absence of any reward -is generally conceptualised as a test of cue value. Notably, subjects will work for a cue predicting food even if the food reward has been devalued (Parkinson et al., 2005), indicating that model-free value is normally sufficient to support conditioned reinforcement. Accordingly, we found that both directly conditioned and second-order cues would support conditioned reinforcement. However, a preconditioned cue would not. These data show that, at least for our procedure in rats, the preconditioned cue does not acquire model-free value during training. Further they suggest that the cue also does not automatically or by default access value cached in events downstream in a model-based manner, such as through the other cue or the sensory properties of the reward.
Preconditioned cues do not support conditioned reinforcement

Preconditioning
Rats were first presented with the neutral cues (A→B; C→D) in close succession 12 times each to promote the development of a relationship between them. As expected, since training did not involve presentation of reward, the rats spent little time in the magazine during this phase, and there were no differences between cues (Figure 1A). ANOVA revealed no main effect of cue (F(3,63) = 2.12, p > 0.05).
Conditioning
Following preconditioning, rats underwent conditioning for 4 days. Each day, rats received 12 presentations of cue B followed by the delivery of two sucrose pellets (B→2US) and 12 presentations of cue D without reward (D→no US). As training progressed, all rats acquired a conditioned response to cue B, as indexed by a greater time spent in the magazine during presentation of this cue (Figure 1B). A two-factor ANOVA (cue × day) showed main effects of cue (F(1,21) = 87.47, p < 0.05) and day (F(3,63) = 4.45, p < 0.05) and an interaction between these factors (F(3,63) = 21.42, p < 0.05).
Conditioned reinforcement tests
Following Pavlovian training, we next gave rats two conditioned reinforcement sessions. In the first test, pressing one lever led to a 2 s presentation of cue A (R1→A), and pressing the other lever led to a 2 s presentation of cue C (R2→C). Here, we found that rats made a small number of lever presses on each lever and did not show any difference in the number of lever presses made for presentation of either cue (Figure 1C; left).
To ensure that we could obtain conditioned reinforcement in this cohort of rats, we gave rats another conditioned reinforcement test. In this test, one lever press led to a 2 s presentation of cue B (R1→B) and the other lever press led to a 2 s presentation of cue D (R2→D). In contrast to the first conditioned reinforcement test, during this session rats showed a higher rate of lever pressing on the lever that produced the reward-paired cue B and a low level of lever pressing for the non-rewarded cue D (Figure 1C; right).
The difference in the pattern of results seen across the first and second sessions of the conditioned reinforcement tests was confirmed with statistical analyses. A two-factor ANOVA [cue type (preconditioned vs. conditioned) × reinforcement (rewarded vs. non-rewarded)] showed no effects of cue type (AC vs. BD; F(1,21) = 0.82, p > 0.05) or reinforcement (AB vs. CD; F(1,21) = 1.44, p > 0.05); however, there was a significant interaction between these factors (F(1,21) = 10.92, p < 0.05). Simple-main-effects analyses showed that the source of this interaction was a significant elevation in lever pressing for B that was not observed for the other cues (vs. A: F(1,21) = 7.64, p < 0.05; vs. D: F(1,21) = 7.38, p < 0.05; C vs. D: F(1,21) = 3.08, p > 0.05; A vs. C: F < 1). Thus, preconditioned cues did not support conditioned reinforcement in the same rats that readily showed conditioned reinforcement for the cue directly paired with reward.
Pavlovian probe tests
It is plausible that the reason we failed to see effective conditioned reinforcement with the preconditioned cue A was that rats failed to learn the relationship between A and B. In this case, they would be failing to press the lever because they were failing to generate the normal expectation, after conditioning, that A might lead to reward. In order to test this hypothesis, we next gave rats two Pavlovian probe tests to assess learning. In the first session, we gave rats unrewarded presentations of A and C; in the second session, we gave rats unrewarded presentations of B and D. We found that rats made more entries into the food port during presentation of either cue A or B, demonstrating effective conditioning and sensory preconditioning (Figure 1D). A two-factor ANOVA [cue type (preconditioned vs. conditioned) × reinforcement (rewarded vs. non-rewarded)] revealed a main effect of reinforcement (AB vs. CD; F(1,21) = 15.11, p < 0.05). There was also a main effect of cue type (AC vs. BD; F(1,21) = 9.39, p < 0.05), likely reflecting that the A vs. C extinction tests were given prior to the B vs. D tests, since the A vs. C test is the critical comparison. Importantly, however, there was no interaction with cue type (F < 1). Thus, rats spent a greater amount of time in the food port during presentation of cues A and B relative to cues C and D, and there was no difference in the magnitude of this difference. To fully rule out any possibility that the lack of conditioned reinforcement observed for the preconditioned cue A was due to a failure of sensory preconditioning, we also separately tested the difference between A and C. This analysis revealed a significant difference between responding to A and C (F(1,21) = 5.35, p < 0.05).
Second-order conditioned cues do support conditioned reinforcement
Our first experiment showed that a preconditioned cue is insufficient for conditioned reinforcement, whereas a cue directly paired with a valuable reward was sufficient. To confirm that this effect was not simply the result of the introduction of an additional cue between the preconditioned cue and the reward, we conducted a second experiment in which we tested the ability of a second-order conditioned cue to support conditioned reinforcement. Importantly, the second-order cue is trained exactly like the preconditioned cue except that the pairing of the neutral cues (A→B; C→D) occurs after rather than before training with reward (B→2US; D→no US).
Conditioning
Conditioning lasted for 4 days. Each day, the rats received 12 presentations of cue B followed by delivery of two sucrose pellets and 12 unrewarded presentations of cue D. As training progressed, all rats acquired a conditioned response to cue B (Figure 2A). A two-factor ANOVA (cue × day) revealed a main effect of cue (F(1,14) = 37.13, p < 0.05), a main effect of day (F(1,14) = 6.32, p < 0.05), and a significant interaction between these factors (F(1,14) = 8.47, p < 0.05).

Second-order conditioning

Following conditioning, rats were presented with the neutral cues (A→B; C→D) in close succession 12 times each to promote the development of a relationship between them. Rats spent more time in the magazine during cues A and B relative to cues C and D (Figure 2B). This was confirmed with statistical analyses. A two-factor ANOVA [cue type (second-order conditioned vs. conditioned) × reinforcement (rewarded vs. non-rewarded)] revealed a main effect of reinforcement (AB vs. CD; F(1,14) = 17.13, p < 0.05), but no interaction (F(1,14) = 2.19, p > 0.05) nor a main effect of cue type (AC vs. BD; F(1,14) = 4.04, p > 0.05). Thus, rats spent a greater amount of time in the food port during presentation of cues A and B relative to cues C and D, and there was no difference in the magnitude of this difference.
Conditioned reinforcement tests
Following second-order conditioning, we again gave rats two conditioned reinforcement tests. In the first, rats could press either lever for a 2 s presentation of cue A or C (R1→A; R2→C). In the second, rats could press these levers for a 2 s presentation of cue B or D (R1→B; R2→D). In both tests, we found that rats pressed the lever more for the cue paired either directly or indirectly with reward (i.e., A and B relative to C and D; Figure 2D). A two-factor ANOVA [cue type (second-order conditioned vs. conditioned) × reinforcement (rewarded vs. non-rewarded)] showed a significant main effect of reinforcement (AB vs. CD; F(1,14) = 5.07, p < 0.05), but no main effect of nor any interaction with cue type (AC vs. BD; F < 1). Thus A and B both supported conditioned reinforcement and did so to a similar degree.
Pavlovian probe tests
Following the conditioned reinforcement tests, we gave rats two probe tests to assess the ability of cues A and B to promote entry into the food port. In the first, we gave rats unrewarded presentations of cues A and C; in the second, unrewarded presentations of cues B and D. Rats spent a larger proportion of time in the magazine during presentation of cues A and B relative to cues C and D, confirming the second-order conditioning effect. A two-factor ANOVA [cue type (second-order conditioned vs. conditioned) × reinforcement (rewarded vs. non-rewarded)] revealed a main effect of reinforcement (AB vs. CD; F(1,14) = 14.07, p < 0.05) but no main effect of nor any interaction with cue type (AC vs. BD; F < 1). Thus, rats spent a greater amount of time in the food port during presentation of cues A and B relative to cues C and D, and there was no difference in the magnitude of this difference.
Discussion
Here we have shown that preconditioned cues do not support conditioned reinforcement. Rats showed no evidence of increased lever pressing for the cue trained to predict a cue that was later paired with reward. This was true despite strong responding at the food cup for the preconditioned cue in a subsequent probe test and robust conditioned reinforcement for the cue paired directly with food in the same rats. Further, in a second experiment, we also showed that a second-order cue supports conditioned reinforcement. Critically, our second-order conditioning procedures were identical to those used for sensory preconditioning, except for the order of training in second-order conditioning, which allowed the initial cue in the series to be paired with something of value at the time of conditioning.
In interpreting these data, it is important to emphasize that conditioned reinforcement is normally insensitive to devaluation of the food reward (Parkinson et al., 2005;Burke et al., 2007;Burke et al., 2008). In other words, if the food reward is devalued by pairing it with illness prior to conditioned reinforcement training, a cue that was previously paired with that reward will still support acquisition of lever pressing. Thus, value cached in the cue is normally sufficient to support the behaviour. Given this, our failure to detect any evidence of conditioned reinforcement for a preconditioned cue is strong evidence that a preconditioned cue does not accrue model-free value in this task.
This result has important implications for recent work using this task to investigate the neural circuits involved in model-based learning and behaviour (Sadacca et al., 2016;Sharpe et al., 2017). For example, we have recently shown that dopamine neurons exhibit phasic responses to both directly-and pre-conditioned cues (Sadacca et al., 2016). We interpreted this result as showing that model-based information is reflected in dopaminergic error-signals, based on the presumption that the behaviour directed at the preconditioned cue is due to inference or model-based processing. This conclusion would be contrary to current proposals that these signals only reflect model-free value (Sutton and Barto, 1981;Schultz et al., 1997;Schultz, 1998;Waelti et al., 2001;Schultz, 2002;Cohen et al., 2012). However, it was proposed that the firing of the dopamine neurons to the preconditioned cue could reflect value that accrues to the cue via mediated learning in the conditioning phase or some other form of post-training rehearsal (Doll and Daw, 2016). The current results are inconsistent with this alternative interpretation. In particular, while our data do not rule out mediated learning as an underlying mechanism, they suggest that if responding to the preconditioned cue in our task is supported by mediated learning, as has been suggested in other designs and species (Wimmer and Shohamy, 2012), then that process does not cause the preconditioned cue to accrue model-free value.
Our data also raise questions as to whether preconditioned cues access, at least automatically or by default, any sort of stored value. As noted earlier, one way to think about responding to the preconditioned cue is as reflecting an inferred or model-based value. This is a value stored in downstream events and accessed through the associative model of the task acquired during prior training (Jones et al., 2012;Wimmer and Shohamy, 2012;Gershman, 2017). That is, in the probe test, the preconditioned cue evokes a representation of the sensory properties of the food reward, either directly or indirectly, and thereby activates the current value of the food. This view is consistent with the effects of devaluation, which normally eliminates responding to the food cup upon presentation of the preconditioned cue (Blundell et al., 2003). Yet if the preconditioned cue accesses the value stored in the food in this model-based manner, then one might have expected this cue to support conditioned reinforcement. This would make intuitive sense and is consistent with evidence that model-based value can support conditioned reinforcement (Burke et al., 2007;Burke et al., 2008). The failure of the preconditioned cue to support conditioned reinforcement suggests that it does not have automatic access to the value stored in the food, perhaps because it is never directly paired with anything that has value at the time. While speculative, this conclusion would have profound implications for interpreting the firing of dopamine neurons in this setting and perhaps in other tasks, where they exhibit phasic responses that are not obviously value based (Horvitz, 2000;Tobler et al., 2003;Bromberg-Martin and Hikosaka, 2009;Sadacca et al., 2016;Takahashi et al., 2017). These transient responses may signal the sensory, state, or informational error inherent in these designs, rather than anything related to a representation of value, model-based or otherwise.
Materials and methods

Subjects
Thirty-seven experimentally naïve male Long-Evans rats (NIDA breeding program) were used in these experiments. Rats were maintained on a 12 hr light-dark cycle, and all behavioural experiments were conducted during the light cycle. Prior to behavioural testing, rats were placed on food restriction and maintained at ~85% of their free-feeding body weight. All experimental procedures were conducted in accordance with the Institutional Animal Care and Use Committee guidelines of the US National Institutes of Health.
Apparatus, cues, and general procedures
Training was conducted in eight standard behavioural chambers (Coulbourn Instruments; Allentown, PA) individually housed in light-and sound-attenuating chambers. Each chamber was equipped with a pellet dispenser that delivered one 45 mg pellet into a recessed magazine when activated. Access to, and duration spent in, the magazine was detected by means of infrared detectors mounted across the mouth of the recess. The chambers contained an auditory stimulus generator, which delivered the tone and siren stimulus through a common speaker on the top right-hand side of the front chamber wall when activated. A second speaker on the back wall of the chamber, connected to another auditory stimulus generator, delivered the white noise stimulus. Finally, a heavy-duty relay delivering a 5 kHz clicker stimulus was located on the top left-hand side of the front chamber wall. During conditioned reinforcement tests, two levers were placed in the behavioural chamber, on the left or right side of the front wall, and the magazine and pellet dispenser were removed. A computer equipped with Coulbourn Instruments software (Allentown, PA) controlled the equipment and recorded the responses. Cues A and C were either a white noise or clicker, and cues B and D were either a tone or siren (counterbalanced across rats). During Pavlovian training, stimuli were 10 s in length, and the order of trials was randomly intermixed and counterbalanced, with inter-trial intervals (ITI) averaging 6 min. During conditioned reinforcement testing, lever pressing produced 2 s of the relevant cue. Prior to training, all rats were shaped to enter the magazine to retrieve reward (two 45 mg sucrose pellets; 5TUT, Test Diet, MO), receiving 30 pellets in the magazine across a one hour period. Subsequently, rats received 2 sessions of training each day, one in the morning and one in the afternoon.
Sensory preconditioning
Rats began with 2 sessions of compound cue training. In each session, rats received 6 presentations of the serial compounds A→B and C→D, where cues A or C were immediately followed by presentation of cue B or D. Subsequently, rats underwent conditioning in which cue B was followed by presentation of sucrose pellets while D was presented without reward. Rats received a total of 8 conditioning sessions, each consisting of six reinforced presentations of B and six non-reinforced presentations of D.
Second-order conditioning
Rats began with 8 sessions of conditioning. In each session, rats received six reinforced presentations of B and six non-reinforced presentations of D. Subsequently, rats underwent 2 sessions of compound cue training, consisting of 6 presentations of the serial compounds A→B and C→D, where cues A or C were immediately followed by presentation of cue B or D.
Conditioned reinforcement and Pavlovian probe tests
Following Pavlovian training, rats received two conditioned reinforcement tests each lasting 30 min. For these tests, levers were inserted in the chamber and the food magazine was removed (Burke et al., 2008). In the first test session, pressing one lever resulted in immediate 2 s presentation of cue A, while pressing the other lever resulted in a 2 s presentation of cue C (counterbalanced). In the second, the lever presses resulted in an immediate 2 s presentation of either cue B or D. To ensure that all animals learnt the associations promoted by sensory preconditioning, we also conducted two probe tests following conditioned reinforcement. In these tests, the levers were removed, and the food magazine was put back into the chamber. In the first probe test, rats received 6 presentation of cue A and C and magazine entries were measured. In the second, rats received six presentations each of cue B and D. No reward was presented during either the conditioned reinforcement or probe tests.
Statistical analyses
Conditioned responding was measured as the fraction of time that the rats spent in the food magazine during cue presentation. This was restricted to the last five seconds when cues led to reward or a reward-paired cue, reflecting the normal escalation of responding towards the end of the cue, when the reward is more likely to be delivered (i.e., inhibition of delay). Analyses of data from the final Pavlovian probe tests were conducted on the first two trials of each cue in the test session. Conditioned reinforcement was measured as the sum of the lever presses made across the full 30 min of each test session. All statistics were conducted using the SPSS 24 IBM statistics package (Sharpe and Killcross, 2014). Generally, analyses were conducted using a mixed-design repeated-measures analysis of variance (ANOVA). All analyses of simple main effects were planned and orthogonal and therefore did not necessitate controlling for multiple comparisons.
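As a hedged sketch of the analysis pipeline described above, the snippet below builds a toy version of the response measure and runs a one-way repeated-measures ANOVA with statsmodels standing in for SPSS; the column names and values are invented for illustration and the toy design uses only the within-subject cue factor.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Toy data: fraction of the last 5 s of each cue spent in the magazine,
# one value per rat per cue (B rewarded vs. D non-rewarded).
data = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "cue":     ["B", "D"] * 4,
    "resp":    [0.62, 0.10, 0.55, 0.08, 0.71, 0.15, 0.48, 0.12],
})

# Repeated-measures ANOVA with cue as the within-subject factor.
res = AnovaRM(data, depvar="resp", subject="subject", within=["cue"]).fit()
print(res)
```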
"year": 2017,
"sha1": "d788920c7ea08b2977ddbf642f0b0071945257ac",
"oa_license": "CC0",
"oa_url": "https://doi.org/10.7554/elife.28362",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d788920c7ea08b2977ddbf642f0b0071945257ac",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Real-World Effectiveness of Mix-and-Match Vaccine Regimens against SARS-CoV-2 Delta Variant in Thailand: A Nationwide Test-Negative Matched Case-Control Study
The objective of this study was to explore the real-world effectiveness of various vaccine regimens deployed against the epidemic of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) Delta variant in Thailand during September-December 2021. We applied a test-negative case-control design using nationwide records of people tested for SARS-CoV-2. Each case was matched with two controls with respect to age, detection date, and specimen collection site. Conditional logistic regression was performed, and results are presented as vaccine effectiveness (VE) with 95% confidence intervals. A total of 1,460,458 observations were analyzed. Overall, the two-dose heterologous prime-boost schedules, ChAdOx1 + BNT162b2 and CoronaVac + BNT162b2, offered the largest protection (79.9% (74.0-84.5%) and 74.7% (62.8-82.8%)), which remained stable over the whole study course. The three-dose schedules (CoronaVac + CoronaVac + ChAdOx1 and CoronaVac + CoronaVac + BNT162b2) expressed a very high degree of protection (above 80.0% at any time interval). Concerning severe infection, almost all regimens displayed very high VE estimates. Among the two-dose schedules, heterologous prime-boost regimens appeared to protect slightly better against severe infection than homologous regimens. Campaigns to expedite the rollout of third-dose booster shots should be carried out, and heterologous prime-boost regimens should be considered as an option to enhance protection for the entire population.
Introduction
Coronavirus disease 2019 (COVID-19) is now recognized as one of the most serious health threats in human history. The causative pathogen of COVID-19 is severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1,2]. In the latter half of 2021, the world was severely hit by the SARS-CoV-2 Delta variant; in June 2021, the World Health Organization (WHO) reported that the Delta variant had become the dominant strain globally [3,4]. At the time of writing, the global case toll has exceeded 416 million, with approximately 5.8 million cumulative deaths [5].
Thailand is among the numerous countries that suffered severely from the Delta wave. The first COVID-19 wave in Thailand was caused by the original SARS-CoV-2 strain during March-May 2020, followed by a second, Alpha-variant wave originating from a cluster of cases in the inner city of a province adjacent to Bangkok [6,7]. The third wave, in April 2021, was still attributable to the Alpha variant. The country was then hardest hit by the fourth wave, caused by the Delta variant, in the second half of 2021 [7]. During that time, the Thai government implemented a lockdown policy to reduce the case and death tolls. Apart from aggressive social measures, vaccine rollout was deemed the ultimate weapon to halt the pandemic. The journey of the national COVID-19 immunization plan in Thailand commenced in February 2021, when the government imported CoronaVac from China to alleviate the rise of cases during the second wave. The government's initial plan was to use the domestically produced viral vector vaccine, ChAdOx1, as the dominant vaccine for the Thai population. However, the advent of the Delta wave prompted a huge demand for vaccines, far outstripping the pace of domestic production, and there was significant global concern that viral vector or inactivated virus vaccines might be less effective against the Delta variant than mRNA vaccines [8,9]. Another compelling reason for the government to adjust the national vaccine plan was evidence suggesting that the immunity of the population immunized with CoronaVac in early 2021 declined rapidly within the first few months [10].
To this end, the government adjusted the national immunization plan, purchasing large quantities of various vaccine types, including the mRNA vaccine BNT162b2, alongside an acceleration of domestic vaccine production. In addition, massive campaigns for a booster (third) dose and proposals for mix-and-match vaccine schedules attracted remarkable academic and political attention [11]. By late 2021, the National Vaccine Committee (NVC) had approved mix-and-match schedules, starting with CoronaVac as the first dose followed by ChAdOx1 as the second dose (one month apart), since a domestic study suggested immunogenicity comparable to the standard two-dose ChAdOx1 regimen, which requires a three-month interval between doses [12]. The government has since endorsed many more mix-and-match regimens in the current national COVID-19 immunization plan.
Although various vaccine regimens have been applied in the field, little is known about their real-world effectiveness at the nationwide scale, and epidemiological research on heterologous vaccine regimens remains sparse. Therefore, the objective of this study is to determine the effectiveness of various vaccine regimens during the Delta wave in Thailand (September-December 2021) using real-world immunization data covering the whole Thai population.
Study Design
We applied a test-negative matched case-control design.
Data Sources
The data retrieval process consisted of three steps. First, we explored the Co-Lab database, the national laboratory recording system of the Department of Medical Sciences (DMSc), Ministry of Public Health (MOPH). Both public and private health facilities report records of people undertaking polymerase chain reaction (PCR) testing for SARS-CoV-2 to Co-Lab. By late 2021, the national guideline for COVID-19 diagnosis allowed rapid antigen diagnostic tests (Ag-RDT) to replace PCR where PCR capacity was limited; therefore, about 10-20% of the cases stored in the Co-Lab system were identified by Ag-RDT instead of PCR. During that period, the official count of Ag-RDT-positive cases was limited to professional-use tests only; thus, persons with a positive Ag-RDT self-test were not included in this study.
A case was defined as a Thai national with positive SARS-CoV-2 by PCR test between 1 September 2021 and 31 December 2021 while a control was defined as a Thai national showing negative PCR test for SARS-CoV-2 or professional use Ag-RDT in the same period. We selected all cases and identified two controls per case. The matching of triples (a case and two controls) was performed with respect to age (allowing a three-year margin), laboratory detection date (allowing a seven-day margin), and provincial residence of testing sites (exact match). Apart from the test result, the Co-Lab database also provided reasons (such as for contact tracing due to being high-risk contact of a COVID-19 case, for active case finding, and for other reasons). Then, to retrieve information about illness severity of each case, we joined the Co-Lab database with the other two databases, namely, Co-Ward database and the COVID-19 Death database. The Co-Ward database, managed by the Department of Medical Services (DMS), is the national surveillance system to monitor clinical severity and hospital bed capacity. The COVID-19 Death database, governed by the Department of Disease Control (DDC), is the national monitoring system for all COVID-19 related deaths. We obtained the immunization history (vaccination date, number of vaccines, and type of vaccines) of each individual from the MOPH Immunization Centre (MOPH-IC).
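A simplified sketch of the 1:2 matching described above is given below; the DataFrame column names (age, test_date, province) are assumed for illustration, and the greedy first-available selection is one possible implementation rather than the authors' exact algorithm.

```python
import pandas as pd

def match_controls(cases: pd.DataFrame, controls: pd.DataFrame) -> list:
    """Greedily match each case to two unused controls on age (+/- 3 years),
    specimen collection date (+/- 7 days), and province (exact)."""
    used = set()
    triples = []
    for _, c in cases.iterrows():
        pool = controls[
            (controls["age"].sub(c["age"]).abs() <= 3)
            & (controls["test_date"].sub(c["test_date"]).abs()
               <= pd.Timedelta(days=7))
            & (controls["province"] == c["province"])
            & (~controls.index.isin(used))
        ]
        picked = pool.head(2)
        if len(picked) == 2:               # keep only complete triples
            used.update(picked.index)
            triples.append((c.name, *picked.index))
    return triples
```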
We combined cases and controls appearing in the aforementioned databases (by using an encrypted national identification number, an official identity for all Thai nationals, as a primary key to link same individuals across databases). Moreover, we excluded cases (and their matched controls) whose laboratory collection occurred within fourteen days after the last vaccination date to avoid ambiguity of the vaccine status since global evidence suggests that immune response needs about fourteen days after the last shot to have adequate protective effect against the virus [13]. Finally, about 1.5 million records were included in the analysis for VE estimate. A summary of the data retrieval process is given in Figure 1.
For more details, we began by selecting two controls per case from 1 July 2021 onward. Then, we dropped records from July-August 2021 to avoid the influence of the Alpha variant, the dominant strain nationwide during the first half of 2021. Because the matching process allowed a seven-day margin, dropping the July-August 2021 records made the case-to-control ratio approximately, rather than exactly, 1:2.
Data Analysis
We started with an overview of the data using descriptive statistics. Then, we applied conditional logistic regression to estimate the odds of infection in the vaccinees (for all brands combined and for specific regimens) relative to the odds in the unvaccinated group. The findings appeared in the form of odds ratios (ORs) and 95% confidence intervals (CIs). For communication convenience, we present the results in the form of vaccine effectiveness (VE) and 95% CI, where VE equals one minus the OR.
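As a worked illustration of this transformation, the sketch below converts a fitted log-odds-ratio coefficient and its standard error into a VE point estimate with a 95% CI. The numeric inputs are made up, and the regression fitting itself (a conditional logit on the matched sets) is assumed to have been done elsewhere.

```python
import math

def ve_from_logit(beta: float, se: float, z: float = 1.96):
    """Turn a log(OR) coefficient into VE = 1 - OR with a 95% CI.

    Because VE = 1 - OR, the lower VE bound comes from the
    upper OR bound, and vice versa.
    """
    ve = 1.0 - math.exp(beta)
    ve_lo = 1.0 - math.exp(beta + z * se)   # from upper OR bound
    ve_hi = 1.0 - math.exp(beta - z * se)   # from lower OR bound
    return ve, ve_lo, ve_hi

# Toy input: beta = -0.61, se = 0.01  ->  VE ~ 45.7% (44.6-46.7%)
print(ve_from_logit(-0.61, 0.01))
```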
All VE estimates were determined in two strands: against any infection and against severe infection. For the VE estimate against any infection, controls were defined as non-infectee samples. For the VE estimate against severe infection, cases were defined as samples experiencing severe infection or death, while controls comprised a combination of non-infectees and non-severe infectees. Note that a severe case in this study was defined as a person experiencing hypoxemic pneumonia, an intubated patient, or a death.
We also assessed whether the main analysis remained robust when the observations were restricted to high-risk samples. High-risk samples were defined as people undertaking a SARS-CoV-2 test for contact tracing or active case finding.
We later examined whether the VE waned over time by dividing the analysis into four periods according to time since the last vaccination date: 15-29 days, 30-59 days, 60-89 days, and 90 days onward. Subsequently, we examined the VE for each vaccine regimen over time, with special attention on the two-dose and three-dose schedules, which were widely distributed at that time. For the two-dose vaccinees, we focused on the following regimens: BNT162b2 + BNT162b2, ChAdOx1 + ChAdOx1, CoronaVac + CoronaVac, ChAdOx1 + BNT162b2, CoronaVac + ChAdOx1, and CoronaVac + BNT162b2. For the three-dose schedule, we focused on CoronaVac + CoronaVac + ChAdOx1 and CoronaVac + CoronaVac + BNT162b2.
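A small sketch of this time-since-vaccination binning is shown below; the toy DataFrame and the column name are assumptions (records under 15 days were already excluded upstream, as described above).

```python
import pandas as pd

df = pd.DataFrame({"days_since_last_dose": [16, 45, 70, 120]})  # toy data
bins = [14, 29, 59, 89, float("inf")]            # (14,29], (29,59], ...
labels = ["15-29 d", "30-59 d", "60-89 d", ">=90 d"]
df["interval"] = pd.cut(df["days_since_last_dose"], bins=bins, labels=labels)
print(df)
```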
Results
We obtained a total of 1,698,588 records (558,865 cases and 1,139,723 controls). People aged between 18 and 59 years constituted the majority of the study participants. About 1.8% of the cases developed severe symptoms, and half of the severe cases (50.2%) were older adults (>60 years). Forty-three percent of the participants were classified as high-risk samples. Almost a fifth of the participants undertook testing in Bangkok and five adjacent provinces (Nakhon Pathom, Pathum Thani, Nonthaburi, Samut Prakan, and Samut Sakhon), so-called Greater Bangkok (Table 1). We later analyzed these samples by conditional logistic regression. For these samples, about a third of the participants were unvaccinated. Participants receiving two doses constituted the greatest share of overall vaccinees. Approximately two-thirds (66.1%) of the severe cases were unvaccinated. Only 8.8% of the participants had received the third shot. The percentage of three-dose vaccinees was more pronounced in the control group (12.0%) (Table 2: Vaccination status of cases and controls included in the conditional logistic regression).
The volume of participants declined over time, from more than 380,000 controls and about 170,000 cases in September to approximately 137,000 controls and 76,000 cases in December. The percentage of two-dose vaccinees grew substantially, from 20.5% to 61.9%, throughout the study period. By December, the proportion of three-dose vaccinees was 13.1%, far larger than the proportion in September (6.2%). More details are presented in Supplementary Figures S1 and S2. Figure 2 demonstrates that three-dose vaccination, regardless of the regimen, provided a high level of VE against both any infection (90.3% (90.0-90.5%)) and severe infection (98.3% (97.6-98.8%)). The VE estimate against any infection for a single shot was 9.7% (8.6-10.8%). Two-dose vaccination exhibited a moderate degree of protection against any infection (45.9% (45.4-46.5%)), but the protection effect against severe infection was still high (85.4% (84.0-86.6%)). The restricted sample analysis on high-risk samples followed the same pattern as the full sample analysis. In general, the restricted samples demonstrated slightly higher VE estimates than the full samples.
The analysis tallied by time since the last shot found that the VE estimate for three-dose vaccination, for all regimens combined, remained high (over 80%) with a negligible decline over time for both any and severe infections. The most distinctive protection benefit was found in severe infection protection among three-dose vaccinees (98-99% over time). In contrast, a remarkable drop of the VE estimate for two-dose vaccination was observed. The most obvious waning of the VE estimate was found in the any-infection analysis, where the estimate dropped from 54.1% (53.2-55.1%) before day 30 to 40.3% (38.8-41.8%) after day 90. A decline of the VE estimate against severe infection was also observable, but this was less evident compared with the estimate against any infection (Figure 3).
The breakdown analysis found that all regimens exhibited varying degrees of VE over time. Overall, the heterologous prime-boost regimens ChAdOx1 + BNT162b2 and CoronaVac + BNT162b2 manifested the largest protection levels (79.9% (74.0-84.5%) and 74.7% (62.8-82.8%)) within 30 days, and relatively stable VE until day 90 and onward. BNT162b2 + BNT162b2 showed a protection level of 74.2% (71.8-76.3%) within 30 days but declined to 57.0% (43.6-67.2%) on day 90 and onward. ChAdOx1 + ChAdOx1 and CoronaVac + ChAdOx1 provided initially moderate protection levels that declined relatively quickly over time, whereas CoronaVac + CoronaVac provided moderate protection after day 30 and onward. The three-dose schedules (CoronaVac + CoronaVac + ChAdOx1 and CoronaVac + CoronaVac + BNT162b2) expressed a very high degree of VE (above 80.0% in any time interval), Table 3. Note that the time lag between laboratory collection date and last vaccination date for the vaccinees whose laboratory collection date occurred at least 90 days after the last vaccination date did not vary much by vaccine regimen, as detailed in Supplementary Table S1. Concerning severe infection, in general, almost all regimens displayed very high VE estimates (highest at 99.1% and lowest at 80.3%). For the two-dose regimens, heterologous prime-boost seemed to have slightly better protection against severe infection compared with the homologous regimens. The three-dose vaccinees benefited from the vaccine by about 97.2-99.1% effectiveness against severe infection. The largest protection effect was observed in CoronaVac + CoronaVac + ChAdOx1 between day 30 and day 59. The waning of the VE estimate was minimal in all regimens, compared with the analysis on any infection. Of note, in certain time intervals, there were fewer than 10 severe cases amongst the vaccinees. As a result, an accurate VE estimate could not be determined (hence we specified the VE estimate in that period as "not applicable"). For instance, there was only one severe case out of 435 vaccinees (0.2%) of CoronaVac + BNT162b2 in the earliest time interval (15-29 days), Table 4.
Discussion
This study is probably one of the very first studies of COVID-19 vaccine effectiveness using real-world service data in southeast Asia, and perhaps in Asia. Overall, full (two-dose) vaccination, regardless of vaccine brand, contributed a moderate degree of protection against SARS-CoV-2 Delta variant infection of approximately 50%, while the three-dose regimens provided about 90% effectiveness. The two-dose effectiveness in our findings was considerably lower than the finding from a recent meta-analysis by Zheng et al., which suggested an 89.1% estimate among fully vaccinated individuals [14]. Such a difference was likely due to dissimilarity in the analysis periods of the two studies. Zheng et al. gathered literature published during August 2020-October 2021, the period before the Delta variant prevailed across the globe. In contrast, our study focused on the latter half of 2021, when the Delta variant constituted the dominant share of all SARS-CoV-2 variants globally [15].
Although the effectiveness against any infection of the two-dose regimen seemed mediocre, the value of severe infection protection was still obvious (85.4% (84.0-86.7%)). This justifies the merit of massive and speedy vaccine rollout in the population. Evidence from many nations also confirms this. For example, Haas et al. reported that the rapid mass roll-out of the Pfizer-BioNTech vaccine in early 2021 helped reduce thousands of deaths and hospitalizations; and, combined with strict non-pharmaceutical interventions, the massive vaccine rollout contributed to the rebound of the Israeli economy by 5.5% in 2021 [16]. Suthar et al. found that, in the United States (US), during the first half of 2021, when the Alpha variant of SARS-CoV-2 took the lion's share, the COVID-19 mortality rate diminished by 81% in counties with high vaccine coverage compared with counties that had very low coverage. The impact on mortality followed the same pattern during the second half of 2021, when the Delta variant became the dominant strain, despite smaller effects on case incidence [17]. By late 2021, the Thai government set mass vaccine rollout as a national agenda to address the pandemic. Yet, several challenges remain, as massive immunization is not just a matter of individual propensity to accept the vaccine but also involves many system-level issues, such as affordability, allocation, deployment, and production capacity [18].
This study affirmed the benefit of a COVID-19 vaccine booster shot, though it did not confer perfect protection against breakthrough infection. The VE estimate against any infection of the three-dose regimen varied between about 90% and 95%, with little waning over time. In contrast, for two-dose vaccination, the effectiveness against any infection markedly fell as time passed. The decline of immunity was also observed in many studies abroad. Goldberg et al. indicated that, in Israeli residents, the immunity against the Delta variant of SARS-CoV-2 waned in all age groups a few months after receipt of the second dose of BNT162b2 [19]. For the three-shot individuals, the effectiveness saw a minimal decline, and the overall effectiveness throughout the six-month period was about 92% against any infection and 99% against severe infection and death. This discovery coincides with many studies from Europe and the US, which corroborated the value of the booster shot [20][21][22]. Thompson et al. indicated a 94% effectiveness against hospitalization fourteen days after the third shot of BNT162b2 [21]. A study in Israel by Barda et al. pointed to a 93% effectiveness against hospitalization for individuals receiving the third dose of BNT162b2 [20]. A study in the United Kingdom (UK) by Andrews et al. suggested that the relative effectiveness against symptomatic infection about a month after a BNT162b2 or mRNA-1273 (Moderna) booster shot, with ChAdOx1-S or BNT162b2 as the primary course, varied between 85% and 95% [22]. Domestic evidence by Kanokudom et al. and Yorsaeng et al. revealed higher neutralizing activity against all variants of concern of SARS-CoV-2 amongst recipients of a third dose of ChAdOx1 (after two-dose CoronaVac) than among those completing two-dose CoronaVac or ChAdOx1 alone [23,24]. It is worth noting that, though the merit of the booster shot is apparent, further research is still needed to explore the proper timing of the booster shot. It is possible that people will be advised to receive annual COVID-19 vaccination as long as SARS-CoV-2 continues to circulate within the global population. A notable discovery in this study is that heterologous prime-boost regimens (especially ChAdOx1 or CoronaVac followed by BNT162b2) provided a favorable protection benefit that was relatively stable over time compared with homologous regimens, including BNT162b2 + BNT162b2. There is increasing international interest in heterologous prime-boost COVID-19 regimens to mitigate supply shocks or shortages of vaccines that might otherwise reduce the speed of vaccine rollout [25,26]. Recent studies, although few in number at the time of writing, pointed in the same direction: mix-and-match regimens, if exercised appropriately, can serve as another powerful tool to combat the pandemic. A prospective cohort immunogenicity study in Thailand found that receptor-binding domain (RBD)-specific antibody responses against wild-type and variants of concern of SARS-CoV-2 were higher with the heterologous CoronaVac + ChAdOx1 and homologous ChAdOx1 + ChAdOx1 regimens in comparison with the CoronaVac + CoronaVac regimen [27]. Another study from China revealed that a heterologous prime-boost strategy significantly enhanced neutralizing antibody titers and improved T-helper 1 (TH1) responses [28].
A large nationwide cohort study in Sweden estimated that using ChAdOx1 as the first dose, followed by either BNT162b2 or mRNA-1273 as the second dose, resulted in 67% and 79% effectiveness against symptomatic COVID-19 infection, respectively [29]. Recent evidence also suggested that the prime-boost schedules showed mild adverse events and favorable safety, comparable to their homologous counterparts [27,30,31].
The mixing and matching of vaccines from different platforms had long been practiced before the advent of SARS-CoV-2. Although a number of possible mechanisms for the higher immunity conferred by heterologous vaccine schedules have been proposed, the underlying mechanism has not been clearly described. It is possible that, by using dissimilar vaccine formulations, different arms of the immune system are evoked; as a result, the combination of cellular and humoral immunity engenders higher and more prolonged immunity [32,33]. Future research that unravels and compares the immunological mechanisms of homologous and heterologous prime-boost regimens would be of considerable academic value.
Regarding the methodology, this study has both strengths and limitations. The use of routine nationwide service data is one of the key strengths, since the findings can truly reflect real-world vaccine effectiveness against the backdrop of day-to-day health system performance. Yet, some limitations remain. First, the study relied on secondary data from different sources, each of which had its own data collection protocol. During the data merging process, some information that was not collected across the board was dropped, such as occupation profiles, underlying diseases, and the risk history of each individual. Therefore, residual confounding may still exist. To address this concern, the matching by age, province of residence, and time of specimen collection helped minimize these confounding effects. Moreover, the findings from the restricted samples (high-risk participants) were quite similar to the main analysis, which implies that individual infection risk was soundly controlled by the matching process. Second, information bias cannot be avoided, since the identification of cases was performed by either PCR or Ag-RDT. Ag-RDT is widely acknowledged to have inferior test performance, particularly sensitivity, relative to PCR; hence, misclassification of infection status might occur. Because the unvaccinated group might have a larger fraction of severe cases, and because the admission protocols of many Thai hospitals obliged severe cases to undergo PCR prior to admission, it is possible that our VE estimates were underestimated. Moreover, the VE estimate might be diluted if the proportion of participants undergoing Ag-RDT did not vary much by vaccination status, especially among asymptomatic or mild cases (non-differential misclassification bias). However, the volume of cases identified solely by Ag-RDT in the Co-Lab system was still far lower than that of PCR-confirmed cases, and we included only professional-use Ag-RDT while excluding self-test Ag-RDT. Therefore, such bias might not severely compromise the result validity, and the potential marginal underestimation of the effect suggests that the true VE might be even higher than the values observed in this study. Last, the measures gained from a test-negative case-control design do not always reflect those acquired from a population-based case-control design. It is universally accepted that the true number of COVID-19 cases is under-reported, as some infectees are asymptomatic or have very mild symptoms, making them unaware of their infective status. In other words, the cases identified by the Co-Lab system do not necessarily mirror the true case volume in the population. Nonetheless, we deemed the test-negative design practically valid for studying VE in this context, since the design has key advantages in controlling for similar participation rates, initial presentation, and diagnostic suspicion tendencies between cases and controls [34].
Conclusions
Though the degree of protection against any infection varied across vaccine regimens, all regimens revealed favorable effects against severe infection. As the effectiveness of two-dose regimens declined over time, a third-dose booster shot plays a critical role for a country to achieve population herd immunity. The mix-and-match of vaccine regimens demonstrated acceptable outcomes with regard to protection against both any and severe infection. A viral vector vaccine followed by an mRNA vaccine exhibited the greatest protection level. Heterologous prime-boost regimens should be considered as an alternative to address vaccine shortages and accelerate the national vaccine rollout plan. Further monitoring of the effectiveness of various vaccine regimens, while accounting for the advent of many more SARS-CoV-2 variants in the future, is recommended.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/vaccines10071080/s1, Figure S1: Number of cases and controls over months, Figure S2: Proportion of each vaccination status over months, and Table S1: Descriptive statistics of the time lag between laboratory collection date and last vaccination date for the vaccinees whose laboratory collection date occurred at least 90 days after the last vaccination date.

Funding: The funding for field investigation was from the DDC core resources. The publication fee was supported by the Thailand MOPH-U.S. CDC Coordinating Unit.
Institutional Review Board Statement:
We followed the tenets of the Declaration of Helsinki and obtained ethical clearance from the institutional ethics committee of the DDC, MOPH (letter head: FWA00013622). All individual information was encrypted from the beginning of the data retrieval process. The findings were presented in a way that could not be traced back to each study individual.
Informed Consent Statement:
As we used secondary datasets from the MOPH, direct informed consent from the participants was not applicable.

Data Availability Statement: Not applicable.
"year": 2022,
"sha1": "3616dcfd7436ffa431e1f50cfb1d1b2c1a488a8f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-393X/10/7/1080/pdf?version=1657029002",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "659c8688dbaf7f4e556b3a548ed1660194c4c50a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
The effect of myeloperoxidase isoforms on biophysical properties of red blood cells
Myeloperoxidase (MPO), an oxidant-producing enzyme stored in the azurophilic granules of neutrophils, has recently been shown to influence red blood cell (RBC) deformability, leading to abnormalities in blood microcirculation. Native MPO is a homodimer consisting of two identical protomers (monomeric MPO) connected by a single disulfide bond, but in inflammatory foci, monomeric MPO (hemi-MPO) can also be produced as a result of disulfide cleavage. This study investigated whether the two MPO isoforms have distinct effects on the biophysical properties of RBCs. We have found that hemi-MPO, as well as the dimeric form, binds to glycophorins A/B and band 3 protein on the RBC plasma membrane, leading to reduced cell resistance to osmotic and acidic hemolysis, a reduction in cell elasticity, and significant changes in cell volume, morphology, and the conductance of RBC plasma membrane ion channels. Furthermore, we have shown for the first time that both dimeric and hemi-MPO lead to phosphatidylserine (PS) exposure on the outer leaflet of the RBC membrane. However, the effects of hemi-MPO on the structural and functional properties of RBCs were lower compared with those of dimeric MPO. These findings suggest that the ability of the MPO protein to influence RBC biophysical properties depends on its conformation (dimeric or monomeric isoform). It is intriguing to speculate that the appearance of hemi-MPO in blood during inflammation can serve as a regulatory mechanism aimed at reducing the abnormalities in the RBC response induced by dimeric MPO.
MPO can also regulate the function of immune and nonimmune cells via its nonenzymatic effects. MPO binding to the cell surface of platelets [9,10], neutrophils [11,12], and erythrocytes [13,14] can activate intracellular signaling processes, leading to changes in the structural and functional properties of these cells.
Native MPO, released into the extracellular space as a result of neutrophil degranulation, is a homodimer consisting of two identical protomers connected by a single disulfide bond, each protomer containing a light chain, a heavy chain, and heme [15]. Dimeric MPO is assembled from monomers at the stage of promyelocyte differentiation into granulocytes, yielding a dimeric, glycosylated, heme-containing MPO [16,17].
Under in vitro conditions, the monomeric form of MPO, termed hemi-myeloperoxidase (hemi-MPO), can be easily formed by reductive cleavage and alkylation of the disulfide bond linking the two identical protomers of native MPO [18]. Recently, we have shown that monomeric MPO can be formed in vitro by HOCl-induced disulfide bond oxidation [19]. These results suggest the possibility of hemi-MPO formation in inflammatory foci, where the generation of reactive halogen species is increased and various redox reactions are initiated. Indeed, we have recently shown the presence of hemi-MPO in the plasma of patients with marked inflammation [20]. Under in vivo conditions, the appearance of hemi-MPO is also possible as a result of incomplete processing to the mature enzyme [16,17].
A question of current interest is whether the functional properties of the two MPO isoforms are different or similar, and whether hemi-MPO, like the dimeric form, is able to bind to the cell surface and regulate intracellular signaling processes.
Recently, we have shown that hemi-MPO induced a cytosolic Ca 2+ rise, as well as lysozyme and elastase degranulation, in human neutrophils, but these effects were much weaker than those observed in the case of dimeric MPO [20]. It should be noted that hemi-MPO has the same peroxidase and chlorinating activity as dimeric MPO and retains its bactericidal ability [16,17,21].
In this work, we carried out a comparative analysis of the effects of hemi-MPO (obtained by disulfide bond reduction of dimeric MPO) and dimeric MPO on the structural and functional properties of red blood cells (RBCs). We have shown that hemi-MPO, as well as the dimeric form, binds to glycophorins A/B and band 3 protein on the RBC plasma membrane, leading to changes in transmembrane potential and RBC morphology, reduced RBC deformability, and reduced resistance to hemolysis. We demonstrated for the first time that both dimeric and hemi-MPO induce the exposure of phosphatidylserine (PS) on the outer surface of the RBC membrane. However, all observed effects of hemi-MPO were significantly weaker than those of dimeric MPO. According to these data, it is intriguing to speculate that decomposition of native MPO into monomers in vivo may serve as a regulatory mechanism aimed at correcting RBC function under inflammatory conditions.
Isolation of dimeric MPO
The HL-60 cell line (promyelocytic leukemia) was used as a source of dimeric MPO. MPO isolated from HL-60 cells was identical to MPO isolated from human neutrophils by size-exclusion chromatography, SDS-PAGE, Western blotting, and N-terminal sequence analysis, and had the same peroxidase and chlorinating activities [23]. Cells were cultivated at 37 °C and 100% humidity in RPMI-1640 medium containing 10% FCS, 2 mM glutamine, and 25 mM HEPES buffer (pH 7.4), in roller bottles for suspension culture. Once a week, cells were sedimented by centrifugation at 1500 g, the pellet was resuspended in a minimum volume of fresh medium, and 1/5 of the volume of this cell suspension was transferred to a roller bottle containing fresh medium, while the remaining cells were washed three times with phosphate-buffered saline (PBS, 10 mM Na 2 HPO 4 /KH 2 PO 4 , 137 mM NaCl, 2.7 mM KCl, pH 7.4), resuspended in 2 volumes of 100 mM Na-acetate buffer (pH 4.7), and frozen. Dimeric MPO was isolated from the extract of thawed HL-60 cells lysed by ultrasound (44 kHz) and purified by affinity chromatography on heparin-Sepharose, hydrophobic chromatography on phenyl-Sepharose, and gel filtration on Sephacryl S-200 HR [24]. Using this method, it is possible to isolate a homogeneous preparation of dimeric MPO with a high specific activity and a purity index (A 430 /A 280 ) greater than 0.85.
Preparation of hemi-MPO [25]
The hemi form of MPO was prepared by treating dimeric MPO (145 μM) with 2-mercaptoethanol (1:4 molar ratio of MPO to 2-mercaptoethanol) for 30 min at 37 °C in 100 mM Na-carbonate buffer, pH 9.4, as described elsewhere [18,25]. SH-groups were then blocked with iodoacetamide for 30 min at 4 °C (1:20 molar ratio of MPO to iodoacetamide). The resulting protein solution was concentrated in VivaSpin 20 ultrafiltration units (Sartorius, Germany) with a molecular weight cut-off of 30 kDa, with the buffer being exchanged for 100 mM Na-acetate buffer (pH 5.5). Traces of dimeric MPO were separated from hemi-MPO by gel filtration on a Sephacryl S-200 HR column (114 × 1.5 cm) equilibrated with 100 mM Na-acetate buffer (pH 5.5). SDS-PAGE under non-reducing conditions showed a complete absence of the dimeric form in the hemi-MPO preparation. It was shown that there were no differences in peroxidase, chlorinating, and bactericidal activity between hemi-MPO and dimeric MPO [25]. The concentrations of dimeric and hemi-MPO were determined spectrophotometrically using an extinction coefficient of 112,000 M −1 ·cm −1 per heme of MPO.
Isolation of RBCs
Washed RBCs were obtained after two centrifugation cycles at 400 g for 5 min of capillary blood (100 µl) in 10 ml of PBS, or of venous blood collected in tubes containing 3.8% (w/v) sodium citrate as anticoagulant at a ratio of 9:1, and were stored in PBS containing 10 mM d-glucose at 4 °C. Washed RBCs from capillary blood (1% hematocrit, unless otherwise indicated) were used for the AFM, hemolysis, patch-clamp, and flow cytometry assays, whereas washed RBCs from venous blood were used to prepare RBC ghosts (RBCGs) by hypoosmotic hemolysis. Venous blood samples were obtained from healthy donors at the Federal State Budgetary Scientific Institution "Institute of Experimental Medicine". All blood donors were volunteers and gave informed consent.
RBCGs preparation
Washed RBCs were mixed with cold hemolysis buffer (10 mM Tris-HCl, 1 mM EDTA, pH 7.6, 4 °C) at a 1:20 ratio by volume and incubated at 4 °C for 5 min. Then, the sample was centrifuged twice at 30,000 g (30 min, 4 °C), and the RBCG pellet was resuspended in cold hemolysis buffer: in 10 volumes after the first centrifugation and in 3 volumes after the second. The final RBCG suspension was used for downstream procedures.
Detection of MPO-binding proteins using ligand western blot assay
RBCGs were lysed in SDS-Tris sample buffer (125 mM Tris-HCl, pH 6.8, 2% SDS, 0.1% 2-mercaptoethanol, 0.001% bromophenol blue, and 50% glycerol) at a ratio of 1:5 by volume, and 100 µg of total protein was loaded per well of the polyacrylamide gel [26]. Using a semi-dry method [27], the separated proteins were transferred onto nitrocellulose membranes, and the blocking procedure was performed using BSA-T blocking solution (1% BSA and 0.05% Tween 20 in PBS). To detect RBC proteins that bind the MPO isoforms, the membranes were incubated for 30 min with hemi- or dimeric MPO in BSA-T solution, followed by exposure for 1 h to HRP-labeled rabbit anti-human MPO antibody. Each step was accompanied by washing the membranes three times with BSA-T solution for at least 5 min per washing step. The peroxidase activity was visualized using the 4-chloro-1-naphthol plus H 2 O 2 system. In the absence of the HRP-labeled antibody, basal MPO peroxidase activity was not manifested. There was no difference between MPO and hemi-MPO in binding to the horseradish peroxidase (HRP)-labeled antibody against MPO, as shown in control dot-blotting experiments. The identity of the MPO-binding protein bands on SDS-PAGE gels was confirmed by mass spectrometry after in situ tryptic digestion [28].
Hemolysis detection
A suspension of washed RBCs (30 µl), treated or not with monomeric/dimeric MPO, was added to 60 mM NaCl solution (300 µl) to induce hypotonic hemolysis, or to phosphate-citrate buffer containing 155 mM NaCl and 4.1 mM Na 2 HPO 4 /7.9 mM citric acid (300 µl) to induce acidic hemolysis. The process of hemolysis was recorded as changes in light transmission at 670 nm and 37 °C of constantly stirred cell suspensions using an AP2110 analyzer (SOLAR, Minsk, Belarus). To quantify the hemolysis process, the following parameters were used: G, the maximal extent of hemolysis, i.e., the maximal level of light transmission of the cell suspension at the plateau; and t 50 , the time point at which the change in light transmission reached its half-maximal value.
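As an illustration of how G and t 50 can be read off a recorded transmission trace, consider the following sketch; the arrays are placeholders, and a monotonically rising transmission curve is assumed.

```python
import numpy as np

def hemolysis_params(t, transmission):
    """Return (G, t50) from a light-transmission time course.

    G   -- maximal extent of hemolysis (plateau transmission)
    t50 -- time at which the transmission change reaches half-maximum
    Assumes the transmission rises monotonically toward the plateau.
    """
    t = np.asarray(t, dtype=float)
    tr = np.asarray(transmission, dtype=float)
    G = tr.max()
    half = tr[0] + 0.5 * (G - tr[0])
    t50 = float(np.interp(half, tr, t))  # invert the curve at half-maximum
    return G, t50

# Toy trace: a saturating rise sampled once per second
t = np.arange(0, 600)
trace = 100 * (1 - np.exp(-t / 120.0))
print(hemolysis_params(t, trace))  # G ~ 99.3, t50 ~ 82 s
```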
Atomic force microscopy (AFM) measurements
RBCs were treated with monomeric/dimeric MPO for 10 min at room temperature and then fixed in 1.5% glutaraldehyde for 30 min. Fixed RBCs were washed by four-step centrifugation at 400 g for 3 min, and the RBC pellet was resuspended twice in PBS and twice in distilled water. Washed RBCs were placed on a glass slide and air-dried for several hours. All steps were performed at room temperature.
The images of the RBC surface membrane were obtained using an NT-206 microscope (MicroTestMachines, Minsk, Belarus) operating in contact mode using the microscope software. Standard NSC 11A cantilevers (MikroMasch Co., Estonia) with a spring constant of 3 N/m were used. Tip radii were checked using a standard TGT01 silicon grating from NT-MDT (Moscow, Russia) and were 10 nm for topography visualization and 60 nm for cell stiffness determination. Surface profiles were obtained using scan sizes of 14 × 14 mm at a scan rate of 3 Hz. The resulting image (topographic image) was recorded as a surface height distribution Z(X, Y). For each scanned cell, the height H (maximum cell height), the concave depth h (minimum height of the cell), the RBC diameter d, and the relative concave depth k were determined. The force spectroscopy regime was used to determine the local elastic properties of RBCs. At least three force curves from the peripheral part of randomly selected cells (7-10 cells) for each treatment were recorded. The cell Young's modulus was calculated as described earlier [29] using the Hertz model and used as a measure of RBC stiffness. The indentation depth was 15 nm to avoid the influence of the rigid substrate on the magnitude of the estimated Young's modulus [30].
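For orientation, a minimal sketch of extracting the Young's modulus from a single force-indentation point via the Hertz model for a spherical indenter is given below; the incompressibility assumption (Poisson ratio 0.5) and the toy numbers are our illustrative choices, not values stated here.

```python
import math

def young_modulus_hertz(force, indentation, tip_radius=60e-9, poisson=0.5):
    """Invert the spherical-indenter Hertz relation
    F = (4/3) * E / (1 - v**2) * sqrt(R) * d**1.5  for E (in Pa).

    force       -- applied force (N)
    indentation -- indentation depth (m)
    tip_radius  -- indenter radius (m); 60 nm tip used for stiffness here
    poisson     -- Poisson ratio; 0.5 assumes an incompressible cell
    """
    return (3.0 * force * (1.0 - poisson**2)
            / (4.0 * math.sqrt(tip_radius) * indentation**1.5))

# Toy example: 1 nN at the 15 nm indentation depth quoted in the text
print(young_modulus_hertz(1e-9, 15e-9))  # ~1.25e6 Pa, i.e., on the MPa scale
```

In practice, E would be averaged over the several force curves recorded per cell, as described above.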
Light microscopy
To observe changes in RBC morphology induced by the MPO isoforms, RBCs were suspended in PBS, pH 7.4, with 1 mM CaCl 2 , placed in a Petri dish, and transferred to an optical microscope for analysis. Transmitted-light images of the RBCs were recorded before (control) and after MPO addition to the cell suspension at intervals of 15-60 s for 15 min using an Olympus BX51WI optical microscope (Tokyo, Japan) with a LUMPlan objective (40×/0.80) and an OSCAR 45 digital camera (Taiwan). Quantitative analysis was performed using the Meco-Hemo analyzer (Mecos, Russia), counting approximately 500 cells per image.
Measurement of RBC membrane potential by patch-clamp technique
Washed RBCs (5 µl) were carefully placed on the bottom of a Petri dish filled with 5 ml of external buffer solution (145 mM NaCl, 10 mM HEPES, 10 mM d-glucose, 5 mM KCl, 1 mM MgCl 2 , 1 mM CaCl 2 , pH 7.4, osmolarity 290 mOsm). Patch pipettes with a tip resistance of 10-20 MΩ were prepared from borosilicate glass before each experiment using a Sutter P-97 puller (HEKA Elektronik, GmbH) and filled with internal buffer solution (5 mM NaCl, 10 mM HEPES, 145 mM KCl, 1 mM MgCl 2 , 0.3 mM CaCl 2 , 3 mM EGTA, pH 7.2, osmolarity 280 mOsm). An MP-225 micromanipulator (Sutter Instrument) was used to bring the patch pipette close to a single RBC, and then a small negative pressure was applied to the pipette, leading to giga-seal formation (3-10 GΩ). Patch-clamp recordings of membrane potential were carried out in the cell-attached configuration in current-clamp mode using a HEKA EPC 8 amplifier (HEKA Elektronik, GmbH), filtered at 1 kHz. When the cell-attached configuration was successfully achieved and the membrane potential reached constant values (15-20 mV), dimeric or hemi-MPO was added to the bath solution and changes in membrane potential were recorded.
Flow cytometry
To probe PS exposure, washed RBCs were suspended at 0.015% hematocrit in PBS, pH 7.4, with 2 mM CaCl 2 , treated with monomeric/dimeric MPO or ionomycin/PMA for 15 min at room temperature, stained with Annexin V-Alexa Fluor 647 (100 µg/ml) protected from light for 5 min at room temperature, and used immediately for the flow cytometry assay. PS exposure was measured in the FL-6 channel (660 nm) excited at 638 nm. 10,000 cells were measured per sample.
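A toy sketch of quantifying the PS-positive fraction from single-cell fluorescence intensities is shown below; the simple threshold gate and the synthetic data are illustrative assumptions, not the gating strategy used here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic annexin V fluorescence for 10,000 cells (arbitrary units)
fluorescence = rng.lognormal(mean=2.0, sigma=0.6, size=10_000)
threshold = 30.0                                  # assumed positivity gate
ps_positive = float((fluorescence > threshold).mean() * 100)
print(f"PS-positive cells: {ps_positive:.1f}%")
```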
To measure intracellular Ca 2+ , RBCs were incubated with 3.5 µM Fluor-3/AM in PBS for 60 min at 37 °C in the dark, followed by centrifugation (300 g, 7 min) and three subsequent washes in PBS. Fluor-3-loaded RBCs were exposed to 100 nM hemi-MPO, 100 nM dimeric MPO, or 1 µM ionomycin (as a positive control), and aliquots were sampled every minute to detect changes in Fluor-3 fluorescence (525 nm) excited at 488 nm.
Both flow cytometric assays were performed on a Navios (Beckman Coulter, USA) system.
Statistical analysis
Data are expressed as mean ± SD or mean ± SEM, as indicated in the figure and table captions. To analyze differences between the mean values of two groups, Student's t test was used. Differences between the mean values of more than two groups were analyzed by ANOVA followed by the Student-Newman-Keuls test. Statistical analysis was performed using Origin 7.0 (Northampton, USA) or Statistica software. A p value < 0.05 was considered significant.
Interaction of RBCG proteins with hemi-MPO
Recently, it was shown that binding of dimeric MPO to the RBC surface is based mostly on electrostatic interactions involving sialic acids, and that its main targets are band 3 protein (B3) and glycophorins A and B [13,31]. To check whether hemi-MPO binds to the same targets on the RBC surface, RBCG proteins were separated by SDS-PAGE (Fig. 1, panel 1) and transferred to a nitrocellulose membrane. Their interaction with hemi-MPO and dimeric MPO was analyzed by ligand Western blotting using rabbit anti-MPO antibodies labeled with HRP. Rabbit antibodies against MPO did not react with RBCG proteins without the preliminary addition of MPO (Fig. 1, panel 2). Five dimeric MPO-binding regions were revealed using the ligand Western blot assay (Fig. 1, panel 3), corresponding to band 3 protein (B3) and glycophorins A and B (GpA2, GpAB, GpB2, GpA). These glycoproteins were identified earlier with the help of periodic acid-Schiff reagent and by mass spectrometry [13]. Similar patterns of hemi-MPO binding to the five protein areas were detected (Fig. 1, panel 4). These results indicate that hemi-MPO, as well as dimeric MPO, binds to band 3 protein and glycophorins A and B of the RBC plasma membrane.
To be sure that hemi- and dimeric MPO stably bind to RBC surface proteins in their native environment, we incubated washed RBCs with the MPO isoforms for 15 min and then measured the MPO concentration in the cell supernatants as described earlier [32]. The decrease of both dimeric MPO and hemi-MPO content in the cell supernatant (Supplementary Materials, Fig. S1) indicates that both isoforms stably bind to RBCs.
Effect of hemi-MPO on the RBC elastic properties and their resistance to hemolysis
Hemolysis was initiated by reducing the ionic strength of the medium (osmotic hemolysis) or pH (acidic hemolysis).
As shown in Fig. 2a, b, hemi-MPO as well as dimeric MPO augmented acidic and osmotic hemolysis in a dose-dependent manner. Thus, the degree of osmotic hemolysis (Fig. 2c) increased, and the half-time of acidic hemolysis decreased (Fig. 2d), for RBCs treated with both MPO forms in comparison with control, indicating a decrease in cell resistance to hemolysis. However, the effect of hemi-MPO was lower in comparison with native dimeric MPO (Fig. 2c, d). It should be noted that human lactoferrin (hLF), a positively charged protein unrelated to MPO with a molecular mass (76 kDa) similar to that of hemi-MPO, did not affect acidic or osmotic hemolysis (data not shown), indicating the specificity of the MPO isoforms' effect on RBC resistance to hemolysis.
As MPO can induce the production of hypohalous acids, which are known to initiate RBC hemolysis [33,34], we next examined the observed effects in the presence of an inhibitor of MPO enzymatic activity, 4-ABH. As shown in Fig. 2e, 4-ABH (50 µM) failed to abrogate the hemi-MPO-mediated increase in hemolysis. Furthermore, under the hypotonic and acidic conditions used in the present study, MPO peroxidase activity decreased by at least 97%, and 4-ABH (50 μM) almost completely suppressed the remaining MPO enzymatic activity (data not shown).
Differences between the effects of hemi-MPO and dimeric MPO on RBC mechanical properties were also shown by AFM. To assess the RBC surface elastic properties and cell stiffness, we determined the local Young's modulus for intact RBCs and RBCs treated with both MPO isoforms (Fig. 2f). Hemi-MPO and dimeric MPO caused an increase in Young's modulus values by approximately 10% and 30%, respectively (Fig. 2f). These data indicate that both MPO isoforms lead to RBC membrane stabilization and an increase in mechanical stiffness, but to varying degrees.
Thus, it can be concluded that binding of hemi-MPO, as well as native MPO, to the RBC plasma membrane initiates similar changes in cell structural and functional properties. These hemi-MPO effects do not depend on the catalytic activity of the enzyme and are weaker than in the case of dimeric MPO.
Effect of hemi-MPO on size and morphology of RBCs
We have recently shown that RBC treatment with native dimeric MPO led to an increase in cell volume, as evidenced by a marked increase in the number of stomatocytes and microspherocytes [13]. Moreover, the maximum change in cell morphology occurred within the first two minutes, and then the cells reverted to the morphology of normocytes. In this work, we examined the effect of hemi-MPO on cell morphology and compared it with that of dimeric MPO.
During the observation period (15 min), the morphology of control (untreated) RBCs did not change over time. As expected, addition of dimeric MPO to the RBC suspension induced cell swelling during the first 15 s, as evidenced by the appearance of significant amounts of stomatocytes (Fig. 3e) and a reduction in echinocyte number (Fig. 3b), and after 15 min of observation it led to a significant rise in the number of microspherocytes (Fig. 3d). Although hemi-MPO did not induce significant changes in the numbers of stomatocytes and echinocytes (Fig. 3e, b), the final increase in the number of microspherocytes was significant (Fig. 3d); however, this effect was less pronounced than in the case of dimeric MPO. It should be noted that the observed appearance of microspherocytes in the cell suspension indicates an MPO-induced increase in cell volume. Indeed, changes in RBC volume induced by both MPO isoforms were observed by AFM (Fig. 4, Table 1). RBC treatment with dimeric MPO led to a decrease in concave depth, as evidenced by a significant change in the parameters h and k, while the other linear cell dimensions (height H and diameter d) were unaffected (Table 1, Fig. 4c). In the presence of hemi-MPO, a decrease in the relative concave depth (k) was also observed; however, this change was smaller compared with native dimeric MPO (Fig. 4).
Thus, the obtained results indicate that hemi-MPO, similarly to the dimeric isoform of the enzyme, induces changes in RBC morphology and an increase in cell volume, but to a much lesser extent than dimeric MPO.
Hemi-MPO effect on RBC membrane potential
Changes in RBC morphology and volume are closely linked to the ionic conductivity of the plasma membrane. Recently, we have shown that the MPO-induced increase in RBC volume is associated with depolarization of the plasma membrane, while the subsequent restoration of cell morphology and volume is associated with plasma membrane hyperpolarization [13]. In the present work, we also examined whether hemi-MPO influences the RBC membrane potential. Using the cell-attached patch-clamp technique, it was shown that, as in the case of dimeric MPO, the addition of hemi-MPO to the RBC suspension induced a two-stage change in membrane potential: fast membrane depolarization, followed by prolonged hyperpolarization (more than 10 min) (Fig. 5a). As expected, the effects of hemi-MPO at both stages, depolarization and hyperpolarization, were lower compared with the dimeric isoform of MPO (Fig. 5a, b).
It should be noted that all the described changes in the structural and functional properties of RBCs induced by both MPO isoforms were observed only in medium containing Ca 2+ ions. No apparent changes in morphology, cell size, or ion permeability occurred in calcium-free medium (data not shown). Indeed, we have shown previously [13] that binding of native MPO to the RBC plasma membrane induces Ca 2+ entry into the cytosol of the cells. In the present work, hemi-MPO was also capable of inducing a rise in cytosolic Ca 2+ concentration, as measured by flow cytometry in Fluor-3-loaded RBCs (Fig. 6), but this effect was lower compared with the Ca 2+ response induced by dimeric MPO and the Ca 2+ ionophore ionomycin. Since an intracellular Ca 2+ rise can activate phospholipid scramblase, which bidirectionally and nonspecifically transports phospholipids, leading to PS exposure on the cell's external leaflet [35], and considering recent data that PS exposure is controlled by membrane hyperpolarization due to Ca 2+ -dependent Gardos channel opening [36], it was intriguing to investigate whether native dimeric and hemi-MPO lead to PS exposure on the RBC membrane.

Fig. 3 Changes in RBC morphology after incubating the cells with dimeric MPO (100 nM) or hemi-MPO (100 nM). The number (in %) of normocytes (a), echinocytes (b), cup-shaped cells (c), microspherocytes (d), and stomatocytes (e) was calculated 15 s, 2 min, and 15 min after MPO addition. The data are presented as mean ± SEM (n = 500-550). *p < 0.05 comparing means to untreated control.
PS exposure in RBCs, treated with dimeric and hemi-MPO
To determine whether the MPO isoforms are able to induce PS exposure on the outer RBC leaflet, cells were preincubated with native dimeric or hemi-MPO for 15 min and stained with annexin V for PS detection by flow cytometry. As positive controls, we used the calcium ionophore ionomycin (1 µM) and the protein kinase C activator PMA (5 µM), which have been shown to induce PS exposure in RBCs [36][37][38]. As shown in Fig. 7, RBC treatment with both dimeric MPO and hemi-MPO led to a significant increase in PS exposure, by 34% and 22%, respectively. The effect of dimeric MPO was comparable to that of ionomycin and PMA. However, consistent with the previous results, the effect of hemi-MPO was less pronounced.
Discussion
Today, along with the extensive investigation of MPO enzymatic activity, great attention is paid to its ability to bind to the plasma membrane of blood cells and regulate their structural and functional properties. This ability does not depend on the catalytic activity of the enzyme but is largely due to the structural peculiarities of the MPO molecule. In this work, we have shown that the decomposition of dimeric MPO into monomers is accompanied by a decrease in its ability to regulate the structural and functional properties of red blood cells.
A peculiarity of the MPO structure is that mature MPO, which is stored in the azurophilic granules of fully differentiated neutrophils, is a dimer (~145 kDa) consisting of identical heme-containing protomers connected by a disulfide bond. Native dimeric MPO is able to bind to the plasma membrane and regulate the functional responses of various cells.
Thus, binding of native MPO to CD11b/CD18, a major neutrophil adhesion receptor, leads to tyrosine phosphorylation of a number of proteins and, as a result, stimulates degranulation [12] and adhesion, and also increases the survival of these cells [39]. However, as has been shown previously [20], abnormal MPO conformation is accompanied by a decrease in its ability to regulate the functional activity of neutrophils. Reductive alkylation of MPO abolishes its ability to enhance neutrophil adhesion [40]. Recently, we have shown that hemi-MPO, as well as MPO modified by hypochlorous acid (MPO-HOCl), loses its ability to prime the NADPH oxidase of neutrophils [20]. In addition, it was found that hemi-MPO stimulated the rise in cytosolic calcium and lysozyme exocytosis in neutrophils to a much lesser extent than dimeric MPO, and the capacity of monomeric MPO to delay neutrophil apoptosis and increase their lifespan was weaker than that of dimeric MPO [20]. Previous studies with RBCs demonstrated that MPO-HOCl, in contrast to native dimeric MPO, lost its ability to bind to the RBC plasma membrane and regulate RBC structural and functional properties [13]. Apparently, this effect was due to a decrease in the net positive charge of the MPO molecule resulting from halogenation of its amino groups by HOCl, which led to a decrease in the electrostatic interaction with negatively charged RBC plasma membrane proteins. In the present study, we have shown for the first time that, in contrast to MPO-HOCl [13], hemi-MPO, obtained from native MPO by disulfide cleavage, retains the ability of the enzyme to bind to the RBC surface (Fig. 1). Since dissociation of dimeric MPO into two hemi-MPO molecules by disulfide bond reduction preserves the charge of the hemi-MPO molecules, the electrostatic interaction of hemi-MPO with RBC proteins is apparently conserved.
Binding of hemi-MPO, like binding of dimeric MPO, to RBC membrane proteins reduced cell resistance to osmotic and acidic hemolysis as well as cell elasticity (Fig. 2) and led to significant changes in cell volume and morphology (Table 1, Figs. 3, 4), the conductance of plasma membrane ion channels (Fig. 5), and the cytosolic Ca 2+ concentration of RBCs (Fig. 6). It has been shown for the first time that both dimeric and hemi-MPO contribute to the formation of PS-positive RBCs (Fig. 7). These results are of great importance, as the exposure of PS on the outer membrane leaflet of RBCs serves as a signal for eryptosis, a mechanism for RBC clearance from the blood circulation, and also leads to adhesion of RBCs to the endothelium in diseases such as sickle cell anemia, malaria, and diabetes [41].
However, the effects of hemi-MPO on the structural and functional properties of RBCs were weaker than those of dimeric MPO. A possible reason is the presence of two receptor-binding sites on the native dimeric MPO molecule in contrast to one binding site on hemi-MPO. Dimeric MPO, being a bivalent ligand, can, upon binding to its corresponding receptors, lead to their clustering, which may have a significant effect on intracellular signaling [42,43]. On the other hand, it was shown that the MPO-binding proteins on the RBC membrane, band 3 protein and glycophorin A, form a complex [44,45]. Furthermore, as bivalent ligands may possess higher binding affinity for clustered receptors than monovalent ligands [42,43], the effect of dimeric MPO on the structural and functional properties of RBCs may be more pronounced than that of hemi-MPO.
Thus, the ability of the MPO protein to influence RBC biophysical properties depends on its conformation (dimeric or monomeric isoform). It is intriguing to speculate that the appearance of hemi-MPO in blood during inflammation, as shown earlier [20], can serve as a regulatory mechanism aimed at reducing abnormalities in the RBC response.
"year": 2019,
"sha1": "7cd41d89a2c683a3884f21be6877d4cbd9d11c90",
"oa_license": "CCBY",
"oa_url": "https://elib.bsu.by/bitstream/123456789/234626/1/2019%20The%20efect%20of%20myeloperoxidase%20isoforms%20on%20biophysical%20properties.pdf",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "8f7cebecffd82176820e400647562ea85933133c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
X-ray-activated long persistent phosphors featuring strong UVC afterglow emissions
Phosphors emitting visible and near-infrared persistent luminescence have been explored extensively owing to their unusual properties and commercial interest in their applications such as glow-in-the-dark paints, optical information storage, and in vivo bioimaging. However, no persistent phosphor that features emissions in the ultraviolet C range (200–280 nm) has been known to exist so far. Here, we demonstrate a strategy for creating a new generation of persistent phosphor that exhibits strong ultraviolet C emission with an initial power density over 10 milliwatts per square meter and an afterglow of more than 2 h. Experimental characterizations coupled with first-principles calculations have revealed that structural defects associated with oxygen introduction-induced anion vacancies in fluoride elpasolite can function as electron traps, which capture and store a large number of electrons triggered by X-ray irradiation. Notably, we show that the ultraviolet C afterglow intensity of the yielded phosphor is sufficiently strong for sterilization. Our discovery of this ultraviolet C afterglow opens up new avenues for research on persistent phosphors, and it offers new perspectives on their applications in terms of sterilization, disinfection, drug release, cancer treatment, anti-counterfeiting, and beyond.
Introduction
Persistent luminescence is an optical phenomenon in which a material stores excitation energy in excited states and the resulting luminescence lasts for an appreciable time after the excitation has stopped 1,2 . Phosphors exhibiting persistent luminescence have received significant attention and have been commercialized for a wide range of applications [3][4][5][6][7][8][9][10][11][12] . Since the pioneering work of Matsuzawa et al. 13 , blue persistent CaAl 2 O 4 :Eu 2+ , Nd 3+ (ref. 14 ), green persistent SrAl 2 O 4 :Eu 2+ , Dy 3+ (ref. 13 ), and red persistent Y 2 O 2 S:Eu 3+ , Mg 2+ , and Ti 4+ (ref. 15 ) have been developed. Recent efforts have resulted in the creation of a series of near-infrared (NIR) persistent phosphors, such as Cr 3+ -doped zinc gallogermanates, that can be employed for in vivo biological imaging and the in vitro targeting of cancerous cells [16][17][18][19][20][21][22][23][24][25][26] . Despite these significant achievements, most of the persistent phosphors reported thus far luminesce in the visible and NIR spectral regions 27 , and no persistent phosphors exhibiting ultraviolet C (UVC) luminescence are known to exist. It is well known that UVC light in the 200-280 nm range is germicidal, i.e., it can kill bacteria, viruses, and other pathogens by destroying nucleic acids and disabling their ability to multiply 28 . However, the design and synthesis of such a long-lasting phosphor has not been possible thus far.
Two classes of centers, emitters and traps, are required in persistent phosphors. Emitters are optical centers that luminesce after being excited, whereas traps, which are usually associated with defects in host lattices, store excitation energy and then release it slowly to the emitters by virtue of thermal stimulations 2 . Since the emitter determines the emission wavelength, the first required condition for the design of UVC-persistent phosphors is the choice of a suitable emitter that can luminesce in the UVC. Rare earth (RE) ions have been widely adopted as emitters in persistent phosphors owing to their intraconfigurational transitions. It is noted that some RE ions (e.g., Pr 3+ ) possess strong, high-energy, interconfigurational transitions, which thus could be employed as candidate emitters in UVC persistent phosphors [29][30][31] . The second required condition is the selection of the candidate host materials with a large bandgap and appropriate traps. In this respect, fluoride crystalline lattices can be considered because of their relatively large band gaps and the easy creation of anionic defects 32 . The third condition is that the material system can be efficiently activated (i.e., charged) by suitable excitation sources. Given the large bandgap required for the host, X-ray irradiation may be appropriate. Another requirement relates to the persistent time and intensity of the UVC emission, which should be as long as possible and strong enough to satisfy some practical applications.
Here, we demonstrate a strategy for the creation of a new generation of persistent phosphor featuring UVC emission (which we refer to as a UVC persistent phosphor) by judiciously selecting defect-bearing fluoride elpasolite (i.e., Cs 2 NaYF 6 ) as a host and Pr 3+ ions as emitters. The resulting phosphors show bright UVC emissions that can last over 2 h after X-ray irradiation. To our knowledge, our work is the first discovery of persistent phosphors capable of luminescing in the UVC. A broad range of experimental characterizations combined with first-principles calculations suggest that oxygen introduction-induced fluorine vacancies act as electron traps, enabling the system to capture a large number of electrons upon X-ray irradiation. This results in strong UVC persistent luminescence corresponding to the 4f5d-4f 2 transition of Pr 3+ when releasing trapped electrons. We show that the UVC persistent luminescence of this phosphor is strong enough to be used for sterilization. Importantly, the phosphors can be charged repeatedly by X-ray or poststimulated by NIR light in the first biological window, thus offering an attractive prospect for applications, such as the in vivo killing of pathogens and cancer cells. This work offers a protocol for the design and preparation of UVC persistent phosphors and opens up new avenues for a wide variety of practical applications.
Results
We chose elpasolite Cs 2 NaYF 6 as the host and Pr 3+ ions as emitters with the ideas that (1) the Cs element has a strong X-ray absorption capability that makes the system capable of charging by X-ray irradiation, (2) Cs 2 NaYF 6 has a large bandgap and a propensity for the formation of structural defects that are likely to act as electron traps 32 , and (3) the 4f5d-4f 2 transition of Pr 3+ ions can result in UVC emission [29][30][31] . The micrometer-sized phosphor with a nominal composition of Cs 2 NaY (1−x) F 6 :xPr 3+ was synthesized through a solid-state reaction method (Supplementary Fig. 1). The major reflections of X-ray diffraction (XRD) patterns for the products can be indexed with an Fm-3m space group that corresponds to the cubic elpasolite ( Supplementary Fig. 2), consistent with JCPDS no. 74-0043. In this double perovskite structure, both Y and Na coordinate with six fluorine atoms, and doped Pr 3+ ions are expected to substitute for Y 3+ ions (Supplementary Fig. 3). We first characterized the photoluminescence properties of the yielded product. Under 288 nm excitation, the product shows luminescence bands at 486 and 610 nm, which can be assigned to the 3 P 0 → 3 H 4 and 3 P 0 → 3 H 6 transitions of Pr 3+ , respectively 33 (Supplementary Fig. 4). We note that after photoexcitation ceases, the photoluminescence quickly disappears, indicating that excitation at 288 nm cannot result in the charging of this system, which is a necessity for long persistent luminescence. Interestingly, we find that after X-ray irradiation, the sample displayed strong, longlasting UVC persistent luminescence peaking at~250 nm owing to the 4f5d → 3 H 4 electronic transition of Pr 3+ , accompanied by visible bands of 4f 2 -4f 2 intraconfigurational transitions (Fig. 1a). Figure 1b displays the afterglow decay of Cs 2 NaY 0.99 F 6 :0.01Pr 3+ detected at 250 nm following irradiation with an X-ray source for 1000 s that corresponds to a dose of 20 Gy ( Supplementary Fig. 5). We note that all data regarding the afterglow intensity versus time were taken using a spectrofluorometer from 5 min after stopping the X-ray irradiation, to avoid the measurement artifact owing to fast early decay. Clearly, the persistent luminescence of Cs 2 NaY 0.99 F 6 :0.01Pr 3+ can last over 2 h, and after 2 h, the intensity is still over one order of magnitude stronger than the background signal of the detection system (Fig. 1b). Additionally, we find that the afterglow behavior is intimately associated with the concentration of Pr 3+ ions and the X-ray irradiation duration. The optimal concentration is determined to be x = 0.01 ( Supplementary Fig. 6), and 1000 s of X-ray irradiation of Cs 2 NaY 0.99 F 6 :0.01Pr 3+ gives rise to the strongest UVC persistent luminescence ( Supplementary Fig. 7). The afterglow spectra recorded at different times reveal that the lineshape of the luminescence changes with times; the UVC emission decays faster than the visible emissions (Fig. 1b, Supplementary Fig. 8). In addition to the emission bands mentioned above, we note that an emission shoulder at~340 nm in the afterglow spectra occurs, which is confirmed to be from the host material ( Supplementary Fig. 9).
We further confirmed the UVC emission by using a UVC imager. Figure 1c and Supplementary Fig. 10a display the dependence of the UVC emission intensity on the decay time for Cs2NaY0.99F6:0.01Pr3+ irradiated by the X-ray for 1000 s. Clearly, the intensity decays much faster during the first several minutes and then decreases slowly (Supplementary Fig. 10b). The initial afterglow UVC emission intensity of Cs2NaY0.99F6:0.01Pr3+ after stopping the X-ray irradiation was estimated to be over 10 milliwatts per square meter (Materials and methods). Heating the 24-h-decayed phosphor at 200 °C gave rise to strong UVC luminescence again, with a lasting time of over 5 min (Fig. 1d, Supplementary Fig. 10c). Interestingly, we find that these trapped electrons can also be liberated by laser irradiation with various photon energies (Fig. 1e-g, Supplementary Fig. 11). Specifically, irradiation with 450 nm light causes stronger emissions than irradiation with 730 and 793 nm light, suggesting that more residual trapped electrons are located in deeper traps. Collectively, these observations provide an indication that the UVC persistent phosphors developed here feature a diversity of traps, and that a large number of residual electrons is located in deep traps after the room-temperature release of electrons in shallow ones. We underscore that this characteristic is extremely attractive and of vital importance for some applications of these UVC phosphors. For instance, although a relatively long X-ray irradiation time is currently required to charge the system, the release of stored electrons partly in the form of UVC photons under NIR-light stimuli renders it attractive for the in vivo killing of pathogens and cancer cells.
(Fig. 1 caption) … Cs2NaY0.99F6:0.01Pr3+ and X-ray irradiation for 1000 s. a The afterglow spectra recorded at different times after ceasing X-ray irradiation. The emission band peaking at 250 nm and the emission shoulder at 270 nm can be assigned to the transitions of 4f5d → 3H4 and 4f5d → 3H5, respectively. In addition to the UVC emissions, visible emission bands were also observed. b Afterglow decay detected at 250 nm as a function of time. The data were taken from 5 min after stopping the X-ray irradiation. c UVC images of phosphors taken at different afterglow times; the images after 300 s are shown in Supplementary Fig. 10a. d UVC images of the 24-h-decayed phosphors heated to 200 °C on a hot plate. e-g UVC images of the 24-h-decayed phosphors under laser irradiation with different wavelengths of (e) 793 nm, (f) 730 nm, and (g) 450 nm. The excitation power density is 1.77 W/cm2 for the 793, 730, and 450 nm excitation wavelengths.
The above results clearly show that Pr3+ ions act as emitters, but the identity of the traps in this persistent phosphor remains unclear. To understand the mechanism of the persistent luminescence observed here, we next performed detailed experimental characterizations of the composition, structure, and possible defects in this system using a wide range of techniques. The composition of the yielded powders was first characterized by transmission electron microscopy and energy-dispersive X-ray spectroscopy (TEM-EDS). Interestingly, in addition to the presence of the expected constituent elements of Cs2NaY0.99F6:0.01Pr3+, we find that the oxygen element is nearly homogeneously distributed in the particle (Fig. 2a). This elemental distribution was also verified by the scanning electron microscopy (SEM)-EDS measurements (Supplementary Fig. 12), yielding an average O/F molar ratio of 12.3%. To ascertain that the oxygen distribution is not limited to the surface region, we also performed X-ray photoelectron spectroscopy (XPS) measurements. As illustrated in Supplementary Fig. 13, the phosphor has a significant amount of oxygen, which is observed even after argon plasma etching to remove the outer surface layer. This suggests that the oxygen atoms extend throughout the bulk of the material instead of residing merely on the surface. We surmise that this distribution may be caused by the insufficient fluorination of the oxide precursors by NH4F during synthesis. Additionally, given that Pr ions are expected to play a role in the energy harvest and release within this UVC persistent phosphor, we next probed the oxidation state of Pr ions by Pr LIII-edge X-ray absorption near-edge structure (XANES) spectroscopy (Fig. 2b). To obtain more information concerning the structure of the phosphor, we performed a high-resolution synchrotron XRD measurement. Most of the diffraction pattern is readily indexed with an Fm−3m space group, and some weak extra diffraction lines assigned to yttrium aluminum garnet (YAG) were detected. The occurrence of YAG in the product results from the reaction between the precursors and the corundum boat used for the synthesis. We note that the afterglow performance of phosphors synthesized using different corundum boats does not show much difference (Supplementary Fig. 14), suggesting good reproducibility. Rietveld refinement based on a two-phase model was performed using the general structure analysis system (GSAS) software package 34. Assuming Pr at the Y site and O and F occupying the same Wyckoff site, the Rietveld refinement of the data immediately converged to Rp = 4.38% and Rwp = 5.91% (Fig. 2c, Table S1). The site occupancy factor for the F and O atoms was also refined and determined to be 0.956(3), resulting in a chemical formula of Cs2NaY(Pr)F(O)5.736□0.264, where □ represents anion vacancies. We stress that the existence of a trace amount of Gd3+ in the phosphor, owing to unavoidable contamination from the precursors (the Y and/or Pr precursors), does not notably affect the afterglow behavior (Supplementary Fig. 15a). The occurrence of a large number of fluorine vacancies can be tentatively attributed to the replacement of F− by O2−, which forces the release of F− to satisfy charge neutrality. We also stress that Pr4+ ions are absent in both the as-synthesized and charged products, as suggested by the XANES and electron spin resonance (ESR) spectroscopy, respectively (Fig. 2b, Supplementary Fig. 15b).
Based on this information, we speculate that the chemical formula of the as-synthesized product can be written as Cs2NaY0.99Pr0.01F5.472O0.264□0.264. We point out that this corresponds to a molar ratio of oxygen to fluorine of 13.0% when the YAG present in the product is also taken into account (Table S1), which is comparable to the value from the EDS measurement. Collectively, the structural analysis clearly justifies the existence of anion vacancies in the product, which are thus conceived to act as electron traps.
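The vacancy count quoted above follows directly from the refined site occupancy; below is a minimal arithmetic check in Python, using only the numbers given in the text. Note that the quoted 13.0% O/F ratio additionally counts the oxygen bound in the YAG impurity phase (Table S1), which this sketch does not include.

```python
# Consistency check for the refined anion-site occupancy quoted above.
# The elpasolite anion site has a multiplicity of 6 per formula unit.
anion_sites_per_fu = 6
occupancy = 0.956                         # refined site occupancy factor for F/O

filled = anion_sites_per_fu * occupancy   # (F + O) per formula unit
vacancies = anion_sites_per_fu - filled   # anion vacancies per formula unit
print(f"(F,O) per formula unit: {filled:.3f}")         # 5.736
print(f"vacancies per formula unit: {vacancies:.3f}")  # 0.264

# Splitting the filled sites with the oxygen content assumed in the text
# (Cs2NaY0.99Pr0.01F5.472O0.264) gives the elpasolite-phase O/F ratio only;
# the 13.0% quoted above additionally includes the YAG-bound oxygen.
oxygen = 0.264
fluorine = filled - oxygen                # 5.472
print(f"O/F molar ratio (elpasolite phase): {oxygen / fluorine:.1%}")
```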
To ascertain the possibility of anion vacancy-mediated trapping of electrons in our product, we performed density functional theory (DFT) calculations. We underscore that there are many possibilities for defects or defect complexes in this type of mixed-anion system with vacancies. To simplify the discussion, we primarily focused on the effect of fluorine vacancies on the electronic structure of Cs2NaYF6. We first considered three models for defective Cs2NaYF6, featuring a single fluorine vacancy at the apical site of the [YF6] octahedron and two fluorine vacancies at two apical sites or at one apical and one equatorial site of the [YF6] octahedron. We note that the calculated band gap for pristine Cs2NaYF6 is 9.67 eV, which is comparable to the experimental value 33. The calculated density of states (DOS) for both pristine and defective Cs2NaYF6 is shown in Fig. 3a-d. Interestingly, we find that the creation of one fluorine vacancy at the apical site of the octahedron introduces four in-gap states, at approximately 0.18, 0.60, 1.45, and 2.86 eV below the conduction band minimum (CBM) (Fig. 3b). Similarly, creating two fluorine vacancies at two apical sites of one [YF6] octahedron gives rise to three in-gap states, at approximately 0.24, 1.26, and 2.85 eV below the CBM (Fig. 3c). By contrast, the formation of two fluorine vacancies at one apical and one equatorial site of the octahedron leads to more complex in-gap states, with the deepest defect level lying 3.11 eV below the CBM (Fig. 3d). We also calculated the DOS of defective Cs2NaYF6 with two fluorine vacancies at two apical sites or at one apical and one equatorial site of the [NaF6] octahedron, and we found similar in-gap states (Supplementary Fig. 16). All these results unambiguously indicate that the introduction of fluorine vacancies in Cs2NaYF6 results in the appearance of a series of in-gap defect levels with different depths with respect to the CBM. We stress that the deeper traps predicted here, lying more than 1.3 eV below the CBM, are well supported by the photostimulated luminescence shown in Fig. 1e-g. The existence of fluorine vacancy-related defect levels in our product was also experimentally confirmed by thermoluminescence measurements. Figure 4a displays the thermoluminescence curve of the persistent phosphor at 48 h after stopping the X-ray irradiation. The trap depths (E) relative to the CBM can be calculated by the expression E = 0.002Tm, in which Tm is the temperature at which the thermoluminescence peak reaches its maximum (in kelvin, K) 2. Three traps that are shallow relative to the >1.3 eV deep ones were observed, with activation energies of 0.69, 0.83, and 1.02 eV at 70, 141, and 238 °C, respectively. We note that these shallow defects agree well with the DFT results (Fig. 3b-d), although other structural defects beyond the cases in our DFT calculations (e.g., the formation of [YF3] or [NaF3] due to the loss of three fluorine atoms in one octahedron, or the formation of defect complexes consisting of [YF5], [NaF5] or others) probably also contribute to the formation of these shallow traps.
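The trap-depth estimate used above is a one-line formula; the following short sketch reproduces the reported activation energies from the thermoluminescence peak temperatures.

```python
# Trap depths from thermoluminescence peak positions, using the
# rule-of-thumb E = 0.002 * T_m (T_m in kelvin) quoted in the text.
peak_temps_C = [70, 141, 238]      # thermoluminescence maxima (degrees C)

for t_C in peak_temps_C:
    T_m = t_C + 273.15             # convert to kelvin
    E = 0.002 * T_m                # trap depth in eV
    print(f"T_m = {t_C:5.1f} C  ->  E = {E:.2f} eV")
# -> 0.69, 0.83 and 1.02 eV, matching the values reported above
```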
On the basis of all these results, we posit that the defects associated with fluorine vacancies could act as electron-trapping centers with diverse depths with respect to the CBM, making fluoride elpasolite an excellent X-ray-excitable UVC long-lasting phosphor. The use of X-ray as the excitation source signifies that the dominant mechanism for the persistent luminescence observed is the direct recombination of released electrons with Pr3+ ions. Nevertheless, the fact that the visible persistent luminescence decays more slowly than the UVC, both of which are due to electronic transitions of Pr3+, indicates that other processes should also be involved in electron detrapping. In view of the existence of the YAG phase in the product, which may influence the afterglow, we further synthesized the sample using a platinum crucible; the XRD result suggests the absence of any impurity in the product (Supplementary Fig. 17a). Interestingly, we note that this phase-pure sample shows similar visible photoluminescence, but different photoluminescence excitation spectra with respect to the YAG-containing phosphor (Supplementary Fig. 17b, c and Supplementary Fig. 4b). It is well known that the absorption wavelength of the 3H4 → 4f5d transition of Pr3+ in YAG is longer than that in the elpasolite 33,35, suggesting that the UVC emissions observed in the YAG-containing phosphor originate from the elpasolite phase and that the photoluminescence excitation band observed in Supplementary Fig. 4 mainly originates from the YAG phase. Additionally, we find that, at 5 min after stopping the X-ray irradiation, the relative intensity of the UVC to the visible emissions for the YAG-containing sample is smaller than that for the phase-pure product (Supplementary Fig. 17d), which signifies that the YAG phase, along with the elpasolite phase, contributes to the visible afterglow. We underscore that, similar to the YAG-containing sample, the UVC afterglow in the phase-pure sample decays faster than its visible counterpart (Supplementary Fig. 17e). The decay curve corresponding to the visible emission was further plotted as reciprocal persistent luminescence intensity (I−1) versus time (t) (Supplementary Fig. 17f). The I−1 versus t curve over the 50-120 min period for the red persistent luminescence can be fitted linearly, suggesting that a tunneling-related process occurs 11. Based on all these observations, we propose a plausible mechanism for the UVC and visible persistent luminescence, as schematized in Fig. 4b. Upon X-ray irradiation, the absorption of an X-ray photon yields an energetic, ionized free electron. This hot electron collides with atoms in the material, and it triggers the cascading production of additional ionized electrons 36. Lower-energy collisions may cause the excitation of valence band electrons into the conduction band, leading to the creation of many electron-hole pairs (process 1 in Fig. 4b). The excited electrons and created holes are subsequently captured by electron traps and Pr3+ ions, respectively, through processes 2 and 3. After long-term X-ray irradiation, the traps are filled. Once X-ray irradiation ceases, the electrons are at first primarily released from shallow traps, followed by transfer to the Pr3+ ions through the conduction band (process 4). A Pr3+ ion with an electron and a hole can be viewed as an excited Pr3+, which releases the energy through either the 4f5d-4f2 or the 4f2-4f2 transition (process 5), causing the UVC and visible afterglow, respectively.
After the depletion of electrons captured in shallow traps, those in the deep traps can migrate directly to nearby Pr3+ ions by tunneling, where they are captured by the 4f2 energy levels of Pr3+, resulting in the visible emissions in the absence of the UVC emission (process 6). We point out that overexposing the phosphor to X-rays leads to weaker UVC emission (Supplementary Fig. 7). This may be caused by some X-ray-induced defects 37, which remains an open question for further study. We stress that after releasing most of the stored electrons, the phosphor can be recharged, showing a nearly identical afterglow behavior (Supplementary Fig. 18).
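The tunneling signature invoked above (a linear relation between reciprocal intensity and time) can be checked on any measured decay trace with a simple linear fit. The sketch below uses hypothetical intensity values over the 50-120 min window; it illustrates the test, not the authors' data.

```python
import numpy as np

# Hypothetical decay trace: time points (min) and afterglow intensity I(t),
# restricted to the 50-120 min window discussed in the text.
t = np.array([50, 60, 70, 80, 90, 100, 110, 120], dtype=float)
I = np.array([8.1, 6.9, 6.0, 5.2, 4.7, 4.2, 3.9, 3.6])  # arbitrary units

# Tunneling-dominated recombination predicts 1/I linear in t.
slope, intercept = np.polyfit(t, 1.0 / I, deg=1)
residuals = 1.0 / I - (slope * t + intercept)
r2 = 1 - residuals.var() / (1.0 / I).var()
print(f"1/I = {slope:.4g} * t + {intercept:.4g}   (R^2 = {r2:.3f})")
# A high R^2 over this window would support the tunneling interpretation.
```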
Discussion
Here, we present the discovery of UVC persistent luminescence in a defective, Pr3+-doped fluoride elpasolite, and we demonstrate that the development of UVC persistent phosphors is not an insurmountable goal. Specifically, we have found that incorporating oxygen into the lattice results in the formation of a large number of anion vacancies that can serve as electron traps. The extension of the spectral range of persistent luminescence from the visible and NIR to the UVC opens up a diversity of potential applications. As is well known, Pseudomonas (P.) aeruginosa PAO1 is a common Gram-negative and monoflagellated bacterium that can survive in a diversity of environmental conditions 38. The organism can cause disease not only in animals and plants but also in humans. As a proof of concept, we show that the UVC persistent phosphor developed here can be used for killing P. aeruginosa PAO1. As shown in Fig. 5a, 100% viability can be maintained when keeping P. aeruginosa PAO1 under ambient conditions (i.e., room light, normal atmosphere) for 30 min. We used a total of four sheets of UVC persistent phosphors that were irradiated by X-rays for 2, 5, 10, and 16 min, respectively. The intensity of the persistent luminescence increases with increasing irradiation time. We note that before irradiation, each sheet was fixed on a U-shaped bracket. At 2 s after the irradiation ceased, the long-persistent luminescent sheet was placed close to the culture dish. Interestingly, we observe that the survival of P. aeruginosa PAO1 is associated with the X-ray irradiation time of the sheet, and only around 39.6% viability is maintained for P. aeruginosa PAO1 in the dish exposed to the sheet irradiated for 16 min (Fig. 5b-f). This observation provides direct evidence that the persistent luminescence from our UVC phosphor can be used for sterilization. Thanks to the outstanding penetrating ability of X-rays, the charging of our phosphors is barely influenced by biological tissues (Supplementary Fig. 19), thus highlighting their enormous potential for in vivo applications.
(Fig. 4 caption) Thermoluminescence spectrum and schematic illustration of the proposed afterglow mechanism. a The thermoluminescence spectrum of phosphors with a nominal composition of Cs2NaY0.99F6:0.01Pr3+. The sample was irradiated for 1000 s at room temperature and then left for 48 h before the thermoluminescence measurement. b Proposed afterglow mechanism. The purple, blue, and red lines represent the optical transitions corresponding to the UVC, blue, and red emissions, respectively. Note that the emission corresponding to the transition of 4f5d → 3H5 is partially in the UVC spectral range.
Conclusions
In summary, we have developed a new class of phosphor that exhibits a strong and long-lasting UVC afterglow. A combination of experimental and theoretical results leads us to propose that the structural defects in the elpasolite that are associated with oxygen introduction-induced anion vacancies serve as electron traps, which render the material capable of capturing and storing a large number of electrons as triggered by X-ray irradiation. Interestingly, the afterglow intensity of this phosphor is sufficiently strong for sterilization. We believe that the concept shown here may be applicable to other Pr3+-doped wide-bandgap compounds, suggesting a series of UVC persistent phosphors with excellent performance. Our finding of this UVC afterglow opens up a new frontier in persistent phosphors, and offers an opportunity for novel applications, such as sterilization, disinfection, drug release, the in vivo killing of cancer cells, anti-counterfeiting, and beyond.
Materials and methods
Synthesis of UVC persistent phosphors
Pr-doped polycrystalline fluoride elpasolite phosphors, with nominal compositions of Cs2NaY(1−x)F6:xPr3+, were prepared by a solid-state reaction method. Cs2CO3 (1.6290 g, 99.99%, Aladdin, Shanghai, China), NaHCO3 (0.4200 g, 99.99%, Aladdin, Shanghai, China), Y2O3 (0.5588 g, 99.99%, Aladdin, Shanghai, China), NH4F (2.2222 g, 99.99%, Aladdin, Shanghai, China), and Pr6O11 (0.0085 g, 99.996%, Alfa, United States) powders were mixed together with 3 mL of acetone and then ground thoroughly. The obtained powders were thermally treated at 150 °C in air for 7 h, followed by regrinding to obtain a fine powder. The mixture was first sintered at 450 °C for 30 min in an air atmosphere. The obtained powders were then reground, followed by sintering at 700 °C for 10 h under a nitrogen atmosphere. The white powders were collected and stored for further characterizations. Corundum boats with a purity of 99% and a platinum crucible were used as vessels for the above synthesis.
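As a sanity check, the precursor masses quoted above are consistent with a 5 mmol batch of Cs2NaY0.99Pr0.01F6 with a roughly twofold excess of NH4F. A short sketch using standard atomic weights (the rounded molar masses are our own values, not taken from the paper):

```python
# Check that the precursor masses correspond to a 5 mmol batch of
# Cs2NaY0.99Pr0.01F6 (molar masses in g/mol from standard atomic weights).
masses = {"Cs2CO3": 1.6290, "NaHCO3": 0.4200, "Y2O3": 0.5588,
          "NH4F": 2.2222, "Pr6O11": 0.0085}
molar_mass = {"Cs2CO3": 325.82, "NaHCO3": 84.01, "Y2O3": 225.81,
              "NH4F": 37.04, "Pr6O11": 1021.44}

mmol = {k: 1000 * masses[k] / molar_mass[k] for k in masses}
print({k: round(v, 3) for k, v in mmol.items()})
# Cs2CO3 ~5.00 mmol -> 10 mmol Cs;  NaHCO3 ~5.00 mmol -> 5 mmol Na
# Y2O3  ~2.475 mmol -> 4.95 mmol Y  (= 0.99 * 5 mmol)
# Pr6O11 ~0.0083 mmol -> ~0.05 mmol Pr  (= 0.01 * 5 mmol)
# NH4F  ~60 mmol, i.e. a ~2x excess over the 30 mmol of F required
```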
Charging of persistent phosphors
The X-ray irradiation of the product was performed using a calibrated RS-2000 biological irradiator equipped with a tungsten target (160 kV, 25 mA), and the X-ray dose was tuned by changing the irradiation time. The wavelength of the X-rays from the irradiator is 0.2106 Å.
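For orientation, the quoted wavelength corresponds to a photon energy near the tungsten Kα lines; a quick conversion using E = hc/λ (hc ≈ 12.398 keV·Å):

```python
# Convert the quoted X-ray wavelength to photon energy (E = h*c / lambda).
HC_KEV_ANGSTROM = 12.398          # h*c in keV * Angstrom

wavelength_A = 0.2106
energy_keV = HC_KEV_ANGSTROM / wavelength_A
print(f"{energy_keV:.1f} keV")    # ~58.9 keV, close to the W K-alpha doublet
```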
Structural and morphological characterizations
XRD patterns were taken using an X'Pert-Pro MPD diffractometer (Netherlands PANalytical) with a Cu Kα X-ray source (λ = 1.540598 Å). TEM images and TEM-EDS mapping were taken with an FEI Tecnai G2 F20 S-TWIN TMP microscope (200 kV). SEM images and SEM-EDS mapping were taken with a Zeiss scanning electron microscope (Zeiss Supra55). We note that three different regions with a size of ~5 μm × 5 μm were used for the EDS measurement; the O/F ratio was obtained by averaging these data and determined to be (12.3 ± 2.0)%. XPS was performed on a Rigaku XPS-7000 spectrometer. The carbon peak at 284.6 eV was used as a reference to correct the charging effect.
Luminescence, afterglow, and thermoluminescence characterizations
Luminescence spectra were recorded by a spectrofluorometer (FLS980, Edinburgh Instruments Ltd.) equipped with a photomultiplier (R928P with an applied voltage of 950 V, Hamamatsu). The persistent luminescence spectra were taken at different time intervals after ceasing the X-ray irradiation. All data regarding the afterglow intensity versus time were recorded from 5 min after stopping the X-ray irradiation. Note that the slit widths of the detection monochromator used for the luminescence and afterglow measurements are 3 nm and 10 nm, respectively, which results in relatively broad afterglow emission bands with respect to photoluminescence bands (Figs. 1a and S4a). Thermoluminescence measurements were performed with a thermoluminescent dosimeter (FJ-427A1), with a heating rate of 1°C/s from room temperature to 300°C. The sample was irradiated for 1000 s at room temperature, and then it was left for 48 h before the thermoluminescence measurement.
Real-time afterglow measurements by a UVC imager
A homemade visible-blind UVC imager was used for this measurement. We note that the sensitive range of this imager is 240-280 nm, which was achieved by adding a bandpass filter. The UVC signals from the samples were recorded by the UVC imager as photon counts. To avoid the saturation of photon counts, a relatively low applied voltage was used for recording the initial UVC images. The afterglow decay curves shown in Figs. S10b, c and S11 were drawn based on these measurements. The photostimulated luminescence was monitored by this UVC imager under the excitation of laser diodes with emissions peaking at 450, 730, and 793 nm. To monitor the UVC signal under heating, the sample was heated by a hot plate set to 200 °C and the UVC signal was measured by the UVC imager; note that the first image was taken at 5 s after putting the sample on the hot plate. The distance between the imager and the sample was 70 cm. Both photostimulated and thermostimulated UVC images were taken at 24 h after ceasing X-ray irradiation (irradiation time: 1000 s). We emphasize that the sensitivity of the UVC imager used here (Fig. 1c) is poorer than that of the photomultiplier used for the afterglow measurements.
Estimation of the power density of the UVC afterglow
We roughly evaluated the power density of the UVC afterglow using a Thorlabs PM200 power meter equipped with a power sensor (S120VC, Thorlabs). The detailed measurement method is shown in Supplementary Fig. 20. After considering all possible factors that impact the measurement, the initial afterglow power density at the sample position was roughly estimated to be ca. 14.9 mW/m2.
Synchrotron X-ray measurement
We performed the synchrotron XRD measurement using the BL02B2 beamline of SPring-8 to obtain high-quality diffraction patterns at 296 K. The sample was sealed into Hilgenberg glass capillaries with an inner diameter of 0.1 mm, and the capillary was continuously rotated during the measurement. The X-ray wavelength used was 0.413745 Å. Rietveld refinement was performed against the XRD data utilizing the GSAS program 34. The room-temperature Pr LIII-edge XANES was taken on the 1W1B beamline of the Beijing Synchrotron Radiation Facility with a stored electron energy of 2.5 GeV and average ring currents of 200 mA. A fixed-exit Si (111) double-crystal monochromator was used. Pr6O11 and Pr(NO3)3·6H2O powders were used as reference samples. Data were collected in fluorescence mode for the studied sample and in transmission mode for the reference samples. The XAFS data were analyzed using the IFEFFIT software package 39.
Bactericidal experiment
The P. aeruginosa PAO1 in the culture dishes with the beef extract peptone medium was grown in an incubator at 35 °C. Three days later, the PAO1 population in the culture dishes was ~10^6 colony-forming units (cfu)/mL. The culture dishes were removed from the incubator and used for the following inactivation experiment. To perform the sterilization experiment, the samples were tableted by a homemade tablet machine. The Cs2NaY0.99F6:0.01Pr3+ powders, with a mass of 7.4 g, were compressed into a sheet sample 3 cm in diameter and 4 mm in thickness. Each sheet was fixed on a U-shaped bracket, followed by X-ray irradiation for 2, 5, 10, and 16 min. At 2 s after the end of irradiation, the long-persistent luminescent sheet was held close to the culture dish. The distance between the sheet surface and the P. aeruginosa PAO1 was ~2 mm. After 30 min, the afterglow sheet was removed, and the PAO1 was diluted with deionized water and centrifuged at 3000 rpm for 20 min. The PAO1 was then dispersed in 20% NaCl solution and used for the following test. To evaluate cell membrane integrity, a BacLight live/dead bacterial viability kit (L-7012, Molecular Probes) was used, which allows us to differentiate cells with intact (live) membranes from those with damaged (dead) membranes. The stain was prepared by diluting 3 μL of each component into 1 mL of distilled water and was then kept in the dark for 15 min. We note that at least 2000 cells were scored per sample for the analysis. The P. aeruginosa PAO1 suspension was imaged using a confocal laser scanning microscope with a water immersion objective lens. The P. aeruginosa PAO1 suspension images corresponding to the afterglow sheets with X-ray irradiation for 2, 5, 10, and 16 min were compared to confirm the inactivation effect.
First-principles calculations
DFT calculations were performed using the Vienna Ab initio Simulation Package (VASP) 40. We used the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) functional for the description of the exchange and correlation energy of the electrons. Because the GGA usually underestimates the band gap of materials, an orbital-dependent potential was used, including an additional Coulomb interaction (Hubbard U) 41-44. The underestimation of the intraband Coulomb interactions was corrected by the Hubbard U parameter, and the values UCs(d) = 8 eV, UNa(p) = 2 eV, UY(d) = 4 eV, and UF(p) = 10 eV were used. The ionic potential was described by the projector-augmented wave (PAW) pseudopotential. For k-point integration within the first Brillouin zone, a 2 × 2 × 3 Monkhorst-Pack grid for a 2 × 2 × 1 supercell was selected. A plane-wave cutoff energy of 450 eV was applied to the calculations. The convergence criteria for the maximum force and the total energy were set to 0.01 eV Å−1 and 1.0 × 10^−4 eV/atom, respectively. Based on the static states mentioned above, the DOS of pristine and defective Cs2NaYF6 was calculated.
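As an illustration of a setup along these lines (not the authors' actual input files), a minimal sketch using ASE's VASP interface is given below. The structure file name 'Cs2NaYF6.cif' and the choice of which fluorine site to empty are assumptions made for the example; running it requires a working VASP installation.

```python
# Minimal sketch (not the authors' input) of a GGA+U fluorine-vacancy
# calculation as described above, using ASE's VASP interface.
from ase.io import read
from ase.calculators.vasp import Vasp

bulk = read("Cs2NaYF6.cif")                 # hypothetical structure file
supercell = bulk.repeat((2, 2, 1))          # 2 x 2 x 1 supercell, as in the text

# Create a single fluorine vacancy at an (arbitrarily chosen) F site.
f_indices = [a.index for a in supercell if a.symbol == "F"]
del supercell[f_indices[0]]

calc = Vasp(
    xc="pbe",                               # PBE GGA functional
    encut=450,                              # plane-wave cutoff (eV)
    kpts=(2, 2, 3),                         # Monkhorst-Pack grid from the text
    ediff=1e-4,                             # energy convergence (eV); the paper
                                            # quotes 1e-4 eV/atom
    ediffg=-0.01,                           # force convergence, 0.01 eV/Angstrom
    ldau_luj={                              # Hubbard U values quoted in the text
        "Cs": {"L": 2, "U": 8.0, "J": 0.0},
        "Na": {"L": 1, "U": 2.0, "J": 0.0},
        "Y":  {"L": 2, "U": 4.0, "J": 0.0},
        "F":  {"L": 1, "U": 10.0, "J": 0.0},
    },
)
supercell.calc = calc
# energy = supercell.get_potential_energy()  # runs VASP if it is installed
```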
"year": 2018,
"sha1": "c3420d07b79e5bf8e0f714806cb5c189e6089765",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41377-018-0089-7.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c3420d07b79e5bf8e0f714806cb5c189e6089765",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
Category-specific brain activation distinguishing between semantic word types has imposed challenges on theories of semantic representations and processes. However, existing metabolic imaging data are still ambiguous about whether these category-specific activations reflect processes involved in accessing the semantic representation of the stimuli, or secondary processes such as deliberate mental imagery. Further information about the response characteristics of category-specific activation is still required. Our study for the first time investigated the differential impact of word frequency on functional magnetic resonance imaging (fMRI) responses to action-related words and visually related words, respectively. First, we corroborated previous results showing that action-relatedness modulates neural responses in action-related areas, while word imageability modulates activation in object processing areas. Second, we provide novel results showing that activation negatively correlated with word frequency in the left fusiform gyrus was specific for visually related words, while in the left middle temporal gyrus word frequency effects emerged only for action-related words. Following the dominant view in the literature that effects of word frequency mainly reflect access to lexico-semantic information, we suggest that category-specific brain activation reflects distributed neuronal ensembles, which ground language and concepts in perception-action systems of the human brain. Our approach can be applied to any event-related data using single-stimulus presentation, and allows a detailed characterization of the functional role of category-specific activation patterns.
Introduction
Are mental representations of word meaning abstract, symbolic entities, or do perceptual and motoric representations play a critical role in 'grounding' word meaning in representations of the external objects and events that words denote? This question has a long history in psychology (Harnad, 1990; Paivio, 1991; Barsalou, 1999; Lakoff & Johnson, 1999; Pulvermüller, 1999; Glenberg & Kaschak, 2002; Pylyshyn, 2002; Rogers et al., 2004), and numerous neuroscientific studies have sought an answer by localizing the neuronal networks underlying semantic representations (see, e.g. Barsalou et al., 2003; Caramazza & Mahon, 2003; Pulvermüller, 2005; Martin, 2007). Several studies have shown that activation evoked in tasks that involve access to semantic knowledge can include motor and sensory processing areas (Martin et al., 1996; Kiefer & Spitzer, 2001; Hauk et al., 2004; Tettamanti et al., 2005). However, measurement of the slow haemodynamic response provides low temporal resolution, and therefore previous neuroimaging results have not been able to distinguish between activation evoked by elementary recognition processes (accessing the meaning of the word 'hammer', for example) and 'epiphenomenal' post-recognition processes, such as deliberate imagery (e.g. imagining using a hammer) or 'post-understanding translation' (Glenberg & Kaschak, 2002).
The main goal of this study was to gain new insights into the response characteristics of word category-specific brain activation, which could help distinguish between (1) theories in which category-specific activation is critical for recognition and (2) accounts in which such specific activation arises from post-recognition processes such as imagery. We assessed the effect of a lexical variable, the frequency of occurrence of written words, on neural responses to words in different categories, namely visually and action-related words (e.g. 'sun' and 'kick', respectively). Effects of word frequency on lexical decision times have been reported for several decades (e.g. Whaley, 1978; Gernsbacher, 1984). Although there has been some debate about the exact locus of the word frequency effect (e.g. Balota & Chumbley, 1984; McCann et al., 1988; Paap & Johansen, 1994), recent behavioural studies have provided strong evidence that early word recognition processes are sensitive to word frequency (Allen et al., 2005; Cleland et al., 2006). This is further confirmed by a number of electrophysiological studies that reported word frequency effects within 200 ms after stimulus onset (Sereno et al., 1998; Assadollahi & Pulvermüller, 2003; Hauk & Pulvermüller, 2004a; Dambacher et al., 2006; Hauk et al., 2006). Importantly for our study, word frequency is not usually considered a crucial factor in mental imagery or simulation (Paivio, 1986; Pylyshyn, 2002), consistent with the intuition that synonyms that differ in frequency (e.g. 'baby', 'infant') should not differ with respect to the effort it takes to evoke a mental image or to recall previous episodic experiences once word identification is complete.
In accordance with the negative correlation generally observed between word frequency and behavioural response times, electrophysiological (Sereno et al., 1998; Assadollahi & Pulvermüller, 2003; Hauk & Pulvermüller, 2004a; Dambacher et al., 2006; Hauk et al., 2006) as well as metabolic (Fiebach et al., 2002; Chee et al., 2003; Kuo et al., 2003; Kronbichler et al., 2004; Carreiras et al., 2006) neuroimaging research has consistently shown that rare words elicit stronger brain responses than words with high word frequency. On the basis of semantic accounts of category specificity, we therefore hypothesized that word frequency effects should correlate negatively with brain activation in object-related areas (e.g. in the fusiform gyrus) for visually related words, and in action-related areas (such as the middle temporal and precentral gyrus) for action-related words. Our approach allows more precise conclusions about the functional role of category-specific activation patterns than have been available from metabolic imaging techniques so far.
Materials and methods
Stimuli and experimental design
Twenty-one right-handed native speakers of English participated in the study (10 female and 11 male; mean age ± SD = 24.5 ± 5.3 years). They had no history of neurological or psychiatric illness or drug abuse. Ethical approval was obtained from the Cambridge Local Research Ethics Committee. Participants silently read the stimuli; this passive reading task has been used successfully in several previous neuroimaging studies on visual word recognition (Mechelli et al., 2003; Hauk et al., 2004; Kronbichler et al., 2004). Stimuli were flashed briefly on the screen for 100 ms in order to minimize eye movements and variance in stimulus processing times. The stimulus onset asynchrony (SOA) was 2.5 s. Two hundred and fifty monosyllabic and mono-morphemic English word stimuli were employed in the study. One hundred and fifty referred to bodily actions (e.g. 'grasp', 'limp', 'bite'), and 100 to objects and visual attributes (e.g. 'snow', 'blond', 'cube'). These categories were matched on average familiarity, number of letters and number of phonemes. One hundred and fifty baseline trials consisting of strings of hash marks varying in length were interspersed among the word stimuli. The average length of words and hash marks was matched. In addition, 50 null events were included in which a fixation cross remained on the screen. Two pseudo-randomized stimulus sequences were alternated between subjects.
For each of the 250 word stimuli, we obtained 21 psycholinguistic parameters, either from the CELEX database (such as word form, lemma, bi- and trigram frequencies; Baayen et al., 1993) or from a separate rating study (such as action-relatedness, imageability and familiarity; Hauk & Pulvermüller, 2004b). Some of these variables are highly correlated with each other (such as word form frequency, lemma frequency and familiarity), and their effects can therefore be impossible to estimate independently of each other. Furthermore, the questions underlying our study do not require the estimation of effects for each of these variables individually.
In order to reduce the information available for our stimulus set to a tractable number of variables for the multiple regression analysis, we combined some of the parameters into groups based on their functional relatedness and their intercorrelation pattern (see below). For each of these groups, every variable was z-normalized across stimuli, and a principal component analysis was computed. The first principal component of each group entered the multiple regression analysis. The final variables were: (1) Length and Neighbourhood Size, created from the parameters number of letters and neighbourhood size, which were strongly negatively correlated; (2) Typicality, created from orthographic bigram and trigram frequency (strongly positively correlated); (3) Frequency, created from word form frequency, lemma frequency and familiarity (strongly positively correlated); (4) Action-relatedness, created from body- and action-relatedness (strongly positively correlated); (5) Imageability, created from imageability, concreteness and visual-relatedness (strongly positively correlated). In the following, we will use capital initial letters if we explicitly refer to those variables that entered our analysis (e.g. 'Frequency'), but small initial letters if we refer to the variable in general (such as word 'frequency'). Note also the different use of the terms word 'variables' and 'categories': the former refers to the constituents of the regression analysis (e.g. Frequency, Action-relatedness), the latter to the different word groups in the factorial analysis (action-related and visually related words).
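The variable-reduction step described above (z-normalization followed by taking the first principal component of each group) can be sketched with scikit-learn; the data matrix here is randomly generated for illustration only.

```python
# Sketch of the variable-reduction step described above: z-normalize the
# parameters within a group and keep the first principal component.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical (n_words x n_parameters) matrix, e.g. word-form frequency,
# lemma frequency and familiarity for the 'Frequency' group.
rng = np.random.default_rng(0)
group = rng.normal(size=(250, 3))           # 250 stimulus words, 3 parameters

z = StandardScaler().fit_transform(group)   # z-normalize across stimuli
frequency = PCA(n_components=1).fit_transform(z).ravel()  # one value per word
print(frequency.shape)                      # (250,)
```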
Data acquisition and analysis
Twenty-one monolingual, right-handed, healthy native English speakers participated in the study. Their mean age was 24.5 years (SD 5.3), and their handedness score (from a reduced version of the Oldfield handedness inventory; Oldfield, 1971) was 87 (SD 15). Scanning took place in a 3T Bruker MR system using a head coil. Echo planar images (EPI) were acquired using a TR = 3.02 s, TE = 27 ms and a flip angle of 90 degrees. Reconstructed images consisted of 21 slices covering the whole brain, with slice thickness 4 mm, interslice distance 1 mm, field-of-view 25 cm and in-plane resolution 128 × 128. Seven subjects had participated in a similar electroencephalogram (EEG) experiment before the functional magnetic resonance imaging (fMRI) session (average delay 18 days, SD 11 days). The remaining 14 data sets were also part of the study of Hauk et al. (2004). Images were corrected for slice timing, and then realigned to the first image using sinc interpolation. Phase maps were used to correct for inaccuracies resulting from inhomogeneities in the magnetic field (Jezzard & Balaban, 1995; Cusack et al., 2003). Any non-brain parts were removed from the T1-weighted structural images using a surface model approach ('skull-stripping'; Smith, 2002). The EPI images were coregistered to these skull-stripped structural T1 images using a mutual information coregistration procedure (Maes et al., 1997). The structural MRI was normalized to the 152-subject T1 template of the Montreal Neurological Institute (MNI). The resulting transformation parameters were applied to the coregistered EPI images. Images were resampled with a spatial resolution of 2 × 2 × 2 mm3, and spatially smoothed with a 12-mm full-width half-maximum Gaussian kernel. This was done to capture variability across subjects while still being able to separate activation in brain areas that are typically several centimetres apart. After global normalization of data from separate sessions, single-subject statistical contrasts were computed using a parametric general linear model (Buchel et al., 1998; Friston et al., 1998). Low-frequency noise was removed with a high-pass filter (time constant 60 s). Imaging data were processed using SPM99 software (Wellcome Department of Cognitive Neurology, London, UK).
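The 12-mm FWHM kernel corresponds to a Gaussian with sigma = FWHM / (2*sqrt(2*ln 2)); below is a sketch of the conversion and of the smoothing step on a dummy volume (an illustration, not the SPM99 implementation).

```python
# Gaussian smoothing with a kernel specified by its FWHM, as in the
# preprocessing described above (12 mm FWHM, 2 mm isotropic voxels).
import numpy as np
from scipy.ndimage import gaussian_filter

fwhm_mm, voxel_mm = 12.0, 2.0
sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
print(f"sigma = {sigma_vox:.2f} voxels")    # ~2.55 voxels

volume = np.random.rand(91, 109, 91)        # dummy EPI volume in MNI space
smoothed = gaussian_filter(volume, sigma=sigma_vox)
```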
Multiple regression analysis
One important methodological concern in many previous neuroimaging studies on word recognition is that the statistical analysis methods employed assess the consistency of activation differences between two classes of words over a population of subjects, but do not assess the consistency (or otherwise) of activation differences over a population of linguistic items. It has been argued previously that multiple linear regression designs in combination with random effects group statistics optimally account for appropriate sources of between-subject variance (Lorch & Myers, 1990). We therefore applied two linear regression designs to our data, each of which was optimized for a specific purpose.
In a first analysis, we sought to corroborate previous findings on action-relatedness and imageability. The five word variables described above were used as simultaneous linear regressors across all words, i.e. effects of variables that were not of interest for this study (length and orthographic typicality) were partialled out. Note that in order to detect effects of Action-relatedness and Imageability, the stimuli must exhibit sufficient variability with respect to these variables. This analysis therefore applied regression to all words, including visually related and action-related words, as each of these categories by itself was chosen to be relatively homogeneous with respect to one of the two variables. This yielded the results presented in Fig. 1 and Table 1.
The crucial prediction of our study was tested in a separate analysis that looked at the two critical word categories separately. Words were grouped into action- and visually related words, which were specified as different columns of the design matrix. Each of these two word groups was assigned the same five simultaneous regressors as in the first analysis, including the crucial variable, Frequency. This model yielded the results presented in Fig. 2 and Table 2.
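In essence, both models amount to fitting, at every voxel, a design matrix whose columns are condition indicators and item-level covariates. A minimal mass-univariate sketch with synthetic data (regressor names and array sizes are illustrative):

```python
# Minimal mass-univariate sketch of the multiple regression analyses:
# fit one design matrix to every voxel's responses by least squares.
import numpy as np

n_trials, n_voxels = 250, 5000              # illustrative sizes
rng = np.random.default_rng(1)

# Columns: intercept + five z-scored item-level regressors (Length/N,
# Typicality, Frequency, Action-relatedness, Imageability).
covariates = rng.normal(size=(n_trials, 5))
X = np.column_stack([np.ones(n_trials), covariates])

Y = rng.normal(size=(n_trials, n_voxels))   # trial-wise voxel responses
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)  # (6 x n_voxels) estimates
print(beta.shape)
# In the second analysis, action- and visually related words would get
# separate condition columns, each paired with its own five regressors.
```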
Group data were analysed with a random-effects analysis. We will focus our main interpretations on activation peaks that both reached an uncorrected significance level of P < 0.001 (used for display) and were significant at P < 0.05 corrected after small volume correction (SVC) for volumes of interest that were defined based on previous findings (see below). Stereotaxic coordinates for voxels with maximal z-values within activation clusters are reported in the coordinate system of the standard brain of the MNI. Anatomical labels of nearest cortical grey matter for peak coordinates were obtained from the MRIcron software (http://www.sph.sc.edu/comd/rorden/mricro.html), based on the anatomical parcellation of the MNI brain published by Tzourio-Mazoyer et al. (2002). To address questions about the specificity of activations across several activation clusters, we computed parameter estimates for peak voxels for each individual subject using standard procedures implemented in the MarsBar software (Brett et al., 2002), and subjected them to ANOVAs including a factor representing the spatial location of the voxels. Error bars in the corresponding graphs indicate within-subject standard errors for the comparison of two conditions at each location, i.e. between-subject variability has been removed before computing the standard error. This is appropriate for displaying confidence intervals in repeated-measures designs (Loftus & Masson, 1994).
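One common way to compute such within-subject standard errors (in the spirit of Loftus & Masson, 1994) is to remove each subject's mean across conditions before taking the SEM; sketched below with synthetic data.

```python
# Within-subject standard errors: remove each subject's mean across
# conditions before computing the SEM (in the spirit of Loftus & Masson, 1994).
import numpy as np

data = np.random.default_rng(2).normal(size=(21, 2))   # 21 subjects x 2 conditions

subject_means = data.mean(axis=1, keepdims=True)
normalized = data - subject_means + data.mean()        # remove between-subject variance
sem = normalized.std(axis=0, ddof=1) / np.sqrt(data.shape[0])
print(sem)   # one within-subject SEM per condition
```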
SVC
We formulated several hypotheses about activation patterns associated with our word variables. These were tested using SVC for spheres with radius 20 mm centred at coordinates taken from the literature, as will be described below. Where necessary, mean coordinates were computed in the coordinate system reported in the original study, and transformed to MNI coordinates afterwards. The hypotheses for these volumes were independent of each other, and were therefore tested separately.
We predicted that Imageability would activate object processing areas in the fusiform gyrus. We therefore compared our activation patterns to those of a study that reported activation for different kinds of objects, namely animals and tools, in the bilateral fusiform gyrus (Chao et al., 1999). The mean coordinates for object-related activation in ventral and lateral fusiform gyrus in this study were 33/−54/−16 (RH) and −33/−54/−16 (LH).
Action-relatedness should activate motor areas in the middle temporal gyrus and frontal cortex. The middle temporal cortex has been found to be activated by action-related objects and words in several previous studies (Martin et al., 1996; Chao et al., 1999; Grezes et al., 2003; Davis et al., 2004). For consistency reasons, we chose coordinates from the same study as above that were activated more strongly for tools than for animals, i.e. −47/−55/3 (LH) and 45/−54/3 (RH) (Chao et al., 1999). As a frontal motor region, we chose the coordinates for activations evoked by finger movements reported by Hauk et al. (2004), which were −36/−8/60 (LH) and 38/−20/48 (RH). Given that the present study employed a subset of subjects who also performed the motor localizer task published by Hauk et al. (2004), the corresponding coordinates are more likely to reflect brain areas involved in hand-action processing for our subjects than values from the literature.
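Operationally, each SVC volume is just the set of voxels within 20 mm of a seed coordinate. Given an image-to-MNI affine (the affine below is illustrative, not taken from the study), the mask can be built directly:

```python
# Build a spherical small-volume-correction mask: all voxels within
# 20 mm of a seed coordinate, given the image's voxel-to-MNI affine.
import numpy as np

shape = (91, 109, 91)
affine = np.diag([2.0, 2.0, 2.0, 1.0])      # illustrative 2 mm affine
affine[:3, 3] = [-90, -126, -72]            # illustrative MNI origin offset

seed_mm = np.array([33.0, -54.0, -16.0])    # fusiform coordinate from the text
radius_mm = 20.0

i, j, k = np.indices(shape)
vox = np.stack([i, j, k, np.ones(shape)], axis=-1)
mm = vox @ affine.T                          # voxel indices -> mm coordinates
dist = np.linalg.norm(mm[..., :3] - seed_mm, axis=-1)
mask = dist <= radius_mm
print(mask.sum(), "voxels inside the sphere")
```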
Double dissociations between locations and word categories
The mass-univariate statistical approach applied to visually related and action-related words separately in the above SPM analysis cannot fully determine whether the apparent differences between word categories are due to the selected statistical threshold, or reflect a true double dissociation of the factors Location (fusiform vs middle temporal) and semantic variable (Imageability vs Action-relatedness). A further question is whether the Frequency modulations obtained by the second analysis indeed overlap with the activations from the first analysis.
Comparisons of effect sizes between different brain regions are complicated by the fact that the blood-oxygen-level-dependent (BOLD) response may vary due to differences in vasculature, neuron-vasculature coupling, and other anatomical or physiological factors unrelated to the experimental manipulations. However, such factors would not be able to explain a cross-over interaction or double dissociation for the factors location and word category, which are the crucial predictions of our study (see Henson, 2006 for a discussion of possible inferences that can be drawn from functional imaging data). We therefore performed the following tests: (1) Double dissociation for semantic variables. We tested for a double dissociation between different brain loci and semantic variables (Imageability, Action-relatedness). For this purpose, we performed a 2-by-2 ANOVA with the factors Location (mean value for two peak voxels in the left middle/superior temporal gyrus, as well as the fusiform gyrus, from the first analysis) and Semantic Variable (Action-relatedness and Imageability).
(2) Double dissociation of Frequency modulation. We tested for a double dissociation between Frequency modulation at different brain loci and word categories (visually and action-related words). For this purpose, we performed a 2-by-2 ANOVA with the factors Location (peak voxel in the left middle temporal gyrus, as well as the fusiform gyrus) and Word Category (Frequency modulation for visually related vs action-related words).
(3) Overlap of effects for semantic variables and Frequency. In order to test for an activation overlap, we extracted the parameter estimates for the variables Action-relatedness and Imageability in the voxels exhibiting the strongest modulation by Frequency in the second analysis. These values were entered into a 2-by-2 ANOVA with the factors Location (fusiform vs middle temporal gyrus) and Semantic Variable.
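For a fully within-subject 2-by-2 design, the interaction F-test in these ANOVAs is equivalent to a paired t-test on the difference of differences (F(1, n−1) = t^2); a sketch with synthetic parameter estimates:

```python
# For a fully within-subject 2x2 ANOVA, the interaction test reduces to a
# paired t-test on the difference of differences (F(1, n-1) = t**2).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n = 21                                       # subjects
# Parameter estimates: Location x Variable, e.g. MTG/FFG x Action/Imageability
mtg_act, mtg_img = rng.normal(1.0, 1, n), rng.normal(0.2, 1, n)
ffg_act, ffg_img = rng.normal(0.1, 1, n), rng.normal(0.9, 1, n)

t, p = ttest_rel(mtg_act - mtg_img, ffg_act - ffg_img)
print(f"interaction: F(1,{n-1}) = {t**2:.2f}, p = {p:.4f}")
```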
Results
Analysis of all words
We contrasted activation for all words to the baseline condition. The most dominant activation peaks occurred in the left hemisphere, i.e. in the left fusiform, precentral, middle temporal and inferior frontal gyrus (Table 1). Table 2 includes the coordinates of the most dominant peak voxels that showed negative correlation with Frequency across all words (i.e. action-related and visually related words combined). The main areas modulated by Frequency were located in the left and right inferior frontal cortex and left fusiform cortex. Splitting the whole stimulus set into subcategories (action- and visually related words in our case) is meant to increase sensitivity for detecting effects that are specific to these categories, but also means reducing statistical power for the analysis of effects that are characteristic of the whole stimulus set (all words). Therefore, this study focused primarily on category-specific effects of word frequency. The categories targeted here are action- and visually related words; a more detailed discussion of the results for the all-words analysis will be presented elsewhere.
Effects of semantic variables
The most significant correlation between Imageability and brain activation occurred in the left fusiform gyrus, and in an almost symmetrical right hemispheric area (Fig. 1 and Table 1). Both of these peaks fell in the vicinity (10 mm) of the mean coordinates computed from Chao et al. (1999). A t-test on parameter estimates for these peak voxels in the left and right hemisphere did not reveal a significant difference (t20 = −1.5, P > 0.1). Action-relatedness correlated positively with brain activation in the left middle temporal gyrus and left superior temporal gyrus, near the corresponding action-related areas reported by Chao et al. (1999). Interestingly, the peak in the left middle temporal gyrus was approximately 10 mm away from the location at which Davis et al. (2004) reported a positive correlation with action-relatedness. SVC analysis around a symmetrical location in the right temporal cortex revealed a marginally significant activation near the right superior temporal gyrus. A t-test comparing parameter estimates between these peak voxels in the left and right hemisphere for Action-relatedness did not reveal a significant difference (t20 = 0.77, P > 0.4).
Further activation for Action-relatedness was found in a premotor area of the left middle frontal gyrus, i.e. an area corresponding to the dominant hand. It was located approximately 12 mm from the left dorsolateral activation spot related to finger movements reported in Hauk et al. (2004). However, although this activation was significant at an uncorrected threshold of P < 0.001, it was only marginally significant in the SVC analysis.
Correlations with Frequency for semantic word categories
In a second analysis step, a multiple regression with separate columns encoding action-related and visually related words was conducted. The peak voxels that showed significant negative correlations with Frequency are listed in Table 2, and the activation peaks rendered on a standard brain surface are shown in Fig. 2. For action words, the most dominant voxel occurred in the left middle temporal gyrus, 10 mm from the peak voxel in the left middle temporal gyrus that correlated significantly with Action-relatedness. Further voxels showing a significant negative correlation with Frequency for action words were found in the left inferior parietal lobe, left inferior and right medial frontal gyrus. The corresponding analysis for visually related words yielded two significant activation spots. The dominant one occurred in the left fusiform gyrus, 8 mm from the Imageability peak voxel in the left fusiform gyrus described above. Another area that showed significant negative correlation with Frequency for visually related words was located in the right middle frontal gyrus.
A double dissociation for semantic variables was substantiated by a 2-by-2 ANOVA with the factors Location and Semantic Variable, which revealed a significant interaction (F1,20 = 9.67, P < 0.01). Action-relatedness did not differ significantly from zero in the peak voxel for Imageability in the left fusiform gyrus, and analogously for Imageability in the left middle/superior temporal gyrus (both t-tests yielded P > 0.2). The interaction for right-hemispheric voxels (right superior temporal and right fusiform gyrus) yielded a qualitatively similar result (F1,20 = 11.95, P < 0.01). The corresponding parameter estimates are presented in Fig. 1. Accordingly, a double dissociation for Frequency modulation was documented by a 2-by-2 ANOVA on parameter estimates, including the factors Location (peaks with largest Frequency modulation) and Word Category (action- and visually related), which yielded a significant interaction (F1,20 = 8.15, P < 0.02). Overlap of effects for semantic variables and Frequency was revealed by a 2-by-2 ANOVA with the factors Semantic Variable (Action-relatedness and Imageability) and Location (peak voxels of modulation with Frequency, for action-related and visually related words, respectively), resulting in a significant interaction (F1,20 = 6.71, P < 0.02). Action-relatedness significantly differed from zero only in the left middle temporal gyrus, while the same was true for Imageability only in the left fusiform gyrus (all P < 0.05 or P > 0.1, respectively; Fig. 2). This overlap is further illustrated in Fig. 2A.
Discussion
We studied differences in brain activation for words with different semantic associations in an event-related fMRI study using a silent reading task. The main novel finding consists of a differential modulation of activation for visually and action-related words by the frequency of the words' occurrence, a variable commonly associated with word identification and lexico-semantic access. We also corroborated previous findings of category-specific brain activation in word recognition with respect to imageability and action-relatedness in the fusiform and lateral temporal lobe as well as premotor cortex, respectively. The negative correlation observed between Frequency and neural activity for written words was both specific to either visually or action-related words, and localized in brain areas that also showed differential effects for Imageability and Action-relatedness. For action words, the maximum negative correlation with Frequency fell within about 10 mm of the peak activations for Action-relatedness in the left middle temporal gyrus. Correspondingly, for visually related words and Imageability these activations were about 10 mm apart in the left fusiform gyrus. Furthermore, we found an overlap of effects for category-specific Frequency modulations on the one hand, and Action-relatedness and Imageability on the other.
(Table 1 caption) MNI coordinates and Z-scores for voxels that showed significant activation for all words compared with a low-level baseline (hash marks), and positive correlations with Imageability or Action-relatedness, respectively.
(Fig. 2 caption) … Table 2. (B) The left diagram presents parameter estimates for the variable Frequency for action-related words and visually related words separately, for peak voxels in the left middle temporal gyrus (LMT) and left fusiform gyrus (LFF). The right diagram shows parameter estimates for the same voxels but for the variables Action-relatedness (Act.-rel.) and Imageability (Imag.) across all words independent of word category. The error bars represent within-subject standard errors.
Effects of word frequency
A number of previous metabolic neuroimaging studies have investigated effects of word frequency on brain activation (see Fiebach et al., 2002; Chee et al., 2003; Kuo et al., 2003; Kronbichler et al., 2004; Carreiras et al., 2006 for recent examples). They reported effects of word frequency mainly for 'classical' language-related areas, such as the left inferior frontal or left inferior temporal cortex, but not for areas related to action- or object-related processing. However, these studies did not investigate the effect of word frequency on words from different semantic categories. Pooling data across items with very different semantic properties (thereby increasing the variation of category-specific semantic brain activation) might have obscured effects that are specific to certain semantic categories (Pulvermüller, 1999), such as the word frequency effects for action-related and visually related words reported in this study. The interpretation of metabolic imaging data is inherently limited by the inertia of the haemodynamic response. Importantly in the context of our study, it is generally difficult to attribute activation patterns to either early processing stages that are central to the recognition of written words or later post-access processes, which might reflect deliberate associations or mental imagery elicited in response to written words. Electrophysiological imaging studies, for example using electro- or magnetoencephalography (EEG/MEG), address this issue by exploiting their millisecond temporal resolution. It is commonly argued that the earlier an effect occurs in the signal, the more likely it is to reflect elementary or automatic processes (Pulvermüller et al., 1996; Sereno & Rayner, 2003; Hauk & Pulvermüller, 2004b; Shtyrov et al., 2004; Pulvermüller, 2005; Hauk et al., 2006; Barber & Kutas, 2007; Kiefer et al., 2007). Several studies have reported category-specific effects within 250 ms after stimulus onset (Dehaene, 1995; Pulvermüller et al., 1995; Hauk & Pulvermüller, 2004b). Unfortunately, their comparatively low spatial resolution does not allow the inference that these early modality-specific effects arise from the same neural systems that are activated in metabolic imaging techniques. Although some correspondence between metabolic and electric brain activity has been demonstrated (Logothetis, 2002; Shmuel et al., 2006), and methods to constrain EEG/MEG source estimation are currently under development (Dale et al., 2000; Liu et al., 2002), it is no simple matter to link metabolic activation spots to EEG/MEG components or source estimates. Thus, methods that verify the functional significance of activation patterns in each of these modalities separately are still required, such as the correlation with word frequency suggested in our paper.
We argued that differential modulation of brain activation by word frequency for action-related and visually related words supports the view that these brain activations reflect aspects of lexico-semantic processing, rather than mental imagery processes. A few previous studies approached the problem of disentangling conceptual/semantic processing from mental imagery in neuroimaging using semantic priming paradigms (Wheatley et al., 2005; Gold et al., 2006). Primes and targets were presented in rapid succession (250 ms), with the assumption that this would not leave enough time for the prime to evoke mental images before target presentation. Priming for semantically related word pairs was observed in several brain areas, which was interpreted as evidence that these activations reflect semantic processing, rather than mental imagery processes. Although the interpretation of these results supports our views, it is still conceivable that two consciously perceived words evoke mental images in parallel or in combination. If these interact or share part of their cognitive and neuronal processes, this could lead to decreased activation for semantically related word pairs. Thus, imagery processes might similarly explain reduced responses to paired stimuli. Furthermore, the aim of many studies is to draw conclusions about single word processing, and it would be desirable to show that category-specific activation evoked by single word presentation reflects semantic processing. In this paper, we introduced an approach that can be applied to event-related studies using single stimulus presentation, and does not impose particular constraints on the experimental design.
Our interpretation is based on the assumption that the word frequency effects observed in our study reflect central stages of word recognition related to retrieval of lexico-semantic information. In the behavioural literature, this assumption has been challenged by several authors claiming that word frequency affects only post-access decision or verification stages (e.g. Balota & Chumbley, 1984; McCann et al., 1988; Paap & Johansen, 1994; McCann et al., 2000). For example, word frequency effects were larger in a lexical decision task compared with category verification and pronunciation tasks (Balota & Chumbley, 1984), suggesting that the effect depends on the familiarity-based decision process, rather than word identification per se. The insensitivity of naming or lexical decision times for pseudohomophones (e.g. 'brane') to base-word frequency ('brain') has also been interpreted as evidence that this variable does not affect lexical access (McCann et al., 1988). The resilience of word frequency effects in a dual-task paradigm, where a distractor task is assumed to interfere with early stages of word recognition, was presented as further evidence for a late locus of word frequency effects (McCann et al., 2000). These studies certainly demonstrated that effects of word frequency can be modulated by task demands. However, we did not use a lexical decision task in our study, nor any other task that required our subjects to make a decision. The argument that post-lexical processes specific to tasks requiring a decision (lexical, phonological or semantic) are the locus of word frequency effects therefore does not apply in this case. Furthermore, recent behavioural studies have confirmed that effects of word frequency persist even when the task was chosen in order to minimize them, e.g. using very short exposure durations (Allen et al., 2005). Dual-task methodology has demonstrated that word frequency exerts effects at early stages of word recognition (Cleland et al., 2006), supporting the neurophysiological evidence cited above. Word frequency effects are therefore considered an indicator of the ease of word identification or lexical access in models of word recognition (Grainger & Jacobs, 1996). It can still be argued that in our silent reading task word frequency effects arose at the phonological or 'lexeme' level, as has been suggested in studies on speech production (Jescheniak & Levelt, 1994). Although we cannot rule out this interpretation for general effects of word frequency, it would not have predicted the category-specific differences for action-related and visually related words in our study. Neuroimaging research has consistently shown that rare words elicit stronger brain responses than words with high frequency (Fiebach et al., 2002; Chee et al., 2003; Kuo et al., 2003; Kronbichler et al., 2004; Carreiras et al., 2006), and electrophysiological studies have demonstrated that these effects can occur early after word presentation (Sereno et al., 1998; Assadollahi & Pulvermüller, 2003; Hauk & Pulvermüller, 2004a; Dambacher et al., 2006). These studies generally explain word frequency effects on the basis of lexico-semantic processing. We therefore hypothesized that if category-specific differences in brain activation arise at a lexico-semantic level, then word frequency should correlate negatively with activation amplitudes in the corresponding brain areas. This prediction was confirmed by our data.
It should be noted that word frequency is correlated with concept familiarity, i.e. with the frequency with which subjects encounter objects or actions in real life (Morrison & Ellis, 2000). One could argue that concept familiarity should also affect imagery processes. In the context of our study, it seems implausible to us that an effect of concept familiarity on mental imagery processes can explain our results, for two major reasons. (1) Subjects were not encouraged or forced by task instructions to form mental images of the objects and actions referred to by our stimulus words. It is therefore unlikely that our subjects engaged in imagery processes for concepts for which this is difficult to do. Imagery is more likely to occur for concepts for which it can be accomplished with relatively little effort, i.e. concepts with high familiarity. This would predict more brain activation for concepts with higher familiarity, i.e. a positive correlation with Frequency, which is the opposite of what we found. (2) We included measures of imageability and action-relatedness in our analysis, and effects of these variables were partialled out. This directly aims at removing any effects that are caused by mental imagery of objects or actions, independently of the underlying mechanism, as long as they are reflected in the subjects' ratings. Future studies should attempt to disentangle the effects of different types of word frequencies and concept familiarity in more detail. This may require selection of 'awkward' stimulus items, i.e. those that often occur in written or spoken language, but almost never in real life. We hold the view that the novel procedure applied in our present study presents a way to obtain essential information about the nature of category-specific activations from neuroimaging data.
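To make the logic of this analysis concrete, the following minimal Python sketch (with hypothetical variable names; it is not the authors' actual imaging pipeline) regresses per-word activation amplitudes on log word frequency while partialling out the imageability and action-relatedness ratings, in the spirit described above; the hypothesis predicts a negative slope for the frequency regressor:

```python
import numpy as np

def frequency_slope(amplitude, log_freq, imageability, action_relatedness):
    """Slope of log word frequency on per-word activation amplitudes,
    with imageability and action-relatedness partialled out.
    All inputs are 1-D arrays with one entry per stimulus word."""
    X = np.column_stack([
        np.ones_like(log_freq),  # intercept
        log_freq,                # predictor of interest
        imageability,            # covariate partialled out
        action_relatedness,      # covariate partialled out
    ])
    beta, *_ = np.linalg.lstsq(X, amplitude, rcond=None)
    return beta[1]  # negative slope: rarer words evoke stronger responses
```

Running this separately per word category and per region of interest would reproduce the kind of category-specific frequency test reported here.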
Effects of semantic variables
In addition to providing novel results on category-specific effects of word frequency, we also aimed at corroborating previous results on the semantic variables imageability and action-relatedness. Existing data on category-specific activation for highly imageable words are still inconsistent. In general, it appears that concrete and highly imageable words activate more brain areas than abstract and low-imageability words, but with considerable variability (Fiebach & Friederici, 2004; Scott, 2004). Stronger activation in fusiform brain areas for imageable words has been reported by several previous studies (Wise et al., 2000; Fiebach & Friederici, 2004; Sabsevitz et al., 2005), but some failed to find such effects (Jessen et al., 2000; Binder et al., 2005). In our study, we found the most reliable modulations of brain activation for Imageability at almost symmetrical locations in the left and right fusiform gyri. We could also show that these were very close to activation spots previously reported for object processing (Chao et al., 1999; Martin & Chao, 2001), although slightly more anterior (10 mm) than the mean coordinate obtained from the Chao et al. (1999) study. It has been demonstrated that the fusiform gyrus itself is not a homogeneous structure: for example, the lateral-medial dimension distinguishes between objects and tools, and the posterior-anterior dimension between simple and more complex visual features (Martin, 2007). In our data, there is considerable overlap between our activated clusters and previous results reported for object-related processing. Such results, as well as the double dissociation between brain loci for action-relatedness and imageability on the one hand and for action- and visually related words on the other, demonstrate that words referring to highly imageable concepts share a neuronal substrate with the systems involved in perceiving the corresponding objects.
The most reliable positive correlations with Action-relatedness were found in two adjacent spots in the superior and middle temporal gyrus. These areas have previously been associated with naming action-related objects (Martin et al., 1996; Tranel et al., 2005), verb and action word processing (Wise et al., 1991; Martin et al., 1995; Perani et al., 1999; Devlin et al., 2002; Davis et al., 2004), and action observation (Grezes et al., 2003). Martin et al. (1996) suggested that this region may be related to knowledge about biological motion, i.e. motion of animate agents such as during tool use (Puce & Perrett, 2003). In support of this hypothesis, it has been reported that eye, mouth and hand movements activate the posterior temporal cortex (Pelphrey et al., 2005). This is in line with the observation that neurons in the superior temporal sulcus in monkeys respond during action observation, but do not have motor properties and therefore no 'mirror' properties (Rizzolatti & Craighero, 2004), having rather been associated with perception of biological motion (Perrett et al., 1989; Jellema & Perrett, 2003). An fMRI study in humans showed that activation in the superior temporal sulcus can also be evoked by imagery of motion, again raising the question of what level of processing such activation reflects in action-word comprehension (Grossman & Blake, 2001). In a post hoc analysis of their fMRI data, Davis et al. (2004) provided evidence that this region is modulated by the action-relatedness of words in a one-back synonym monitoring task. We obtained this effect for silent single word reading, without explicit semantic task demands, and in addition showed that activity in this area for action words is modulated by word frequency.
Further activation for Action-relatedness was located in the hand premotor cortex of the left hemisphere. Although relatively weak, the lateralization and approximate location of this activation indicate that it corresponds to motor areas for the dominant right hand. It must be noted here that action-relatedness, as determined from our rating study, captured general action-related aspects (i.e. related to any body part or type of movement) rather than specific ones such as manipulability. It might therefore be surprising that we found activation specifically in the hand motor cortex of the dominant hand at all. Modulation of hand motor cortex activity by action observation has been documented previously by transcranial magnetic stimulation (TMS) (Fadiga et al., 1995; Strafella & Paus, 2000; Aziz-Zadeh et al., 2004) and neuroimaging studies (Buccino et al., 2001; Grezes & Decety, 2001). With respect to language, several behavioural and TMS studies have provided evidence that hand movements and hand motor areas are modulated by perception of speech in general and hand action-related words or sentences in particular (Seyal et al., 1999; Floel et al., 2003; Buccino et al., 2005; Pulvermüller et al., 2005; Boulenger et al., 2006), although the involvement of hand motor cortex in silent reading is still unclear (Seyal et al., 1999; Meister et al., 2003). Our data add further evidence that hand motor areas play a special role in general action understanding. It is also interesting to note that hand motor cortex is located between face and leg motor areas, and might therefore capture the overlap of activations produced by all three action word categories. It is therefore still possible that action-relatedness affects hand, face and leg motor areas, but was only detected in the hand area due to this overlap. In this case, hand motor cortex would not be 'special' with respect to action-relatedness. The correlation between Frequency and activation to action-related words in this area did not exceed our statistical threshold. The absence of such an effect can be explained by the fact that our action-word category comprised words referring to hand/arm, foot/leg as well as mouth/head actions. Different action word categories have been shown to activate different parts of the motor cortex in previous studies (Tettamanti et al., 2005). Variability across stimuli might therefore have obscured this effect. Our data demonstrate the feasibility of our approach for areas that are most consistently activated by action- and visually related words, but do not exclude the possibility that other brain areas exhibit category-specific effects for further subcategories of words. This issue should be addressed by future investigations. Interestingly, Frequency of action words also modulated brain areas in the left inferior parietal lobule and left inferior frontal cortex, which were associated with complex actions or action recognition in previous studies (Jeannerod et al., 1995; Rizzolatti & Luppino, 2001). This result suggests that correlating neural activity with word frequency, separately for different word categories, has the potential to reveal brain areas that are not found in conventional analyses.
Conclusion
Our study is the first to show that the frequency of occurrence of written words modulates brain activation for different word categories differentially. We interpret this as evidence that this activation reflects lexico-semantic processing stages. This has important implications for psychological conceptions of word meaning, as it shows that word meaning is grounded in systems that serve to interact with the external world, rather than in purely symbolic or abstract codes. Our approach of studying the effects of lexical variables on category-specific responses offers new perspectives for neuroimaging research into the neural basis of word and object processing.
"year": 2008,
"sha1": "351344ca9b5d718b08458303032ac54b8ae352d6",
"oa_license": null,
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1460-9568.2008.06143.x",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "351344ca9b5d718b08458303032ac54b8ae352d6",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
Curing characteristics and rheological properties of bentonite-filled rubber blends
The study deals with the examination of the rheological behaviour of rubber blends filled with bentonite. The filler-polymer as well as the filler-filler interactions were studied and determined from frequency sweep and strain sweep rheological measurements. The natural bentonite used was extracted from the locality of Jelsovy Potok. It had fine fractions with particle sizes of 15 μm and 45 μm and was added into rubber blends as a partial replacement of a commonly used filler. The rubber blends were characterised on the basis of their curing characteristics (minimum torque ML, maximum torque MH, optimum time of cure t(c90), processing safety of blend ts). Moreover, the complex viscosity and the Payne effect were also determined. The required measurements were made using a PRPA 2000.
Introduction
Recently, the properties of filled rubber blends have attracted significant attention in many contexts. Many experiments have shown that the addition of even small loadings of fillers to the polymer matrix can result in order-of-magnitude property enhancements over those of the neat polymer [1].
The preparation and development of rubber blend formulations requires a number of factors to be taken into account. First of all, it is necessary to consider the kind of application for which the blend will be prepared. The influence of the chemicals and additives, which affect the price, properties and processability of the rubber blend, is also a very important factor. Moreover, the environmental aspect of production must be taken into account. Various mineral-based alternatives are used in this regard. These fillers are important in terms of lower environmental impact; however, in order to implement them in practice, they have to be studied and tested precisely. Development is complete only when the product can be produced repeatedly and routinely in the required quality [2].
In recent years the use of layered silicates (clay minerals) as fillers in polymers has attracted great interest. The dispersion of individual high-aspect ratio clay platelets to form nanocomposites has been shown, even at very low filler concentrations, to lead to dramatic improvements in several properties: increased modulus and impact strength, better dimensional stability, higher heat distortion temperature, reduced flammability, and enhanced barrier properties. These improvements are primarily due to the stronger interfacial interaction between the matrix and clay platelets (approx. 1 nm thick), as compared with conventional filler-reinforced systems [3,4].
Bentonite is one of the most popular clay rocks, with exceptional adsorption properties. The main clay mineral present in bentonite is montmorillonite, which belongs to the smectite mineral group, and the properties of bentonite result from the crystal structure of this group. The particles of montmorillonite have negative charges on their faces due to isomorphic substitutions in the structure. This negative charge is compensated by the presence of cations in the interlayer space, which are not fixed and have the character of so-called "exchangeable cations" (i.e. Na+, K+, Li+, Mg2+, Ca2+) [5,6]. This study is focused on the preparation of rubber blends containing natural bentonite.
Materials
The selected samples of natural bentonite were taken from the area of Jelsovy Potok. The prepared samples differed in particle size, representing fine fractions with particle sizes of 15 μm and 45 μm. Using EDX analysis, the content of individual oxides in the natural bentonite samples was determined; the oxides are listed in Table 1. The prepared samples were used as a partial replacement for commonly used carbon black fillers.
Preparation of bentonite-filled rubber blends
The five rubber blends were prepared by two-step mixing in a laboratory mixer (Brabender Plastograph® EC plus) with a chamber volume of 80 cm3 at 50 revolutions per minute. The additives were incorporated in the predetermined order; the accelerator and sulphur were added at the end of the mixing process to avoid premature crosslinking. The prepared B 15 and B 45 samples were added at lower loading (5 phr) and higher loading (10 phr). The composition and designation of the rubber blends is given in Table 2. The curing characteristics determined were minimum torque (ML), maximum torque (MH), processing safety of the blend (ts) and optimum time of cure (t(c90)). Samples of the respective blends were tested at 160 °C with an oscillation arc of 1° and a frequency of 1.67 Hz. The cure rate index (CRI) of the rubber blend was calculated using the standard relation CRI = 100 / (t(c90) − ts). The low-frequency region of the rheological plot reflects the effect of the structure of natural bentonite on the viscoelastic properties of the nanocomposites. Therefore, to explore the influence of clay on the rheological behaviour of the nanocomposites, the dependence of the modulus and η* on the frequency (ω) was studied at low frequencies. The complex viscosity of the bentonite-filled rubber blends was determined using an oscillating rheometer (PRPA 2000).
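As a quick worked illustration of the CRI relation above (the relation is the standard definition; the example cure times below are hypothetical, while the measured values appear in Table 3), a minimal Python helper could look like this:

```python
def cure_rate_index(t_c90, ts):
    """Cure rate index (min^-1) from rheometer cure times in minutes:
    CRI = 100 / (t(c90) - ts)."""
    return 100.0 / (t_c90 - ts)

# Hypothetical example: t(c90) = 6.5 min, ts = 1.8 min  ->  CRI ~ 21.3 min^-1
print(cure_rate_index(6.5, 1.8))
```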
Payne effect.
The Payne effect is a characteristic feature of the dynamic viscoelastic behaviour of rubber composites containing fillers. It manifests itself as a dependence of the storage and loss moduli on the amplitude of the applied strain: above a critical strain amplitude, the storage modulus decreases rapidly with increasing amplitude at relatively large deformations [7,8]. The Payne effect was assessed on the basis of the storage shear modulus (Gˊ) data, obtained with an oscillating rheometer (PRPA 2000). The strain test was performed in the range from 0.28% to 100% strain at 1 Hz and 100 °C. The Payne effect (ΔGˊ) was calculated by subtracting the value of the storage shear modulus at 100% strain from that at 0.28% strain. The difference in Gˊ at 0.28% and 100% strain was taken as a measure of the filler-filler interaction [9].
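The ΔGˊ calculation described above is easy to express in code. The sketch below (Python; the array names are illustrative, and real rheometer exports will differ) picks the storage modulus values nearest 0.28% and 100% strain from a strain sweep and returns their difference:

```python
import numpy as np

def payne_effect(strain_pct, g_prime):
    """Payne effect dG' = G'(0.28% strain) - G'(100% strain) from a
    strain sweep of the storage shear modulus G'; a larger dG' points
    to a stronger filler-filler network."""
    strain = np.asarray(strain_pct, dtype=float)
    g = np.asarray(g_prime, dtype=float)
    g_low = g[np.argmin(np.abs(strain - 0.28))]    # G' nearest 0.28% strain
    g_high = g[np.argmin(np.abs(strain - 100.0))]  # G' nearest 100% strain
    return g_low - g_high
```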
Curing characteristics
The curing characteristics of rubber blends clearly reveal the influence of the investigated fillers on the properties of the final vulcanizates and the tendency of the filler particles to interact [9]. The results of the curing characteristics for the bentonite-filled rubber blends and the standard rubber blend (S) are shown in Table 3. Figure 1 shows the curing characteristics for the standard blend and the bentonite-filled rubber blends. The values of minimum torque (ML) for all bentonite-filled rubber blends were lower in comparison with the standard blend. The lower the ML value, the weaker the filler-filler interaction, resulting in lower viscosity of the blend [10].
In the case of maximum torque, almost all of the rubber blends exhibited lower values in comparison with the standard blend. This can be attributed to a decrease in the crosslinking density of the vulcanizates and in the viscosity of the rubber compounds, and it indicates weak rubber-filler interaction [11,12]. Table 3 shows the variation of the maximum torque value for the B 15-5 sample, which has a higher value in comparison with the standard blend. A higher maximum torque indicates a good state of cure, and the high stiffness suggests that the blend is likely to have good mechanical properties [13].
In practice, the time ts indicates the processing safety of the blend: up to this time no significant crosslinking occurs, while beyond it disulfide bridges form under the temperature and pressure applied to the blend, building a three-dimensional network that changes the material from a plastic to an elastic state. Compared with the standard (S), the values increased slightly. From this we can assume that fillers based on natural bentonite have a positive effect on the processing properties of the blends [14]. The optimum time of cure (t(c90)) should not be exceeded during processing, mainly because of the cure itself and the risk of so-called pre-cured blends. All rubber blends exhibited lower values of the optimum time of cure compared with the standard blend. The values of processing safety for all rubber blends were higher compared with the standard blend (S).
In the case of the curing rate index (CRI) (Figure 2), all blends exhibited higher values compared with the standard blend. A higher CRI implies that the cure rate is fast, which is desirable for increasing the yield in most industrial applications [13].
Payne effect
The quality of the polymer-filler and filler-filler interactions was evaluated by determining the dynamic properties of the filled blends, with emphasis on the strain dependence of the storage shear modulus. The results for the dependence of the storage shear modulus on deformation for the rubber blends are shown in Figure 3 and Figure 4. The measured results show that the B 15 blend with lower filling reached an almost identical storage shear modulus before and after curing as the S blend. It can be seen from Figure 4 that in the area of low deformation, the value of the storage shear modulus of the B 15 blend with higher filling is significantly lower compared with the S blend. The dependence of the storage shear modulus on deformation for the S and B 45 blends with higher and lower filling, before and after cure, shows that in the B 45 blend with the higher filling there was a significant decrease in storage shear modulus values in the area of low deformation. In the B 45 blend with lower filling, in the area of low deformation before cure, the values of the storage shear modulus decreased slightly compared with S, but after cure they decreased more significantly.
In this study, the Payne effect was calculated from the difference between storage shear modulus values before and after cure, and the results are shown in Table 4. Figure 5 shows that, from the aspect of the given interaction, the Payne effect values for almost all of the examined samples were significantly lower compared with the S blend. For blends where the Gˊ value decreased significantly, this phenomenon can be identified as a consequence of the interruption of the filler-filler interaction, and there could also be a "disassembly" of the three-dimensional rubber-filler network [9]. However, the B 15 blend with lower loading showed a higher Payne effect value in comparison with the standard sample, indicating stronger filler-filler interaction.
Complex viscosity
The following figures show the dependence of the complex viscosity (η*) on the oscillation frequency for the bentonite-filled rubber blends as well as the standard blend. Figure 6 shows that the complex viscosity of all blends decreases with increasing frequency, indicating non-Newtonian, pseudoplastic behaviour of the materials [15]. The η* values of the B 15 blend with lower filling approach those of S with increasing frequency; this finding indicates a good interaction of the filler in the polymer matrix and is also confirmed by the results for the minimum torque [4]. On the other hand, the B 15 blend with higher filling shows a slight decrease in viscosity values, which was probably caused by the higher filling amount. From Figure 7, we can conclude that the viscosity values for the B 45 blends, with both higher and lower filling, decreased significantly compared with the standard blend. We hypothesize that this behaviour may have been influenced by the larger particle size, indicating a worsening of the filler-filler interaction.
Conclusion
The work dealt with the preparation of model rubber blends using fillers based on natural bentonites. Its main aim was to assess the interaction of natural bentonite in a polymer matrix, which was studied through the curing characteristics, the Payne effect and the complex viscosity of the blends.
"year": 2021,
"sha1": "c806e98d0c7da51feed81051d8e491db37bf6f3e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1199/1/012037",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "c806e98d0c7da51feed81051d8e491db37bf6f3e",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
The evolution of physiotherapy in the multidisciplinary management of persons with haemophilia (PWH): A scoping review
Abstract. Introduction: Haemophilia is a rare congenital bleeding disorder, and the most common manifestation is spontaneous bleeding in muscles and joints. Despite the benefits linked to recent and dramatic pharmacological advances, at least in high-income settings, many patients still develop musculoskeletal dysfunctions during their lifetime, which must be managed by physiotherapists within a multidisciplinary team. The aim of this scoping review is to map the available evidence by providing an overview of the past and present physiotherapy scenario in persons with haemophilia (PWH). Materials and methods: The review was conducted according to the guidelines of the PRISMA extension for scoping reviews. Scientific articles on physiotherapy and sport interventions for PWH published from 1960 up to September 2021 were included. The search was conducted on the e-databases PubMed and PEDro without restrictions on study design. Results: Sixty-eight articles were included, 52 related to rehabilitation and preventive physiotherapy and 16 to sport. The results are reported in chronological order and divided into two categories: (1) rehabilitation and preventive physiotherapy; (2) sport activities. Conclusions: This is the first scoping review on physiotherapy in haemophilia; the existing evidence on this topic allowed us to underline how the role of the physiotherapist changed over time. Historically this specialist intervened only after an acute bleed or surgical operation, but he now has a pivotal role in the multidisciplinary team that acts to improve, from birth, the quality of life of the PWH. His activity is also closely intertwined with sport promotion and supervision.
INTRODUCTION
Haemophilia is a rare inherited disorder characterised by the deficiency of coagulation factor VIII (FVIII) in haemophilia A or factor IX (FIX) in haemophilia B. 1 The main clinical manifestation is recurrent bleeding, resulting in different degrees of organ damage. Haemorrhagic manifestations depend largely on the degree of coagulation factor deficiency, and the most common clinical signs occur in the musculoskeletal system, such as haemarthrosis, synovitis, haematomas and chronic arthropathy. 2,3 Haemarthrosis is the hallmark of haemophilia. After the occurrence of three or more bleeds into a single joint within a consecutive 6-month period, the joint is referred to as a target joint. 4,5 Ankles, knees and elbows are those most frequently affected, followed by shoulder and hip. 6,7 The clinical signs of joint disease are reduced mobility and swelling due to synovial hypertrophy, as well as muscle and capsular contractures. 6 Synovitis has long been thought to be the triggering event; it evolves in parallel with cartilage damage, the two influencing each other and both being sustained by the presence of blood in the joint.
In PWH the ability of the synovium to remove blood is thwarted by repeated haemorrhages, leading to deposits of haemosiderin and synovial hyperplasia. 7,8 The inflamed synovium is highly vascularised and friable, and thus bleeds easily even following minor trauma, resulting in a vicious circle that is difficult to break. Repeated episodes of haemarthrosis lead to joint remodelling and ultimately to arthropathy, a disabling chronic condition characterised by damage in the cartilage and bone, chronic pain and reduced quality of life. 9,10 Another frequent complication of haemophilia is the occurrence of haematomas, that typically result from traumatic events, even minor ones. They can be subdivided into subcutaneous, subperiosteal or more frequently muscular. Muscle haemorrhages occur in approximately 10%-20% of PWH and account for 10%-15% of all haemorrhagic events, causing motion limitations, disability, and impaired quality of life. 11,12 Muscle bleeding increases the risks of developing the compartment syndrome, cysts and pseudotumours.
Until 50 years ago, the pharmacological treatment of PWH was almost non-existent. Whole blood or plasma were the only available weapons, and of very limited efficacy, so that patient life expectancy was 15-20 years. Patients who survived adolescence had severe musculoskeletal damage and were often confined to wheelchairs or bedridden. 13 Treatment has changed over the years from an episodic therapy useful to stop acute bleeding to prophylactic regimens aimed at preventing bleeding. Many drugs are currently available, from coagulation factor concentrates with standard or extended plasma half-lives to new subcutaneous non-replacement drugs such as emicizumab and others in the pipeline. These products have made it possible to improve quality of life and life expectancy in PWH, provided a multidisciplinary team is in place that helps to maintain what is obtained with prophylaxis. 14,15 The physiotherapist is one of the specialists who must be part of the comprehensive team and should be present throughout the PWH's life: in children, for a primary action that avoids the establishment of incorrect postures and behaviours that risk undermining the musculoskeletal structure; in adults, for post-surgical rehabilitation or to maintain the residual functional activity after chronic joint damage; and at all ages, to promote and supervise exercise and sport activities. 15 With this background, the aim of this scoping review is to describe the past, present and future role of physiotherapy in the multidisciplinary management of PWH.
MATERIALS AND METHODS
The PRISMA model for scoping reviews was followed. 16 Search terms were used as found in the titles and abstracts of the articles. Articles that met the following inclusion criteria were selected: -Population: included male subjects with inherited haemophilia A or B, with no age restrictions.
-Intervention: articles that described physiotherapy and/or sport programs for PWH, whether for preventive or recovery purposes.
-Language of publication: articles written in English and Italian.
-Years of publication: from 1960 to September 2021.
-Study design: no restrictions on the design of selected articles.
-Relevance to the research aim.
RESULTS
A total of 68 articles were included in the scoping review. In the articles published before the year 2000, the focus was almost exclusively on rehabilitation following surgical interventions. It was only in the first two decades of the current century that rehabilitation started to be featured both pre- and post-surgery and that physiotherapy gradually took on a key role in the comprehensive management of PWH.
Rehabilitation and preventive physiotherapy
In the 60s and 70s of the last century only chapters of textbooks dealt with physiotherapy, describing it exclusively as a rehabilitative weapon following an acute bleeding event, 17-22 and we had to wait until 2005, when Stephensen 29 first published an article emphasizing the role of physiotherapy not only after surgery but also pre-operatively, with the goal of allowing a faster post-operative recovery.
In the 90s, first Heijnen 23 The specific role of the physiotherapist as a healthcare professional is evaluated in five articles. 15,36,37,41,50 Initially, the physiotherapist was seen as a specialist who comes into action only in the post-trauma or post-surgery recovery phase, but since the 2000s this professional has acquired a much wider role and responsibilities. 23 In 2011, Souza et al. 43 published a systematic review on physical activity for PWH. The authors highlighted that PWH often choose, from an early age, to limit any physical activity. Developing a specific exercise programme that can be done easily and consistently is therefore of paramount importance. In 2018, Boccalandro et al. 45 published the results of a study on the effectiveness of a multidisciplinary physical activity programme tailored for older PWH born before 1975, that is, at a time when replacement therapy was still in its infancy and arthropathy was inevitable.
Resistance training is also very important. Engelbert et al. 49 showed that the lower aerobic capacity in children with haemophilia compared with healthy controls is associated with lower levels of performed physical activity. Furthermore, a systematic review published in 2020 53
Sport activities
We identified 16 articles dealing with the performance of sports by PWH under the supervision of a physiotherapist. 67-82 In 1996, Buzzard 69 published the first review dealing with sport in PWH. The review highlighted the need for PWH to start physical activity as soon as possible, encouraging them to continue it regularly. This strategy was at variance with the one previously prevailing, that is, to limit motion as much as possible in PWH to reduce the bleeding risk. Among the sports identified as suitable for PWH were swimming and golf, among others. One study 73 reported that sport practice was followed by an improvement in health-related quality of life, with no increased risk of bleeding nor development of target joints.
Physiotherapy programmes carried out in water have long been employed, as reported by von Mackensen et al., 57 Passeri et al. 58 and Mazloum et al. 59 It appears that hydrotherapy helps PWH to improve resistance, physical strength and, more generally, quality of life. 57-59 Physiotherapy in water has always been a rehabilitation cornerstone, as witnessed by the fact that when PWH were advised to start a sport, the first recommended choice was swimming, a sport considered to have a low bleeding risk. 49
Conclusive remarks
This scoping review on haemophilia and physiotherapy has allowed us to underline how the role of the physiotherapist changed over time. During the first two decades of the 2000s, the drugs available for the treatment of haemophilia underwent further improvements, so that a more comprehensive and multidisciplinary management of PWH came to involve the orthopaedist, physiotherapist and haematologist together. The importance that the physiotherapist has acquired in the comprehensive management of the PWH within the multidisciplinary team has been more and more recognized, so that nowadays this professional plays a pivotal role in the management of haemophilia. 15,23 The improvement of therapies and the multidisciplinary management
Future developments
Recently the "European Haemophilia Consortium and EAHAD Physiotherapy Committee 86 " published eight principles that outline the standards that the physiotherapists dealing with haemophilia should follow. These professionals will need to collaborate with other specialists in the management of PWH, who in turn must have easy and consistent access to rehabilitation treatments.
The innovative therapies that became available in the last 10-20 years are a definite benefit for PWH, through the attainment and maintenance of consistent levels of haemostatic competence and the avoidance of the peaks and troughs that characterized the traditional therapeutic approaches. This situation is a bonus also for the physiotherapist, who can handle these patients with much more confidence.
The efficacy of these therapies makes patients much more independent of the specialist in the treatment centre. Thus, we envisage the increasing development of telemedicine approaches, designed to allow home rehabilitation and exercising while also allowing remote supervision by the physiotherapist.
DATA AVAILABILITY STATEMENT
This is a scoping review; all data described here are present in published reports and are available at https://pubmed.ncbi.nlm.nih.gov/ and https://pedro.org.au/, or upon request.
"year": 2022,
"sha1": "5a6e3965c63b368880f2b158ae84a0e690771736",
"oa_license": "CCBYNCND",
"oa_url": "https://www.research.unipd.it/bitstream/11577/3457578/1/Haemophilia%20-%20Physio.pdf",
"oa_status": "GREEN",
"pdf_src": "Wiley",
"pdf_hash": "5132169f8a4a7ab32befd797d2ace8361efcc805",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Novel Case of Symptomatic BK Viraemia in a Patient Undergoing Treatment for Hodgkin Lymphoma
Symptomatic BK viral infection in the immunocompromised host is well described, most commonly seen in renal transplant recipients, bone marrow transplant recipients, and HIV positive patients. The present case describes a novel clinical scenario of symptomatic urological BK virus infection in a patient receiving treatment for Hodgkin lymphoma. This case highlights the importance of casting a wide diagnostic net for adverse events encountered with novel therapeutic agents or regimens.
Case Presentation
B-K, a 38-year-old male, presented with symptoms of severe bladder cystitis over a 2-week period, during treatment for nodular sclerosing Hodgkin lymphoma stage IVB. At the time of presentation, he had completed four cycles of escalated BEACOPP and was receiving the second of four planned standard BEACOPP cycles.
The escalated BEACOPP regimen consists of doxorubicin 35 mg/m² and cyclophosphamide 1250 mg/m² on day 1, etoposide 200 mg/m² on days 1 to 3, procarbazine 100 mg/m² on days 1 to 7, vincristine 1.4 mg/m² and bleomycin 10,000 IU/m² on day 8, and prednisone 40 mg/m² on days 1 to 14. Doses of cyclophosphamide, doxorubicin, and etoposide are decreased in the standard regimen. Assessments after four cycles of escalated BEACOPP revealed that the patient was both PET and CT negative for Hodgkin lymphoma.
Treatment cycles 1-4 were well tolerated with expected grade 3 or grade 4 hematological toxicities and no infective complications. Of note, lymphopenia of 3-12 days' duration occurred, with a nadir of 0.1 × 10⁹/L each cycle. Bleomycin was omitted in cycle 6 due to pulmonary toxicity. Two admissions were required for upper respiratory tract infection during cycle 5: the first was treated with piperacillin/tazobactam, gentamicin, amoxicillin trihydrate/potassium clavulanate, and roxithromycin, and the second was treated with amoxicillin trihydrate/potassium clavulanate and roxithromycin.
Mr. B-K presented to the emergency department on day 9 of chemotherapy cycle 6, with a 2-week history of suprapubic pain, painful urinary frequency, nocturia, weak stream, and dysuria and a 24-hour history of urge incontinence and sharp thoracolumbar paraspinal pain.
Mr. B-K was in severe pain with suprapubic tenderness on examination. Oxybutynin was administered without effect.
A urological consultation was sought; however, cystoscopy was not performed due to neutropenia. A presumptive diagnosis of overactive bladder was made, and the patient was discharged on solifenacin 5 mg po qd for symptomatic relief.
After discharge from hospital, the patient's symptoms were progressively more severe and distressing. Ongoing investigation included urethral swab for herpes simplex DNA and varicella DNA which were negative. Urine was negative for Chlamydia trachomatis and Neisseria gonorrhoeae nucleic acid. Serology excluded active CMV (IgM negative), EBV, HBV, HCV, HIV, leptospirosis, mycoplasma pneumonia, VZV, Q fever, and Bartonella henselae. No test for adenovirus was performed. Renal function was normal, and a renal tract ultrasound excluded renal and postrenal causes for his symptoms.
Decoy cells were noted during urine cytological examination, and electron microscopy was performed, demonstrating intracellular viral inclusions suspicious for polyomavirus cystitis (Figure 1). Blood PCR for BK virus DNA was positive, with a viral load of 8316 copies/mL (PCR TaqMan probe) [1], consistent with BK viraemia. Urine BK viral load is not routinely measured in our laboratory. Given the clinical symptoms, findings on urine electron microscopy, and time course, a presumptive diagnosis of BK virus disease was made. It is unknown whether this represented a primary infection or reactivation of latent virus.
In light of the limited evidence of efficacy of treatment with cidofovir, vidarabine, and leflunomide [2] for BK viral nephropathy and the potential renal toxicity of cidofovir, no specific antiviral therapy was initiated at the time of diagnosis of BK viraemia. Cycle 7 of chemotherapy was withheld due to the severe symptoms of cystitis, and the patient showed significant clinical improvement. The risk-benefit of continuing chemotherapy in terms of Hodgkin lymphoma control was considered at length, and it was decided not to complete the last 2 cycles of chemotherapy because of the risk of ongoing BK virus infection, including cystitis and nephritis, with continued immunosuppressive therapy. The patient's symptoms settled over the next 2-3 weeks and he has been asymptomatic since.
Four months after BK viraemia was diagnosed, it was undetectable in the blood. The patient remains in radiological and clinical remission from Hodgkin lymphoma 28 months and 40 months posttreatment.
Discussion
BK virus is one of the polyomaviruses. Serological evidence of BK virus infection in 70-90% of asymptomatic healthy adults demonstrates its high prevalence in studied populations [2][3][4]. The primary route of transmission remains unclear, with both respiratory and oral routes proposed [2,5]. Despite rarely presenting as a clinical problem, reactivation of latent BK infection in the renal tubular epithelial and urothelial cells can occur in the setting of cellular immunosuppression. A nonspecific inflammatory reaction is triggered by virus mediated cell lysis, and in immunocompetent hosts both cellular and humoural immunity is subsequently activated [4].
Symptomatic BK viremia is a relatively common finding following renal transplant and bone marrow transplant, in patients with HIV infection, and less common in heart and liver transplant recipients, with nephropathy or hemorrhagic cystitis the classic presentations [4,6]. The relationship between immunosuppression and viral reactivation is complex [7], and it is not understood why symptomatic BK reactivation is reportedly only rarely a complication of chemotherapy alone. There are two case reports of symptomatic BK virus associated with Hodgkin lymphoma. The first is a case of viruria in a 15-year-old patient 2 weeks following chemotherapy. This patient had similar symptoms to our case; however, the patient did not have associated viremia [8]. A second case of renal failure with BK viruria in a 3-year-old child with Hodgkin lymphoma differed from our case in several ways. The child had the rare recessive disease cartilage-hair hypoplasia causing reduced numbers of T-lymphocytes, which may have independently explained the symptomatic BK virus infection. Furthermore, it is possible that this was a case of primary BK infection rather than reactivation of infection [9].
The question of whether BK virus is "peculiar to the kidney" in immunosuppression or a problem of immunosuppression generally, posed in the 1970s [10], has been superseded by the more clinically focused question: in whom does symptomatic, clinically significant disease occur [4]? Our case suggests that this question is still being answered.
Hirsch classifies three BK virus diagnostic states: the first is serological evidence of infection (BK virus infection), the second viral activity (BK virus replication), and the third symptomatic disease (BK virus disease) [2]. Leung et al. examined the correlation between BK viruria and subsequent hemorrhagic cystitis in a small study of patients receiving allogeneic hematopoietic stem cell transplants and concluded that viruria, although not directly causative, may be an important cofactor in the development of hemorrhagic cystitis [11]. This finding has since been confirmed by others [12][13][14] and has led to the practice of replication surveillance in transplant candidates.
In neutropenic patients, urinary tract infection cannot be excluded by negative urinary leukocytes and culture; cytology and other tests are required. The presence of significant amounts of haematuria can be associated with severe thrombocytopenia, coagulopathy or viral infection (such as cytomegalovirus or adenovirus), or acute hemorrhagic cystitis (in patients receiving chemotherapeutics). Cyclophosphamide can cause hemorrhagic or nonhemorrhagic cystitis due to irritation of the bladder by the metabolite acrolein [6]. However, when related to cyclophosphamide use, it is dose dependent, with onset within 48 hours of toxic dose initiation. Acrolein may cause asymptomatic damage to the bladder mucosa, setting up a susceptibility to viral infection in the immunosuppressed state that ensues [15]. Viral hemorrhagic cystitis is associated with immunosuppression, particularly of cellular immunity. It has been determined that BK virus specific T cells are undetectable in the peripheral blood of immunosuppressed patients with polyomavirus associated nephropathy, reappearing, however, with immune reconstitution [5].
Diagnosis of BK-related cystitis is difficult. Due to the ubiquitous nature of BK virus in the general population, serology is not helpful. Urine cytology may reveal decoy cells (enlarged nuclei with basophilic nuclear inclusions); however, they may be difficult to differentiate from malignancy, and cytology does not distinguish between the polyomaviruses. Urine polymerase chain reaction does not distinguish latent from active infection. Viral culture is not clinically useful due to speed of growth. Urine polymerase chain reaction (PCR) is also a poor disease correlate, as asymptomatic shedding is common. Plasma polymerase chain reaction is the standard for the diagnosis of BK viraemia and correlates with nephropathy and inversely with immune status [16]. In a prospective study, Erard et al. reported that BK plasma viral loads >10⁴ copies/mL were predictive of hemorrhagic cystitis [14]. In the present case, the highest viral load was below this threshold; however, the viral load may have peaked prior to diagnosis. The diagnosis of BK virus hemorrhagic cystitis is best made by urine viral load in the context of clinical symptoms.
Current treatment recommendation for BK viraemia depends on the organ or organs involved. There is little evidence supporting use of antivirals in polyomavirus associated nephropathy or hemorrhagic cystitis [5]. Studies reveal that traditional antivirals have no efficacy, and evidence for use of intravenous cidofovir has been difficult to extract due to the potential confounding effect of reducing immunosuppression. Recommended management of renal disease involves reduction, substitution, or discontinuation of immunosuppressive treatments, which was the strategy employed for our patient. Other authors have managed bladder involvement by irrigation and symptom control [5] or local bladder treatment with cidofovir with good effect; however, replication of these findings is needed [17].
Conclusion
This case identifies BK disease in a patient having aggressive chemotherapy for high risk Hodgkin lymphoma. Classically BK disease presents in posttransplant patients; however, with the introduction of aggressive and innovative therapies consideration must be given to atypical infections in symptomatic patients. BK disease warrants consideration as a differential in patients undergoing aggressive curative treatment regimens who develop symptomatic cystitis. | 2016-05-12T22:15:10.714Z | 2014-06-24T00:00:00.000 | {
"year": 2014,
"sha1": "e474e776fe82cdfa8dd739f0ea1c2aac1c326b0a",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/criid/2014/909516.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "88a87a3c48e6c3623c69bd20240ec0e29d53d776",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Conservation of Agricultural Soil Using Entomopathogenic Fungi: An Agent With Insecticides Degradation Potential
A major current focus in agricultural soil conservation is to ensure that pest control programs are sustainable, and entomopathogenic fungi have therefore been considered and extensively studied as biopesticides. However, the ecological role of entomopathogenic fungi in degrading insecticides in soil is not well understood. In this study, the potential of the entomopathogenic fungus Metarhizium anisopliae (Met.) to degrade two common agricultural insecticides, chlorpyrifos and cypermethrin, was investigated by introducing M. anisopliae into autoclaved soils artificially contaminated with 500 ppm of chlorpyrifos or cypermethrin. The concentrations of chlorpyrifos and cypermethrin were determined after 21 days using UPLC with a PDA detector. The residues, rate, and percentage of degradation in insecticide-treated and control soils were compared using an independent t-test (SPSS 20.0). The degradation of both insecticides in Met.-treated soil (>80%) was significantly higher than in control soil (47-61%). The chlorpyrifos and cypermethrin residues in Met.-treated soils were 19.39±0.10 ppm and 19.68±0.36 ppm, respectively, significantly lower than in the controls (chlorpyrifos 262.6±7.6 ppm and cypermethrin 194.4±4.3 ppm; p<0.05). The results suggest that M. anisopliae may play a role in the bioremediation of soil.
Introduction
Insecticides contribute significantly to the success of modern farming and food production. Although biological control approaches have been commercially applied for some insect species [1], we still rely heavily on insecticides to limit insect damage in agriculture [2]. Insecticides such as chlorpyrifos and cypermethrin are commonly used in crops like corn, pepper, and potatoes, owing to their effectiveness in controlling soil-dwelling pests. Organophosphorus pesticides such as chlorpyrifos were once regarded as the most widely used insecticides, accounting for an estimated 34% of worldwide insecticide sales [3]. However, these insecticides are highly persistent in the ecosystem, introducing the possibility of incorporation into our food chain and thus affecting ecosystems, including human beings [4]; indeed, Singh and Walker [5] suggested that these compounds possess high toxicity to mammals.
Extensive application of insecticides has led to soil contamination and therefore lowered soil fertility. The problem is accumulating owing to the lack of effective and ecologically friendly methods for the removal of toxic pollutants [6]. In general, physical and chemical clean-up technologies are relatively expensive, less sustainable and often inappropriate. Boopathy [7] applied bioremediation to insecticide-contaminated soil using microorganisms, and the results were promising; but when bacteria were applied to degrade chlorpyrifos, its antibacterial metabolite 3,5,6-trichloro-2-pyridinol (TCP) prevented the proliferation of the degrading bacteria [8], so that the compound persisted in the environment.
This impediment to the bioremediation of chlorpyrifos-contaminated soil can be resolved by using fungi, as Chen et al. [9] demonstrated for the biodegradation of chlorpyrifos and TCP by the fungus Cladosporium cladosporioides Hu-01. Abd El-Ghany and Masmali [10] showed that a type of entomopathogenic fungus, Metarhizium anisopliae (Met.), was able to biodegrade more than 90% of malathion. Ong et al. [11] investigated the interaction between M. anisopliae and chlorpyrifos in the control of the house fly and found that the residues of chlorpyrifos were significantly lower compared with the control in potato-dextrose agar (PDA) culture media after 14 days. This stimulated our interest in the possibility of M. anisopliae biodegrading chlorpyrifos in the common substrate, soil. In this study, we aimed to investigate the ability of M. anisopliae to degrade chlorpyrifos and cypermethrin in soil by analysing the residues after a defined incubation period.
Fungal Culture
A Metarhizium anisopliae (Met.) Sorokin strain was isolated from the spores that form on the Oryctes rhinoceros (L.) beetle. The isolate was batch cultured on 10 Petri dishes containing potato-dextrose agar (PDA) at 27 °C for 30 days. The conidial suspension used in the tests was harvested from the surface of young colonies using a sterilized L-rod and transferred aseptically to a tube containing a mixture of 0.1% (v/v) Tween 80 and autoclaved distilled water. The stock suspension was standardized at a concentration of 3.5 × 10⁸ conidia ml⁻¹ using a Neubauer haemocytometer.
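For illustration, the haemocytometer standardization can be sketched as follows (Python; the square counts and dilution factor are hypothetical, and the calculation assumes counts from the 1 mm × 1 mm squares of an improved Neubauer chamber with 0.1 mm depth, so each square corresponds to 10⁻⁴ mL):

```python
def conidia_per_ml(square_counts, dilution_factor=1):
    """Conidial concentration from a Neubauer haemocytometer:
    conidia/mL = mean count per 1 mm^2 square * 1e4 * dilution factor
    (each square at 0.1 mm chamber depth holds 1e-4 mL)."""
    mean_count = sum(square_counts) / len(square_counts)
    return mean_count * 1e4 * dilution_factor

# Hypothetical counts from four corner squares of a 1:1000 dilution:
print(conidia_per_ml([33, 38, 35, 34], dilution_factor=1000))  # 3.5e8 conidia/mL
```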
Identification of fungus
Conidial morphology was observed using a compound microscope (Leica® DM500) with Köhler illumination. Samples were prepared as proposed by Humber [12]: conidia were picked up with an insect pin with the assistance of a stereo microscope (Olympus SZ). The morphology was identified according to the Key to Fungal Entomopathogens [12]; Metarhizium was confirmed by its formation of short to long chains of conidia with rounded to broadly conical apices, branched and densely intertwined conidiophores forming a compact hymenium, and conidia borne in parallel chains and green in mass (Figure 1).
Degradability test on soil
Soil samples were collected from a field with no history of insecticide application at Bayan Lepas, Penang, Malaysia, from a depth of 10-20 cm. The soil samples were sieved through a 90-mesh sieve to remove plant debris and stones, then autoclaved and stored at 4 °C. Ten grams of sterile soil were introduced into Petri dishes, with three controls: 1) soil, 2) soil + insecticide, 3) soil + fungal inoculum, and the treatment (soil + insecticide + fungal inoculum), in five replicates. Each of the insecticides was added at 500 ppm, and inoculations were done with 2 mL of fungal spore suspension (3.5 × 10⁸ conidia ml⁻¹). The Petri dishes were incubated at 30 °C for 21 days and then subjected to insecticide residue analysis.
Insecticidal residue detection
A standard curve was initially generated to determine the retention time for each tested insecticide by preparing serial dilutions of five standards ranging in concentration from 50 to 500 ppm, weighing analytical grade chlorpyrifos (99.7%) and cypermethrin (94.3%) (Sigma-Aldrich, Malaysia) into a 25-ml volumetric flask. Acetonitrile (HPLC grade, Fisher Chemical) was used as the solvent. A modified solvent direct-immersion extraction (SDIE) [13] was used to extract insecticide from the soil. Two grams of homogenized soil from treatments and controls were first extracted using acetone, water, and an insecticide-specific solvent (acetone for chlorpyrifos and hexane for cypermethrin) in a 1:1:2 ratio with rotary shaking and immersion for 24 h. The upper solvent layer of the sample was then subjected to centrifugation using an Eppendorf Centrifuge 5427R (Eppendorf Asia Pacific Sdn. Bhd, Selangor, Malaysia) at 2,000 rpm for 5 min. The supernatant was cleaned up using a solid-phase extraction (SPE) C-18 cartridge (Supelclean ENVI-18 SPE, wt. 500 mg, volume 6 ml) (EPA 1996). The filtrate was concentrated to dryness at 55-65°C in a ventilated oven. The dried residues were reconstituted in 1 ml of acetonitrile in 2-ml amber glass vials for UPLC analysis.
The residues were analysed using a Waters ACQUITY UPLC system (Waters Analytical Instruments Sdn. Bhd., Petaling Jaya, Malaysia), consisting of a PU-1580 pump coupled to an HG-1580-31 mixer and a photodiode array (PDA) detector with programmable excitation and emission wavelengths. Separation was achieved using an ACQUITY UPLC BEH C18 column (1.7 µm, 2.1 mm by 100 mm). The PDA detector was set at wavelengths of 220 nm for chlorpyrifos and 225 nm for cypermethrin, with an initial mobile phase of 10:90 (v/v) for cypermethrin and methanol/acetonitrile at 70:30 (v/v) for chlorpyrifos. The quantitative measurement of the insecticide residue followed the CDFA standards [14].
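The calibration and back-calculation implied by this procedure can be sketched as below; the peak areas are invented for illustration, and the conversion back to a soil concentration simply assumes the 2 g soil / 1 mL reconstitution described above.

```python
# Hedged sketch: fit the five-point standard curve (50-500 ppm) and invert
# it to quantify a residue from a UPLC peak area. Areas here are made up.
import numpy as np

std_ppm  = np.array([50.0, 100.0, 200.0, 300.0, 500.0])
std_area = np.array([1.1e4, 2.2e4, 4.3e4, 6.5e4, 10.8e4])  # assumed responses

slope, intercept = np.polyfit(std_ppm, std_area, deg=1)  # area = m*ppm + c
r2 = np.corrcoef(std_ppm, std_area)[0, 1] ** 2

def residue_mg_per_kg(sample_area, extract_ml=1.0, soil_g=2.0):
    """ppm (ug/mL) in the 1 mL extract, scaled back to mg/kg of soil."""
    extract_ppm = (sample_area - intercept) / slope
    return extract_ppm * extract_ml / soil_g

print(f"R^2 = {r2:.4f}; area 5.0e4 -> {residue_mg_per_kg(5.0e4):.1f} mg/kg soil")
```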
Results and discussion
Extensive use of chlorpyrifos and cypermethrin may cause various environmental consequences, particularly reduced soil fertility, owing to their slow natural degradation. We proposed the biodegradation of both chlorpyrifos and cypermethrin in agricultural soils, an approach also demonstrated by Abd El-Ghany and Masmali [10], who used M. anisopliae to reduce organophosphates in soil. The rationale for investigating fungi against chlorpyrifos and cypermethrin is that the fungus lacks sensitive targets for the insecticidal modes of action of these compounds [15].
The fungus was identified as described in Section 2.2, "Identification of fungus". The insecticide residues for the insecticide-free controls (soil, soil + M. anisopliae) were below 0.01 ppm, and the chlorpyrifos and cypermethrin residues in the soil + insecticides + M. anisopliae plates were each significantly lower than in the soil + insecticides control plates (P < 0.05, Table 1). This result is comparable to Ong et al. [11], in which M. anisopliae + ChCy (a commercial insecticide containing both chlorpyrifos and cypermethrin) showed significantly lower residues than the control. Our results also agree well with the study of Abd El-Ghany and Masmali [10], which showed that M. anisopliae was able to biodegrade more than 90% of malathion, an insecticide of the same category and mode of action as chlorpyrifos. Siewiera et al. [16] used Metarhizium robertsii (genus Metarhizium) to enhance tributyltin (TBT) degradation in the presence of estradiol (E2); the presence of M. robertsii significantly reduced the amount of tributyltin. Chlorpyrifos is considered a highly persistent pollutant in the environment owing to its antibacterial metabolite 3,5,6-trichloro-2-pyridinol (TCP) [17]; however, Chen et al. [9] have shown the possibility of using the fungus Cladosporium cladosporioides Hu-01 to degrade chlorpyrifos and hydrolyze its antibacterial metabolite TCP. Similarly, a study by Fang et al. [18] showed that a soil-derived Verticillium sp. successfully degraded chlorpyrifos, and Mukherjeea and Gopala [19] demonstrated that the two soil fungi Aspergillus niger and Trichoderma viride are able to degrade chlorpyrifos. Fungi may therefore be an excellent alternative to bacteria for the bioremediation of chlorpyrifos-contaminated soil. Bioremediation of insecticide-contaminated soil was also achieved by Chalamala et al. [20], who used Aspergillus niger as an alternative to other conventional techniques for the degradation of malathion-contaminated soils; their Aspergillus sp. showed a tolerance limit of 800 mg of malathion and degraded 300 mg within 24 h of incubation. The fungus may synthesize phosphotriesterases (PTEs), the main class of enzymes in the hydrolysis of organophosphate insecticides such as chlorpyrifos. Various PTEs have been identified, such as organophosphate hydrolase (OPH), methyl parathion hydrolase (MPH), organophosphorus acid anhydrolase (OPAA), diisopropylfluorophosphatase (DFP), paraoxonase 1 (PON1), and fungal carboxylesterases [21]. In conclusion, future studies could be broadened to the molecular extraction and characterization of catalytic fungal enzymes acting on insecticides such as chlorpyrifos and cypermethrin, and also other organic insecticides.
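For readers wishing to reproduce the treatment-versus-control comparison behind the reported significance (P < 0.05, Table 1), a minimal sketch with a Welch t-test on five replicates per group follows; the residue values are hypothetical, not the study's data.

```python
# Hedged sketch of the residue comparison (n = 5 plates per group);
# the numbers below are invented solely to show the test mechanics.
from scipy import stats

control_ppm   = [455.2, 448.7, 462.1, 450.9, 458.3]  # soil + insecticide
treatment_ppm = [212.5, 198.4, 225.7, 204.1, 219.0]  # soil + insecticide + fungus

t, p = stats.ttest_ind(treatment_ppm, control_ppm, equal_var=False)  # Welch's test
verdict = "significant" if p < 0.05 else "not significant"
print(f"t = {t:.2f}, P = {p:.2e} -> {verdict} at alpha = 0.05")
```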
Modulation of Reoviral Cytolysis (I): Combination Therapeutics
Patients with stage IV gastric cancer suffer from dismal outcomes, a challenge especially in many Asian populations and for which new therapeutic options are needed. To explore this issue, we used oncolytic reovirus in combination with currently used chemotherapeutic drugs (irinotecan, paclitaxel, and docetaxel) for the treatment of gastric and other gastrointestinal cancer cells in vitro and in a mouse model. Cell viability in vitro was quantified by WST-1 assays in human cancer cell lines treated with reovirus and/or chemotherapeutic agents. The expression of reovirus protein and caspase activity was determined by flow cytometry. For in vivo studies, athymic mice received intratumoral injections of reovirus in combination with irinotecan or paclitaxel, after which tumor size was monitored. In contrast to expectations, we found that reoviral oncolysis was only poorly correlated with Ras pathway activation. Even so, the combination of reovirus with chemotherapeutic agents showed synergistic cytopathic effects in vitro, plus enhanced reovirus replication and apoptosis. In vivo experiments showed that reovirus alone can reduce tumor size and that the combination of reovirus with chemotherapeutic agents enhances this effect. Thus, we find that oncolytic reovirus therapy is effective against gastric cancer. Moreover, the combination of reovirus and chemotherapeutic agents synergistically enhanced cytotoxicity in human gastric cancer cell lines in vitro and in vivo. Our data support the use of reovirus in combination with chemotherapy in further clinical trials, and highlight the need for better biomarkers for reoviral oncolytic responsiveness.
Introduction
Nearly one million cases of gastric cancer are diagnosed worldwide each year, with the highest incidence occurring in eastern Asia, parts of South America and eastern Europe. Gastric cancer is the second most common cause of cancer-related death worldwide, with around 700,000 deaths a year. In the US, more than 20,000 new cases of gastric cancer and 10,000 deaths occur annually [1]. Survival of patients with gastric cancer is substantially worse than that of patients with most other solid malignancies. The only current treatment that offers potential cure is complete resection of the tumor [2]. However, after complete resection alone, patients who are shown to have extensive lymph node involvement on surgical pathology specimens have a 5-year survival of only 7% [3]. Recently, the use of chemotherapy such as cisplatin, irinotecan, paclitaxel, or docetaxel for gastric cancer has yielded some improvements in outcome, but the beneficial effects are still modest and there is a real need for new therapeutic strategies [4]. It is possible that oncolytic virus therapy may be useful in this context, and perhaps especially so for viruses that proliferate in the gastrointestinal tract.
Human reovirus is a ubiquitous, non-enveloped virus containing 10 segments of double-stranded RNA (dsRNA) as its genome, with infections that are generally mild, restricted to the upper respiratory and gastrointestinal tracts, and often asymptomatic [5].
Reovirus has innate oncolytic potential in a wide range of murine and human tumor cells, and this is at least partly dependent on the transformed state of the cell [6,7]. The precise mechanism of reoviral tropism and selective oncolysis in malignant cells is yet to be fully determined. In normal cells, the presence of an intact double-stranded RNA-activated protein kinase system limits the establishment of a productive reoviral infection. In malignant cells with an activated Ras pathway, whether activated directly through Ras mutation or indirectly via upregulation of epidermal growth factor receptor (EGFR) signaling or of other signaling components [5,8-11], it has been suggested that this cellular antiviral response mechanism may be perturbed, viral replication enhanced, and subsequent lysis of the host cancer cell facilitated. The modulation of other pathways that regulate viral attachment, penetration, uncoating, assembly, and propagation further influences the efficiency of viral oncolysis (see below).
Thus, reovirus is considered a promising candidate for oncolytic virus therapy and Phase II and III clinical studies are underway (oncolyticsbiotech.com (accessed on 5 June 2023)) for multiple tumor categories, and an orphan drug designation has recently been approved by the FDA for the use of reovirus in the treatment of gastric and certain other cancers [12]. While monotherapy with oncolytic reovirus has been explored, most current efforts use reovirus or other oncolytic viruses such as herpes virus in combination with chemotherapy or radiotherapy in preclinical or clinical studies to potentially increase treatment efficacy for various malignancies [13-18]. For example, several studies have suggested that reovirus, in combination with paclitaxel against lung cancer and with cisplatin against melanoma, may have synergistic antitumoral effects [19,20]. However, so far there have been few reports of oncolytic reovirus and combination therapy against gastric cancer in preclinical animal models [21]. Here we show that oncolytic reovirus therapy is effective against gastric cancer xenografts in a mouse model, and that the combination of reovirus and the chemotherapeutic agents paclitaxel and irinotecan has clear synergistic benefits.
Cells and Virus
Human gastric cancer cell lines (KATOIII, SNU16, AGS, NCI-N87, Hs746T) and human colon cancer cell lines (HCT116, HT-29) were obtained from the American Type Culture Collection (ATCC; Rockville, MD, USA). Other human gastric cancer cell lines (MKN1, MKN7, MKN45, MKN74, HGC-27, GCIY) were obtained from the Cell Bank, RIKEN BioResource Center (Kyoto, Japan). Another human gastric cancer cell line (FU97) was obtained from the Health Science Research Resources Bank (Japan), and ISt-1 was a gift from Dr. Masanori Terashima. All cell lines were tested and free of mycoplasma contamination. The Dearing strain of reovirus serotype 3 (a gift from P. Lee, Dalhousie University, Halifax, NS, Canada) was propagated in suspension cultures of L929 cells (from ATCC) and purified according to previously established methods [8,9] with the exception that β-mercaptoethanol was omitted from the extraction buffer. Viral titers were also established using L929 cells [9].
Chemotherapeutic Agents
Irinotecan (Mayne Pharma, Raleigh, NC, USA), paclitaxel (Hospira, Canada), and docetaxel (Taxotere®; Sanofi Aventis, Bridgewater, NJ, USA) were kindly provided by Dr. Aru Narendran, University of Calgary. These agents were diluted with the respective medium just before use for in vitro studies and with phosphate-buffered saline (PBS) for in vivo studies.
Cell Viability Assay
All cells were seeded in 96-well plates at a density of 2 × 10³ cells/well with appropriate medium. GC cells were mock infected or infected with reovirus at an MOI of 1 or 10 and then treated with chemotherapeutic agents. Experiments were repeated three times and results presented as mean ± standard deviation. Numbers of viable cells were evaluated by a colorimetric WST-1 assay at 3, 6, and 9 days post treatment. WST-1 (Roche), a tetrazolium salt, is cleaved to a colored formazan product by enzymes in metabolically active cells, and the reaction is quantitated with an automatic plate reader at 450 nm. The potential synergistic effect of combining reovirus with chemotherapy on cell proliferation was assessed by calculating combination index (CI) values using the method of Zhao et al. [22]. The CI provides a quantitative measure of the degree of interaction between two or more agents. A CI of 1 denotes an additive interaction, >1 represents antagonism, and <1 indicates synergy, with lower values indicating a higher degree of augmentation of the effect of the two agents working together.
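To make the CI arithmetic concrete, here is a hedged sketch in the Chou-Talalay style; note that Zhao et al. [22], the method actually used in this study, may differ in detail, and all dose-response parameters below are invented for illustration.

```python
# CI = d1/Dx1 + d2/Dx2, where Dx1 and Dx2 are the single-agent doses giving
# the same effect the combination (d1, d2) achieves; <1 synergy, 1 additive,
# >1 antagonism. Single-agent curves follow the median-effect equation.

def dose_for_effect(fa, Dm, m):
    """Median-effect equation fa/(1-fa) = (D/Dm)^m, solved for D."""
    return Dm * (fa / (1.0 - fa)) ** (1.0 / m)

# Hypothetical single-agent fits (Dm = dose for 50% effect, m = slope):
REO_DM, REO_M = 8.0, 1.2   # reovirus, in MOI units (assumed)
PAC_DM, PAC_M = 2.5, 1.0   # paclitaxel, in nM (assumed)

# Suppose reovirus at MOI 10 combined with 1 nM paclitaxel kills 70% of cells:
fa = 0.70
ci = 10.0 / dose_for_effect(fa, REO_DM, REO_M) + 1.0 / dose_for_effect(fa, PAC_DM, PAC_M)
print(f"CI = {ci:.2f} -> {'synergy' if ci < 1 else 'additivity or antagonism'}")
```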
Ras Activation Assay
First, 85-90% confluent cells grown in 150 mm dishes were lysed with 1× Mg²⁺ lysis buffer (Ras activation assay kit; Millipore). To determine the level of activated Ras (Ras-GTP) in these cells, 1 mg of cell lysate was incubated with 10 µL of Raf-1 Ras-binding domain agarose conjugate at 4°C for 45 min. The beads were then collected, washed, resuspended in 2× Laemmli buffer, and boiled for 5 min. This was then followed by SDS-PAGE and Western blotting with an anti-Ras antibody (clone RAS 10) according to the manufacturer's instructions. To determine the level of total Ras, cell lysates were directly subjected to SDS-PAGE and Western blotting with anti-Ras antibody. The membrane was incubated with horseradish peroxidase-conjugated goat antimouse antibody, and specific bands were detected with an ECL system (GE Healthcare).
FACS Analysis
After treatment with reovirus and/or chemotherapeutic agents, general caspase activity was assessed with the carboxyfluorescein caspase detection kit (Apologix; cat. no. FAM100-2; Cell Technology, Inc., Newport News, VA, USA), which is based on carboxyfluorescein-labeled fluoromethyl ketone (FMK)-peptide inhibitors of caspases. These inhibitors are cell permeable and noncytotoxic. Once inside the cell, the fluorescent inhibitor binds covalently to the active caspase. Primary rabbit antireovirus polyclonal antibody was made in our lab and detected by binding to PE goat anti-rabbit IgG (Cedarlane Laboratories Ltd., Burlington, ON, Canada). Fixed and permeabilized cells were analyzed by flow cytometry.
Subcutaneous Tumor Xenograft Model in Nude Mice
Six-week-old male CD-1 nude mice, purchased from Charles River, were kept under pathogen-free conditions according to a protocol approved by the University of Calgary Animal Care Committee. MKN45 cells (2 × 10⁶) were implanted subcutaneously in the left flanks of mice under anesthesia. When the tumors reached a diameter of ~5 mm, the mice were randomly divided into four groups (5 mice/group), and a 50 µL solution containing reovirus (1 × 10⁸ PFU/animal) or PBS was injected into the tumor (any excess injected fluid was distributed in surrounding tissues). Simultaneously, each mouse received an intraperitoneal injection of 100 µL paclitaxel at a dose of 10 mg/kg or irinotecan at a dose of 5 mg/kg. The tumor size was measured by external caliper every 2 or 3 days. Tumor volume was calculated using the following formula: tumor volume (mm³) = a × b² × 0.5, where a is the longest diameter, b is the shortest diameter, and 0.5 is a constant to calculate the volume of an ellipsoid. Statistical differences among groups were assessed using the Mann-Whitney U test.
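A minimal sketch of the volume formula and the group comparison follows, with hypothetical caliper readings (the real measurements are in Figure 4).

```python
# Tumor volume per the text: volume (mm^3) = a * b^2 * 0.5, with a the
# longest and b the shortest diameter; groups compared by Mann-Whitney U.
from scipy import stats

def tumor_volume_mm3(a_mm, b_mm):
    """Ellipsoid approximation used in the study."""
    return a_mm * b_mm ** 2 * 0.5

# Hypothetical day-28 calipers (mm) for control vs reovirus + paclitaxel:
control = [tumor_volume_mm3(a, b) for a, b in [(14, 11), (15, 12), (13, 11), (16, 12), (15, 11)]]
combo   = [tumor_volume_mm3(a, b) for a, b in [(8, 6), (7, 6), (9, 7), (8, 5), (7, 5)]]

u, p = stats.mannwhitneyu(combo, control, alternative="two-sided")
print(f"U = {u}, P = {p:.4f}")
```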
Immunodetection of Reoviral Replication
For histological analysis, tumors were fixed in 10% neutral buffered formalin, embedded in paraffin, and sectioned. Sections were then immersed in xylene, followed by rehydration in decreasing concentrations of ethanol. Endogenous peroxidase was inactivated in 3% hydrogen peroxide in PBS for 15 min. Sections were then incubated in primary rabbit antireovirus polyclonal antibody (1:1000 in PBS with 10% goat serum and 0.3% Triton X-100) partially purified by ammonium sulfate precipitation. Slides were washed in PBS and then subjected to avidin-biotin horseradish peroxidase staining as recommended by the manufacturer (Vector, Burlington, ON, Canada) and counterstained in hematoxylin.
Reovirus Cytotoxicity in Gastric Cancer Cell Lines
We first surveyed the cytotoxicity of reovirus against gastric cancer using the WST-1 assay in 14 different human gastric cancer cell lines. After 72 h exposure, reovirus alone showed moderate cytopathic effects (relative cell viability was between 0.2 and 0.8) in six gastric cancer cell lines (AGS, MKN1, NCI-N87, Hs746T, FU97, ISt-1) and low cytopathic effects (relative cell viability was more than 0.8) in seven gastric cancer cell lines (HGC-27, KATOIII, MKN7, MKN45, MKN74, NUGC4, SNU16). Only GCIY cells showed a high cytopathic effect (relative viability was less than 0.2; Figure 1a).
To determine whether the variable cellular responses to reovirus might be explained by differing levels of Ras activation, we then measured the levels of GTP-Ras in the various gastric cancer cell lines. Activated Ras was detectable in most gastric cancer cell lines (Figure 1b) when compared with a negative control of normal human fibroblasts. However, we observed no obvious correlation between Ras activity levels and cytolytic effects: some gastric cancer cell lines with prominent Ras activation were relatively resistant to reovirus (e.g., AGS), whereas the most responsive cell line (GCIY) displayed only very modest activation of GTP-Ras. While variable cytolytic responses to reovirus may reflect multiple cellular features (abundance of receptor, efficiency of viral uncoating, etc.), it is clear that in these gastric cancer cell lines, variables beyond simple Ras pathway activation must be modulating cellular responses to reovirus infection.
Reovirus Cytotoxicity with Chemotherapeutic Agents in Gastric Cancer Cell Lines
We then chose four different gastric cancer cell lines (GCIY, AGS, NCI-N87, and MKN45, showing strong, medium, or minimal responses to reovirus) to examine cytotoxicity in more detail. We evaluated cell viability with combinations of reovirus and chemotherapeutic agents using WST-1 assays at days 3, 6, and 9 after treatment. We chose irinotecan, paclitaxel, and docetaxel as combination chemotherapeutic agents, as these are already commonly used in treatments of human patients. Although the various cell lines showed modest differences in responses to the chemotherapeutic drugs alone, the various agents showed clear enhancement of cell killing when supplemented with reovirus in MKN45 and AGS cells (Figure 2a,b).
Reovirus alone killed GCIY very well, so we could not evaluate synergy with chemotherapy in these cells, and NCI-N87 cells showed no enhancement in combination experiments (Supplementary Figure S1a,b). All Combination Indices for MKN45 and AGS cells were less than 1, which therefore showed synergy of reovirus with chemotherapy ( Figure 2d); enhanced killing was also revealed in photomicrographs of these two cell populations (Figure 2c).
Combinations of Reovirus and Chemotherapeutic Agents Enhanced Reovirus Replication and Apoptosis
We then tested whether the administration of chemotherapeutic agents might enhance or diminish viral activity, while promoting cell death. For this purpose, we evaluated reoviral protein expression and caspase activity using FACS analysis of MKN45 and AGS cells treated in eight groups (control, reovirus alone, irinotecan alone, paclitaxel alone, docetaxel alone, reovirus and irinotecan, reovirus and paclitaxel, and reovirus and docetaxel). Nearly every combination of reovirus and chemotherapeutic agents enhanced reoviral protein synthesis in these cells, which as we have shown previously leads to the elevated release of infectious viral particles [11]. Reovirus or chemotherapy alone enhanced caspase activity to some extent, and this was enhanced further when reovirus and chemotherapy were combined (Figure 3).
In GCIY and NCI-N87 cells, reovirus alone enhanced caspase activity in both cell types (Supplementary Figure S2b). These results show that the combination of reovirus and chemotherapeutic agents can enhance reovirus protein synthesis in some cell lines and induces effects leading to cell death via apoptosis or possibly pyroptosis.
Figure 2. (a) Cells were infected with 1 or 10 MOI of reovirus and exposed to chemotherapeutic agents at the indicated concentrations. Cell viability was assessed by WST-1 assay at 6 days after treatment. Experiments were repeated at least three times and results presented as means ± standard deviation (SD). (b) Time course of the combined effect of reovirus plus chemotherapeutic agents on gastric cancer cell lines. Cells were treated with 10 MOI of reovirus, chemotherapeutic agent (1 µM irinotecan, 1 nM paclitaxel, 1 nM docetaxel), or a combination of both, and cell killing efficacy was evaluated by WST-1 assay over 9 days. (c) Cytopathic effects of reovirus with chemotherapeutic agents. MKN45 and AGS were treated with reovirus, chemotherapeutic agents, or both (1 µM irinotecan, 1 nM paclitaxel, 1 nM docetaxel) according to the schedule described above, and photographed 5 days after treatment. ×100 magnification. (d) Combination indices (CI) were calculated [22] for each combination after 6 days of treatment, when differences in effect were maximal. CI values are the means of three experiments, with levels below 0.9 indicating substantial synergy, whereas a value of 1 denotes an additive effect and values above 1 indicate antagonism between the agents.
Combined Reovirus and Chemotherapeutic Agents Enhanced Anti-Tumor Effects in a Murine Gastric Cancer Xenograft Model
We then assessed the therapeutic efficacy of reovirus in combination with chemotherapeutic agents against gastric cancer cells in vivo. We made two types of gastric cancer xenograft models, with CD-1 nude mice bearing either MKN45- or GCIY-based tumors. We then treated the MKN45 model with combination therapy in four groups (control, reovirus alone, chemotherapeutic agent (irinotecan or paclitaxel) alone, and reovirus plus chemotherapeutic agent); the GCIY model was treated with monotherapy in two groups (control and reovirus alone). Administration of reovirus, irinotecan, or paclitaxel resulted in significant tumor growth suppression compared with the untreated control at 28 days after initiation of treatment. Importantly, the combination of reovirus plus irinotecan or reovirus plus paclitaxel produced a more profound inhibition of tumor growth compared with mice treated with either modality alone or the control (Figure 4A-D).
In the GCIY model, reovirus monotherapy was effective and sufficient when compared with the untreated control (Supplementary Figure S3), similar to the effect observed in vitro. Extensive viral distribution in the MKN45 tumors was confirmed by immunohistochemical staining of reovirus protein (Figure 4E). There were no significant differences in the mean body weights among experimental groups, and no morbidity was attributable to therapy with reovirus, paclitaxel, irinotecan, or both in combination.
Discussion
Many oncolytic viruses have been developed for application in multiple kinds of cancer. However, there are relatively few reports about the potential use of oncolytic viruses with gastric cancer. For example, adenoviral vectors have been used for experimental treatment and gene therapy of various cancers because of their high transduction efficiency. However, adenoviral infectivity of gastrointestinal cancer cells is generally poor due to the limited expression of the coxsackie-adenovirus receptor [23]. Thus, we were interested in exploring the use of reovirus in gastric cancer, as it naturally replicates in the gastrointestinal tract and its cell surface receptor (JAM-A) is abundant.
One of our first questions was therefore to determine the level of activation of Ras signaling in our gastric cancer cells, as this has been reported in other systems to correlate with susceptibility to reoviral oncolysis [5,8]. Although oncogenic mutations of Ras are infrequent (2-7%) in gastric cancer [24], we nevertheless found evidence for variably activated GTP-Ras in most of the gastric cancer cell lines we used in this experiment (Figure 1b), possibly due to upstream activation of receptors such as EGFR, which is often mutated in gastric cancer. It is therefore plausible that reovirus can be used as a gastric cancer therapy, even though we were surprised by the poor correlation between the levels of activated GTP-Ras and cytolytic susceptibility to reovirus (compare Figure 1a,b). The weakness of this association has also been reported by others [25], and it is likely that in addition to the activation status of Ras-associated pathways [5,26], there are other molecular determinants of reovirus sensitivity, such as the cellular abundance of putative reovirus receptors/coreceptors [27-29], intracellular virion uncoating or assembly processes [30,31], and viral release and propagation, all of which can affect reovirus infection and oncolytic efficiency. In addition, as we propose in our accompanying manuscript, it is possible that the degree of cellular stemness may vary among different cancers, and this may confer variable reoviral responsiveness.
The cytopathic effect of reovirus we observed with our gastric cancer cell lines was relatively modest when compared with other gastrointestinal cancers (colon, esophageal, liver, pancreas; in preparation). GCIY cells were very susceptible to reovirus, but most gastric cancer cell lines showed moderate or low cytopathic effects in vitro (Figure 1a). Thus, from this evidence, it would be difficult to justify the use of reovirus as monotherapy against gastric cancer. However, we wished to consider whether some combination of reovirus and chemotherapy might show synergistic effects in gastric cancer and perhaps expand the range of gastric cancers in which benefits could be achieved.
Many preclinical studies have provided experimental evidence for effective killing of cancer cells by oncolytic viruses [32-36]. In animal models, however, established xenograft tumors are rarely eliminated despite the existence of persistently high viral titers within the tumor, and it is possible that total elimination of solid tumors may require higher doses of oncolytic viruses that might prove toxic or lethal. In a report of a clinical trial of ONYX-015 adenovirus, no clinical benefit was noted in the majority of patients, despite encouraging biological activity [37]. Tumor progression was rapid in most patients, even though substantial necrosis was noted in the tumors after treatment [38,39]. Thus, we opted to evaluate chemovirotherapy, consisting of oncolytic virotherapy combined with low doses of a chemotherapeutic agent. We reasoned that sublethal doses of chemotherapy might damage cancer cell pathways and reduce, for example, innate anti-viral responses, thereby enhancing viral oncolysis while reducing the likelihood of adverse effects [40]. In this study, we chose to explore the co-administration of irinotecan, paclitaxel, and docetaxel because these are often used for second-line chemotherapy in human patients and are therefore plausible choices for future clinical trials employing chemovirotherapy.
Indeed, several chemotherapy/oncolytic virus combinations have already been evaluated and have been shown to result in enhanced antitumor effects without compromising safety. For example, intratumoral Onyx-015 adenovirus combined with systemic cisplatin and 5-fluorouracil showed enhanced clinical efficacy when compared with chemotherapy alone [41]. E1A-expressing adenoviral E3B mutants combined with cisplatin and paclitaxel [42] also showed synergistic activity in vitro and in vivo. Combinations using oncolytic herpesviruses, such as G207 with cisplatin and HSV-1716 with mitomycin C, likewise resulted in synergistic activity in vitro and in vivo [43-45]. Even though the precise biochemical mechanisms by which these synergies are achieved remain unknown, their potential use in gastric cancer may provide new options for more successful treatment.
In the experiments described here, we found that irinotecan, paclitaxel, and docetaxel were all able to enhance reovirus replicative activity (Figure 3) in MKN45 and AGS cells. It is possible that these chemotherapeutic agents act by repressing the innate immune responses of cells, thereby enhancing virus replication. In any case, it appears that the elevated viral replication is linked to greater caspase activation and thus an acceleration in programmed cell death.
In our experiments with xenograft tumors in vivo, we did not expect reovirus alone to be very effective in repressing growth in the MKN45 gastric cancer model, simply because reovirus did not kill MKN45 cells very well in vitro. However, we found that even reovirus by itself was able to strongly repress (though not completely eliminate) growth of these tumor cells in vivo. Nevertheless, both combinations, reovirus plus paclitaxel and reovirus plus irinotecan, showed clear enhancement of cell killing in vivo (Figure 4), consistent with our results in vitro.
Thus, we conclude that our in vitro and in vivo data both encourage further studies of reovirus plus chemotherapy (and especially with irinotecan and paclitaxel, which were superior to docetaxel) as a viable therapeutic modality for gastric cancer, even though in some cases (Figure S3 and reference [46]), reovirus monotherapy may be partly or fully effective. Our results were obtained in immunodeficient mice, and thus studies with immunocompetent hosts and syngeneic tumors [47] could show further enhancement in tumor cell killing.
In other work [48], we found that trastuzumab is able to enhance reoviral oncolysis in gastric cancers that overexpress the Her2/neu oncogene. Thus, we argue that there is a clear cellular rationale for combining chemotherapeutic agents and oncolytic reovirus for the treatment of this disease, even though the precise mechanisms underlying synergy between reoviral oncolysis and specific chemotherapeutic drugs remain unclear. We also note the potential for different routes of viral administration, such as intraperitoneal or intravascular, or, especially for tumors of the gastrointestinal tract, direct administration orally or, as we have shown, anally [49]. As more is learned about the molecular pathways by which oncolytic reovirus kills many cancer cell types while sparing normal cells and tissues, the rational combination of more potent agents or immunomodulators ([50-54]; also Kubota et al., submitted), plus the potential for engineered viruses [55] with greater anticancer action and reduced side effects, will become clearer. Finally, we return to the observation that we (Figure 1) and others have made, which is that the correlation between Ras activation and reoviral responsiveness is unexpectedly poor. Transfection of activated Ras genes into normal cells can indeed result in both cellular transformation and reoviral susceptibility, as originally reported by Lee's group [26], but many tumor cells with activated Ras may display reoviral resistance, whereas in other cases, those with low Ras pathway activity may still be sensitive to the virus. Ras pathway activation alone is therefore unlikely to be a reliable biomarker for reoviral responsiveness, which, as we have argued, is subject to multiple cellular constraints [56].
In our accompanying manuscript (Bourhill et al. [57]), we propose that a previously unappreciated variable, that of cellular stemness in cancer (or embryonic) cells, may be a novel important factor in predicting reoviral responsiveness. It is unlikely that stemness alone will be a definitive factor in responses to reovirus, but perhaps together with a panel of other relevant variables we may eventually be able to identify which patients will benefit most from therapy that includes this or other viruses. These variables may in turn provide novel targets for therapeutic intervention for better modulation of reoviral oncolysis, as we show here in part with chemotherapeutic agents and gastric cancers. Further work will be required to evaluate more fully the utility of these proposed relationships.
Conclusions
Reovirus shows synergistic benefit when used in combination with paclitaxel or irinotecan in the treatment of gastric cancer in a murine xenograft model system. This finding may facilitate the development of effective therapeutic strategies for treating gastric cancer in patients.
Survivability of Salmonella and Shiga-toxigenic Escherichia coli (STEC) O157 in microwave heated ready-to-eat (RTE) foods
1 Department of Food Science, Faculty of Food Science and Technology, Universiti Putra Malaysia, Selangor, Malaysia 2 Department of Microbiology and Biotechnology, Faculty of Science, Federal University Dutse, Jigawa State, Nigeria 3 Department of Food Technology, Faculty of Food Science and Technology, Universiti Putra Malaysia, Selangor, Malaysia 4 Food Safety and Food Integrity, Institute of Tropical Agriculture and Food Security, Universiti Putra Malaysia, Selangor, Malaysia
Introduction
Dr. Percy LeBaron Spencer's discovery, while researching the magnetron, that microwaves can cook food faster than conventional ovens was a breakthrough in food technology. Today, the microwave oven is an irreplaceable electronic appliance in almost every household owing to its high heating rates, significant reduction in cooking time, more uniform heating, safe handling, ease of operation, low maintenance and energy efficiency (Zhang et al., 2006; Salazar-Gonzalez et al., 2012; Puligundia et al., 2013). Microwaves are electromagnetic radiation with wavelengths between 1 and 0.001 m (Decareau, 1985), lying between the infrared and radio-frequency bands, with frequencies ranging from 300 MHz to 300 GHz. 915 MHz and 2.45 GHz are the frequencies used for domestic, industrial, scientific and medical applications (Meredith, 1998; Hoogenboom et al., 2009).
Microwave heating is known as dielectric heating. Microwaves generate heat by transforming alternating electromagnetic field energy into thermal energy through the polar molecules of a material, particularly polar water molecules and charged ions in food (Vadivambal and Jayas, 2010). Heat is created internally by the polar molecules after microwave absorption, producing a volumetric heating effect and a faster heating rate (Vadivambal and Jayas, 2010) that is not achievable by any other conventional means (Fu, 2006). Conventional heating occurs by convection, whereby heat is transferred from the surface to the interior of the food and requires more time. The absorption of microwaves causes the polar molecules to orientate themselves according to the electromagnetic radiation, breaking the hydrogen bonds associated with the water molecules and generating molecular friction within. Moreover, ions of dissolved salts in food migrate towards the oppositely charged regions during the interaction with the electromagnetic field and produce heat (Decareau and Peterson, 1986; Oliviera and Franca, 2002). With millions of molecules in food, this reaction occurs millions of times, rapidly generating heat.
However, efficient microwave heating has some major drawbacks. Ho and Yam (1992) and Campanone and Zaritzky (2005) reported the existence of hot-spot zones in food, depending on its geometry, which suggested fluctuations in temperature distribution. This was first confirmed by Fakhouri and Ramaswamy (1993), who reported non-uniform temperature distribution in microwave heated, commercially refrigerated and frozen foods. Subsequent reports by other researchers prominently confirmed the non-uniform temperature distribution of microwave heating, affected by the thickness and dielectric properties of the food (Fakhouri and Ramaswamy, 1993; Mullin and Bows, 1993; Ryynanen and Ohlsson, 1996; Manickavasagan et al., 2006; Geedipalli et al., 2007; Gunasekaran and Yang, 2007). This gives rise to further problems such as poor end quality, microbial safety concerns and overheating (Vadivambal and Jayas, 2010).
Microbial safety concerns in microwave heated foods have received little attention, as few proper guidelines have been established. Despite this, outbreaks involving microwave heated foods have surfaced repeatedly from 1992 until as recently as 2013. In 1992, a microwave heated rice salad served at a buffet caused an outbreak of S. enterica serovar Enteritidis; Evans et al. (1995) reported that the source of contamination was the food handlers. In 1994, S. enterica serovar Typhimurium caused an outbreak after the consumption of contaminated leftover roast pork that had been reheated in a microwave oven (Gessner and Beller, 1994). Since then, several outbreaks have followed, mostly related to microwaveable frozen food products associated with Salmonella. Smith et al. (2008) reported a salmonellosis outbreak from 1998 to 2006 in Minnesota due to the consumption of microwaved stuffed chicken products. In 2007, Salmonella serotype I contamination of microwaved frozen pot pies caused a multistate outbreak in the USA with over 401 outbreak cases (Meyer et al., 2008). In 2010, cheesy chicken and rice frozen meals contaminated with S. enterica serotype Chester and cooked in microwave ovens caused a multistate outbreak in the USA (Rounds et al., 2013). In 2013, an outbreak of Escherichia coli O121 associated with the consumption of microwave heated Farm Rich products sickened thirty-five people in nineteen states in the US (Larsen, 2013).
Based on the reported outbreaks, there is a rising microbiological risk in microwave heated foods. As the microwave oven is a commonly used electronic appliance in every household, there is a need to address these microbiological concerns. This prompted us to report on the survivability of pathogens, particularly Salmonella and Shiga-toxigenic Escherichia coli (STEC) O157, in microwave heated ready-to-eat (RTE) food. Most reported outbreaks stemmed from consumers' misconceptions about microwaves in food processing and a lack of knowledge about the microwave oven. Through this report, we hope to raise awareness of the risks of microwave heating. The risk factors associated with the survivability of the pathogens are also addressed.
Sampling
The sample size was estimated based on the formula (Daniel, 1999):

n = Z²P(1 − P)/d²   (1)

where n = sample size; Z = Z statistic for a level of confidence (1.96 at a 95% confidence interval); P = expected prevalence or proportion; and d = precision. As there were no available prevalence data for Salmonella and E. coli, a pilot test study (30 samples) was applied to obtain crude P and d values (Daniel, 1999; Pourhoseingholi et al., 2013). The pilot test study yielded prevalences of 0.067 and 0.3 for Salmonella and E. coli, respectively. According to Naing et al. (2006), the appropriate d value is determined by the prevalence: if the prevalence is below 0.1, it is recommended that d be half of P; if the prevalence is between 0.1 and 0.9, d is set at 0.05. Hence, the estimated sample sizes using Equation (1) were 215 for Salmonella and 323 for E. coli. A total of 329 samples were analysed, and the types of sample and sample sizes are tabulated in Table 4. Based on the sample size calculation, the number of samples collected followed the estimated sample size for E. coli, as the study was carried out concurrently for both foodborne pathogens, and additional samples were collected to allow for possible errors.
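A quick numerical cross-check of these estimates follows (a minimal sketch; note that the Salmonella figure evaluates to 214 before any upstream rounding of P or d, slightly below the 215 quoted above).

```python
# Sample size n = Z^2 * P * (1 - P) / d^2 (Daniel, 1999), rounded up.
import math

def sample_size(p, d, z=1.96):
    return math.ceil(z ** 2 * p * (1.0 - p) / d ** 2)

# Salmonella: pilot P = 0.067, d = P/2 since P < 0.1 (Naing et al., 2006)
print(sample_size(0.067, 0.067 / 2))  # ~214; reported as 215 in the text
# E. coli: pilot P = 0.3, d = 0.05 since 0.1 <= P <= 0.9
print(sample_size(0.3, 0.05))         # 323, matching the text
```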
RTE foods were purchased from convenience stores around the Wilayah Persekutuan Kuala Lumpur and Selangor regions. The RTE foods purchased were either ready packed in microwavable containers or packed in their original packaging. Samples packed in their original packaging were aseptically transferred into UV-sterilized microwavable containers (172 × 120 × 57 mm). Samples were then subjected to microwave heating using a domestic microwave oven [Elba, Malaysia] at 700 W, 2.45 GHz for 1 min.
According to New, Thung, Premarathne et al. (2017), most respondents in a microwave oven safety survey indicated that they reheated their food for 1 min; the microwave heating time was therefore selected based on the respondents' preference. After microwave heating, samples were allowed to stand for 5 min, following the recommended procedures of the Microwave Oven Food Safety guidance of the United States Food and Drug Administration (US FDA)/Food Safety and Inspection Service (FSIS) (2011). Immediately after standing, the center temperature of the samples was recorded using a temperature probe. Samples were mixed to homogenize before a 10 g portion was aseptically weighed into a stomacher bag. Then, 90 mL of Buffered Peptone Water (BPW) [Merck, Germany] was added and the mixture was pummelled for 1 min. The stomacher bag containing the homogenized mixture was then loosely sealed and incubated at 37°C for 6 h.
Most Probable Number-Polymerase Chain Reaction (MPN-PCR)
The 6 h incubated homogenized mixture was then subjected to the three-tube MPN method according to the United States Food and Drug Administration Bacteriological Analytical Manual (BAM) by Blodgett (2010), with modification. Briefly, the homogenized mixture was diluted ten-fold three consecutive times. For each dilution, 1 mL was aliquoted into each of three tubes of 9 mL BPW (MPN tubes). The MPN tubes were then incubated at 37°C for 18 to 24 h. Turbid MPN tubes indicated growth of the microorganisms and proceeded to the isolation of microorganisms. All MPN tubes were subjected to DNA template preparation for PCR analysis.
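For orientation, the maximum-likelihood estimate underlying a three-tube MPN series can be sketched as below; the positive-tube pattern is illustrative, and in practice the BAM MPN tables, not this toy solver, should be consulted.

```python
# Hedged sketch of the MPN maximum-likelihood estimate for a three-tube
# series: solve d(logL)/d(lambda) = 0 for the density lambda (per g).
import math
from scipy.optimize import brentq

def mpn_per_g(positives, tubes, grams):
    """positives[i] of tubes[i] tubes positive at grams[i] sample per tube."""
    def score(lam):
        s = 0.0
        for p, n, g in zip(positives, tubes, grams):
            if p:
                s += p * g * math.exp(-lam * g) / (1.0 - math.exp(-lam * g))
            s -= (n - p) * g
        return s
    return brentq(score, 1e-6, 1e6)

# Example pattern 3-1-0 at 0.1, 0.01 and 0.001 g per tube (~43 MPN/g):
print(f"{mpn_per_g([3, 1, 0], [3, 3, 3], [0.1, 0.01, 0.001]):.0f} MPN/g")
```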
DNA template preparation
DNA template preparation was performed using the boiling method with modifications, as described by Tang et al. (2009). Briefly, 1 mL was transferred from each MPN tube into a 1.5 mL microcentrifuge tube. The microcentrifuge tubes were centrifuged at 12,000 rpm for 3 min and the supernatant discarded. Then 500 µL of ultra-pure water was added to re-suspend the pellet. The tubes were vortexed vigorously to dissolve the pellet and boiled for 10 min at 100 ± 2°C in a dry cell bath. Immediately after boiling, the tubes were transferred to a −20°C freezer until further use.
PCR analysis
DNA templates were thawed and centrifuged with a short spin (approximately 30 s) at 12,000 rpm before being subjected to PCR analysis. The same DNA template was used for the triplex PCR analysis for Salmonella and the hexaplex PCR analysis for E. coli O157:H7.
Triplex PCR analysis for Salmonella was carried out by mixing 5 µL of DNA template with the following concentrations of PCR reagents: 1.5× PCR buffer, 2.0 mM MgCl₂, 0.2 mM dNTP mix, 0.2 µM ENT primers, 0.1 µM each of the Typh and ompC primers, and 1.5 U of Taq polymerase. The final volume of 25 µL was achieved by topping up with sterile distilled water. The PCR conditions for the triplex analysis for Salmonella were optimized as follows: pre-denaturation at 95°C for 3 min; 35 cycles of denaturation at 95°C for 1 min, annealing at 56°C for 1 min, and extension at 72°C for 1 min; and final extension at 72°C for 7 min before holding at 4°C. Hexaplex PCR analysis for E. coli O157:H7 was carried out by mixing 2 µL of DNA template with the following concentrations of PCR reagents: 1.5× PCR buffer, 4.0 mM MgCl₂, 0.4 mM dNTP mix, 0.2 µM primers, and 2.0 U of Taq polymerase. The PCR tubes were subjected to pre-denaturation at 94°C for 5 min; 35 cycles of denaturation at 94°C for 30 s, annealing at 57°C for 30 s, and extension at 72°C for 1 min and 15 s; and final extension at 72°C for 7 min before holding at 4°C. The primers used in the PCR analyses are shown in Table 1 and Table 2 for Salmonella and E. coli O157:H7, respectively. All PCR reagents were purchased from Promega (USA) except the primers, which were synthesized by Sigma-Aldrich, Malaysia. Amplicons were separated by 1.25% agarose gel electrophoresis stained with 0.5 µg/mL ethidium bromide (EtBr), at 60 V for 1 h and 15 min for E. coli O157:H7 and at 90 V for 30 min for Salmonella. Gels were visualized under a Gel Documentation System (Syngene, USA).
Risk assessment
The exposure pathway for direct consumption of RTE food potentially contaminated with pathogens that survived microwave heating was modelled as shown in Figure 1. It was assumed that consumers are exposed to the surviving pathogens through consumption of the RTE food. Separate simulations were performed using @RISK® Version 7.5 (Palisade, USA) with 100,000 iterations to estimate the probability of illness per serving of each pathogen for each type of RTE food sample. Information on serving sizes was obtained from the report on Food Consumption Statistics by the Ministry of Health, Malaysia (2013). Beta-Poisson and exponential dose-response models were used for Salmonella and STEC O157, respectively. The alpha and beta values of the Beta-Poisson model for Salmonella were adopted from the risk assessment study on Salmonella in Eggs and Broiler Chickens by the World Health Organization (WHO) (2002), while the exponential parameter for STEC O157 was adopted from Cornick and Helgerson (2004), who studied the dose of Enterohaemorrhagic E. coli (EHEC) O157:H7 in pigs. Parameters and distributions used in the simulation model are described in Table 3.
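The two dose-response forms named above can be sketched as follows; the Beta-Poisson parameters shown are the commonly cited values from the WHO (2002) assessment, while the exponential r and the ingested-dose distribution are placeholders standing in for the study's Table 3 inputs, not values asserted here.

```python
# Hedged sketch of per-serving illness probability, mirroring the
# 100,000-iteration Monte Carlo described above. Parameters are assumed:
# alpha/beta are the widely cited WHO (2002) Salmonella values; r and the
# dose distribution are placeholders, not this study's Table 3 values.
import numpy as np

def p_ill_beta_poisson(dose, alpha=0.1324, beta=51.45):
    """Approximate Beta-Poisson: P(ill) = 1 - (1 + dose/beta)^(-alpha)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def p_ill_exponential(dose, r):
    """Exponential model: P(ill) = 1 - exp(-r * dose)."""
    return 1.0 - np.exp(-r * dose)

rng = np.random.default_rng(1)
dose = rng.lognormal(mean=2.0, sigma=1.0, size=100_000)  # assumed CFU/serving

print(f"Salmonella mean P(ill)/serving: {p_ill_beta_poisson(dose).mean():.3f}")
print(f"STEC O157 mean P(ill)/serving (assumed r = 1e-4): "
      f"{p_ill_exponential(dose, 1e-4).mean():.2e}")
```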
Results and discussion
Salmonella and E. coli were detected in RTE food samples, implying that the pathogens survived the one-minute microwave heating. Salmonella was detected in 66 out of 329 samples (20.1%), at densities of <3.0 to 11,000 MPN/g, through the amplification of the specific ompC gene of Salmonella at 204 bp, which encodes the protein C involved in the invasion of epithelial cells (de Freitas et al., 2010). Of the 66 samples, 6 (1.8%) were identified as positive for S. enterica serovar Typhimurium through the presence of Spy gene amplicons at 401 bp, encoding a periplasmic protein specific to the Typhimurium serotype, while 13 samples (4.0%) were identified as positive for S. enterica serovar Enteritidis. The fragment of the Sdf1 gene, encoding the chromosomal region related to invasiveness and infection of poultry and eggs, was used for identification of the Enteritidis serotype and, when amplified, produced 299 bp fragments (de Freitas et al., 2010). Figure 2 shows the amplicons produced by the triplex PCR analysis for Salmonella. The densities of S. enterica serovar Typhimurium and S. enterica serovar Enteritidis in RTE foods were <3.0 to 62.0 MPN/g and <3.0 to 270.0 MPN/g, respectively.
In contrast, E. coli was more readily detected in RTE foods than Salmonella, with 86 positive samples (26.1%), identified through the presence of the hypervariable region of the E. coli 16S rRNA gene at 544 bp (Sabat et al., 2000), which served as the internal standard in the multiplex PCR. E. coli O157:H7 is the common causative agent of diarrheal illness within the STEC group and is responsible for many outbreaks, carrying the virulence genes stx1 (Shiga toxin 1), stx2 (Shiga toxin 2), eae (intimin), rfbE (O157 antigen) and fliC (flagellar antigen) (Bai et al., 2010). Targeting the virulence genes alone in multiplex PCR is not sufficient, as other bacteria such as Shigella dysenteriae also produce Shiga toxin, similar to STEC. Hence, the presence of the E. coli 16S rRNA gene as the E. coli internal standard validates the multiplex PCR for identification of STEC O157:H7. Seventeen samples (5.2%) were positively identified as STEC O157, at densities of <3.0 to 930 MPN/g, via amplicons of stx1 at 655 bp, stx2 at 477 bp, eae at 375 bp, and rfbE at 296 bp (Figure 3). To identify the pathogen as STEC, either the stx1 or the stx2 gene amplicon must be present. Confirmation of O157 and H7 was through the amplicons of the rfbE and fliC genes, respectively. All of the identified STEC O157 produced the rfbE gene amplicon but not the fliC gene amplicon; hence, only STEC O157 (not O157:H7) was identified from the RTE foods.
The total numbers of positive detections of Salmonella and STEC O157, with their respective MPN/g densities by sample type, are tabulated in Table 4. The distribution of the surviving pathogens across food samples shown in Table 4 was presumed to reflect initial contamination by food handlers, as no particular sample type was consistently associated with a specific surviving pathogen. Cross-contamination of food occurs through various routes and sources, the cause of which is difficult to identify owing to the complexity of food processes. The most probable route of transmission is through food handlers, the major source of contamination (Lues and van Tonder, 2007). Inconsistent practice of food hygiene and sanitation by food handlers increases the possibility of contamination of the food by these pathogens. According to Lee et al. (2017), most food handlers performed poorly in maintaining hygienic hands, although they were reported to have a moderate level of food safety knowledge with good attitudes and self-reported practices. Jensen et al. (2017) quantified bacterial cross-contamination rates between fresh-cut produce and hands, and concluded that transfer rates are higher from hands to food, while transfer rates from food to hands were approximately 1%. The indirect cross-contamination of pathogens from hands to food may reflect the fact that food provides the pathogens with a source of nutrients that favours their growth and attachment (New, Wong, Usha et al., 2017).
The high detection rate of generic E. coli might be due to its high prevalence in raw vegetables that have been in contact with soil or contaminated water and are used as ingredients in RTE food. Most RTE foods were observed to carry raw vegetable garnishes, which presumably became the vehicle of contamination for E. coli. Pathogenic E. coli O157:H7 is frequently found in soil (Ibekwe et al., 2014) and in water sources contaminated with faeces from infected humans or animals. The presence of E. coli in food indicates faecal contamination, which can signal the possible presence of other harmful microorganisms, including viral, helminthic or protozoal parasites (Jay, 1997). It has been reported that E. coli can adhere to roots from contaminated soil and/or water and subsequently travel through the plant to the leaf tissue (Cooley et al., 2003; Bernstein et al., 2007). The occurrence of Salmonella spp. on leafy green produce via irrigation with poor quality water has also been reported, but at counts lower than those of generic E. coli (Benjamin et al., 2013), which possibly explains why the presence of generic E. coli was higher than that of Salmonella spp. in this study. In addition, the physical structure of the raw vegetables was probably unaffected by the microwave heating, leaving any pathogens present similarly unaffected. It has been reported that the loss factor ε″, which translates to heat in microwave heating, is low for fruits and vegetables at high frequencies (Sosa-Morales et al., 2010).
The total number of S. enterica serovars (Typhimurium and Enteritidis) detected was slightly higher than that of STEC O157, which is consistent with reports that microwave-oven-associated outbreaks were caused by S. enterica serovars. The MPN concentrations reported for both the S. enterica serovars and STEC could cause infection. Hara-Kudo and Takatori (2011) reported ingestion doses of foodborne pathogens associated with infection as low as 81 MPN/g for S. enterica serovar Enteritidis and <108 MPN/g for STEC O157 in outbreaks occurring in Japan between 2004 and 2006. In fact, Salmonella was reported to cause severe adverse health effects at low infectious doses of 0.042–0.427 MPN/g in a toasted cereal outbreak reported by Wang et al. (2015). Thus, the MPN concentrations reported in this study were well above the ingestion doses associated with infection, suggesting that consumption of contaminated RTE food with pathogens surviving microwave heating will cause foodborne illness, especially for Salmonella, as few cells are sufficient to colonize the lower gastrointestinal tract (Waterman and Small, 1998); furthermore, Salmonella may gain protection from the fats in some food products against the harsh acidic conditions of the stomach, increasing the likelihood of illness despite the low number of viable organisms consumed (D'Aoust, 1977; D'Aoust and Maurer, 2007).
The survivability of the pathogens was most probably due to uneven heating distribution, the major issue of microwave heating, which is affected by many factors such as food composition, temperature, ionic conduction, and water availability. According to Fakhouri and Ramaswamy (1993), microwave heating is 'food-dependent', unlike conventional heating. Starch and protein foods interact minimally with microwaves owing to the largely non-polar character of their molecules, which means little heating occurs when they are subjected to microwaves (Chandrasekaran et al., 2013). The polar groups in starch and protein behave similarly to those in water, except that their ability to follow the rotation of the electromagnetic field is hindered by the high-shear environment (Chaplin, 2015), reducing their ability to extract energy from the field (Feng et al., 2012). Fat, on the other hand, improves the heating rate when subjected to microwaves owing to its lower specific heat, which gives rise to rapid heating (Chaplin, 2015).
Moreover, the current temperature of the food affects microwave heating in a complex way. According to Venkatesh and Raghavan (2004) and Feng et al. (2012), the complexity of this temperature relationship requires an understanding of the dielectric dispersion of the water molecules present in the food. Meanwhile, the influence of ionic conduction is always positive as temperature increases (Feng et al., 2012), since salts decrease the natural structuring of water and reduce the ability of its dipole moments to respond. Depending on the water content of the food relative to its temperature, microwave heating will be affected, either increasing or decreasing.
The presence of salt in foods contributes to ionic conduction, and microwave heating at the high frequency of domestic microwave ovens (2.45 GHz) is not favourable for this mechanism, as the ions cannot respond quickly enough to produce frictional force, in contrast to lower-frequency microwave ovens (Chaplin, 2005). Otherwise, foods that are high in salt become better microwave absorbers and heat rapidly. Water availability is unquestionably the biggest factor in microwave heating, as water molecules are the major contributors to dielectric heating. This depends on the free and bound water available in the food product; more than 70% of the free-water dispersion contributes to dielectric heating (Feng et al., 2012).
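For reference, the dielectric heating discussed above is commonly described by the standard textbook relations below (these are general microwave-heating formulas, not expressions taken from this study):

$$P = 2\pi f \varepsilon_0 \varepsilon'' E_{rms}^{2}, \qquad d_p \approx \frac{\lambda_0 \sqrt{\varepsilon'}}{2\pi \varepsilon''} \quad (\varepsilon'' \ll \varepsilon'),$$

where $P$ is the volumetric power dissipated, $f$ the frequency (2.45 GHz for domestic ovens), $\varepsilon_0$ the permittivity of free space, $\varepsilon'$ the dielectric constant, $\varepsilon''$ the loss factor, $E_{rms}$ the root-mean-square field strength, and $d_p$ the penetration depth. These relations make explicit why a low loss factor (as noted above for fruits and vegetables) limits heating, and why penetration is shallow at high frequency.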
All in all, microwave heating affected by the factors above, which are relative to one another, causes uneven heating distribution in microwave-heated food. This leads to the presence of cold spots in the food; if bacteria are present within such a zone, they can survive microwave heating. The center temperatures measured from the samples (Figure 4) clearly showed fluctuating temperatures after microwave heating, indicating uneven heating distribution. Most samples did not reach the minimum safe temperature requirement (75°C), with only approximately 4.0% (13/329) of the total samples exceeding 75°C; most samples ranged between 60 and 70°C at the center. Salmonella is reportedly destroyed at cooking temperatures above 65°C, while E. coli does not survive above 71°C. The center temperatures of the food samples were therefore practically within the pathogens' survival range, adding further concern about the microbiological safety of the RTE foods.
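The quoted 4.0% figure is simply the fraction of samples whose center temperature exceeded 75°C:

$$\frac{13}{329} \approx 0.0395 \approx 4.0\%.$$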
A recent, little-noticed study reported that some strains of non-pathogenic E. coli are heat resistant. An unrecognized risk may therefore exist, since some pathogenic strains of E. coli could also be heat resistant (Flynn, 2016), which supports and amplifies the risk indicated by the findings of this study. This study pointed to that possibility when seventeen strains of STEC O157 were recovered from microwave-heated food, although this remains to be confirmed, as the survival of the STEC O157 strains could equally be linked to the non-uniform temperature distribution of microwave heating. Salmonella has been reported to show increased thermal tolerance in foods with low water activity combined with high fat content (Werber et al., 2005). All the samples had a substantial amount of fat but high water activity, which may explain the lower Salmonella counts in this study. It could also be partially due to the absence or low initial number of Salmonella in the types of food sampled, although Salmonella can be present in any contaminated food (Wagner, 2008) owing to its versatility.
The survivability of the pathogens could also depend on the food matrix. Clumping of bacteria within the food matrix could limit inactivation by microwave heating because of the low penetration depth of microwaves at high frequency. If contaminated food samples contain high moisture that supports bacterial growth, the available moisture will contribute to greater dielectric heating to inactivate the pathogens, but the effectiveness of the inactivation still depends on the microwave heating time.
Microwave heating time is another factor contributing to the survivability of the pathogens. It was observed that most microwave ovens available in convenience stores and restaurants are equipped with timer dials, allowing consumers to set the reheating time to their liking. Furthermore, consumers may hold a misconception about microwaves: the belief that microwaves are radioactive waves that kill pathogens while the food is quickly reheated. This perception leads them to reheat food as quickly as possible or to stop the process whenever the food container feels warm enough. In fact, there are no guidelines or safety regulations determining how long food should be heated to ensure that microwave-heated food is microbiologically safe. Consumers' lack of knowledge about the microwave oven should also be considered: the reported outbreaks were mostly caused by consumers' lack of knowledge about the microwave oven, e.g. its usage, principles, and food safety, as well as confusion over and disregard of the microwave instructions given by food manufacturers. All of this eventually contributed to the survival of pathogens in the food, if present, and their growth to hazardous levels that could cause foodborne illness in those exposed.
Foodborne illness is considered a global economic burden, as its impact can cause many hospitalizations and deaths if drastic actions are not taken, along with great expenditure on health care. The global distribution of food exacerbates the matter by disseminating the biohazard and preventing fast control measures, making intervention more difficult. This study is no exception: the RTE foods were observed to be supplied by different suppliers to the convenience stores, and each supplier has many other similar customers. Food travels locally and globally, which amplifies the risk by exposing more people.
The risk of exposure to the surviving pathogens through microwave heating is estimated and summarized in Table 5. As observed from Table 5, the concentration of Salmonella averaged 1.738 log MPN/g, 1.855 log MPN/g, and 1.4882 log MPN/g for rice, noodles, and rice vermicelli respectively, while the concentration of STEC O157 averaged 1.43 log MPN/g, 1.887 log MPN/g, and 0.961 log MPN/g for rice, noodles, and rice vermicelli respectively. The concentration of the pathogen and the serving size of the food determine the total exposure; thus, the higher the concentration and the serving size, the higher the human exposure to the pathogen. The estimated probabilities of illness and the rates per 100,000 iterations were high enough to indicate a substantial chance of contracting foodborne illness. From Table 5, STEC O157 simulated a higher impact of foodborne illness occurrence even though some of its concentrations were lower than those of Salmonella. This is because STEC O157 was simulated using the exponential dose-response model, which assumes that a single organism is capable of producing an infection, whereas Salmonella was simulated using the Beta-Poisson model, which assumes non-constant survival and infection probabilities across the exposed population. The rate of foodborne illness was estimated at 0.4292 (130 cases), 0.2782 (84 cases), and 0.1772 (54 cases) for rice, noodles, and rice vermicelli respectively for Salmonella, and at 0.7525 (228 cases), 0.6463 (196 cases), and 0.1246 (38 cases) for rice, noodles, and rice vermicelli respectively for STEC O157. Based on previous epidemiological data on non-typhoidal salmonellosis (NTS) in Malaysia (Food Safety News, 2014; Astro Awani, 2014), the predicted foodborne illness cases for Salmonella were in agreement for RTE foods. On the other hand, no epidemiological data for STEC O157 in similar food types have been reported in Southeast Asia for comparison with the current study. As Malaysia reports incidence rates as overall food poisoning, it is difficult to distinguish which pathogen contributed more cases. The predicted cases were assumed to contribute at least 0.3 to 1.6% of the food poisoning incidence rates in Malaysia based on the data for 2015 summarized by the Ministry of Health, Malaysia (2016). It should be noted that although the predictions were in agreement, they should not be assumed reliable, as it is easy to adjust assumptions and input settings in the risk assessment model. Hence, the inputs and outputs of each unit operation and pathogen event in the risk assessment should be validated (Oscar, 2004).
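The simulation logic described above can be sketched as a small Monte Carlo routine. The dose-response forms are the ones named in the text (exponential for STEC O157, Beta-Poisson for Salmonella), but the parameter values and the serving-size distribution below are placeholders, not the inputs fitted in this study.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # iterations, matching the per-100,000-iterations reporting above

# Placeholder exposure inputs -- NOT the study's fitted values.
log_conc = rng.normal(1.74, 0.3, N)    # log10 MPN/g, e.g. Salmonella in rice
serving_g = rng.uniform(150, 300, N)   # assumed serving size (g)
dose = (10.0 ** log_conc) * serving_g  # ingested organisms per serving

# Exponential model: each organism independently infects with probability r.
r = 1e-3                               # placeholder parameter
p_ill_exp = 1.0 - np.exp(-r * dose)

# Beta-Poisson model: survival/infection probability varies across hosts.
alpha, beta = 0.21, 49.8               # placeholder parameters
p_ill_bp = 1.0 - (1.0 + dose / beta) ** (-alpha)

print(f"mean P(ill), exponential (STEC-style) : {p_ill_exp.mean():.4f}")
print(f"mean P(ill), Beta-Poisson (Salmonella): {p_ill_bp.mean():.4f}")
```

Multiplying the mean probability of illness by the assumed number of servings consumed would reproduce the case-count style of estimate reported in Table 5.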
Through the simulation, the high concentration of surviving pathogens in the food was the main factor contributing to such high risk estimates. This is probably due to the high initial microbial load present in the RTE food, which was ineffectively inactivated during microwave heating. In addition, RTE food prepared for sale is not consumed immediately but is held on display in the convenience store for some time before being sold. A longer holding time allows the pathogens to grow to a hazardous level at which microwave heating cannot reduce the load to a safe one, especially for pathogens such as Salmonella and E. coli. Combined with consumers' lack of knowledge about the microwave oven, the surviving pathogens pose a risk to consumers, as they remain metabolically active and able to infect and intoxicate.
The simulation model could be refined with additional critical data to better represent the real exposure route of the pathogens, particularly the dose-response model. The dose-response model depends on a person's susceptibility to the administered dose, and an optimized dose-response model allows greater flexibility and a wider range of understanding of the estimated risk. Few, if any, dose-response studies on Asian demographics have been reported, which is a data limitation of our study. Consumer behaviour, habits, and consumption patterns are critical to obtaining a good risk estimate (Barraj and Peterson, 2004). Our study did not include consumption patterns; we assumed that consumers consume the food directly after heating, whereas some consumers may heat their food and bring it back to their homes or offices to consume, providing the surviving pathogens a holding time to grow and resulting in different concentrations. These data gaps are yet to be confirmed, and the risk could be under- or overestimated. Nonetheless, the risk assessment can serve a broad purpose in suggesting interventions; by refining the model, its sensitivity will increase, and direct risk mitigations can be carried out.
Conclusion
The prevalence of surviving pathogens and the risk assessment conducted evaluated the possible risks of exposure to pathogens from microwave-heated foods as a vehicle of contamination. The identified risk factors that contributed to the pathogens' survival were the uneven microwave heating distribution, the microwave heating time, and consumers' lack of knowledge about the microwave oven and food safety. The microbial safety of microwave-heated food should be put in the spotlight, as its relative importance is not well understood by consumers. Food safety guidelines for the microwave oven should be proposed to alert and educate consumers about microwave ovens and the safety of microwaved food. Besides that, through proper hygiene and sanitation by food handlers and the adoption of food safety measures, foodborne illness could be controlled, thereby reducing the economic burden it imposes and preserving public health.
Figure 1. The risk assessment model used to simulate the risk of consumption of surviving pathogens in microwave-heated RTE food. Model parameters included the prevalence P_x, with input distribution Beta(s+1, n−s+1).
Figure 4. The center temperatures of the microwave-heated RTE foods.
Table 1. Primer sequences used for Salmonella detection.
Table 4. Number of positive samples and concentration of Salmonella and STEC detected by type of sample.
Table 5. Risk estimates of consumption of Salmonella and STEC O157 in microwave-heated RTE foods. | 2019-04-02T13:14:13.496Z | 2018-03-26T00:00:00.000 | {
"year": 2018,
"sha1": "6515f54b170c340dbafe44b1d92f95ca88ace7a7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.26656/fr.2017.2(4).e01",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "6515f54b170c340dbafe44b1d92f95ca88ace7a7",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
261986786 | pes2o/s2orc | v3-fos-license | THE PROCESS OF DIGITAL TRANSFORMATION IN EDUCATION DURING THE COVID-19 PANDEMIC
Purpose: This document seeks to delve into the digital transformation of education during the COVID-19 pandemic, aiming to provide a comprehensive understanding of this evolving phenomenon's purpose and significance. Design/Methodology/Approach: The research approach undertaken is characterized by a non-experimental, documentary, exploratory, and descriptive study methodology, which involves an extensive examination of existing literature and data to gain insights into the digital transformation in education during the pandemic. Findings: The study's key findings revolve around the consensus in existing literature regarding the swift acceleration of the transformation of education from traditional face-to-face classes to virtual learning environments. It also highlights the implications of this transformation, particularly in reshaping teaching models and advocating for a hybrid approach encompassing both face-to-face and virtual learning. Research, Practical & Social implications: The implications of this research extend to informing educational institutions about the need for digital adaptation, guiding policymakers in supporting adaptable learning models, and empowering educators with a deeper understanding of the changing educational landscape. Moreover, it considers the broader societal impact, including equity and access issues in education. Originality/Value: This research is unique in its contribution to understanding the profound impact of the COVID-19 pandemic on education. It emphasizes the significance of adaptability and hybrid learning models while providing a foundation for future educational research and policy development. Doi: https://doi.org/10.26668/businessreview/2023.v8i9.3770
INTRODUCTION
On March 11, 2020, the WHO declared the new disease that originated in Wuhan, China, a pandemic, and in less than 3 months it spread almost worldwide. The immediate effect of the pandemic was the significant disruption of almost all human activities. The rapid spread of this new virus caused two concomitant situations: on one side, the sanitary crisis, which affected millions of people, whether through infection and its health effects or through deaths due to the lack of information about specific treatments for the disease; on the other side, the productive and socioeconomic crisis linked to the imbalance between supply and demand resulting from the pandemic's spread and the measures taken to face it. In this sense, the coordination of general policies and specific strategies to stop the spread of the disease worldwide was indispensable (Blackman et al., 2020).
As a principal strategy, governments worldwide, following WHO recommendations (WHO, 2020), implemented several measures: the closure of workplaces and education centers, remote working, and the suspension of domestic and international movement (land and air), among others. Thus, to control the virus's spread, almost all countries closed their education centers, affecting 90% of the student population worldwide (WHO, 2020). Against the backdrop of this unexpected worldwide drama, governments, organizations, and tertiary-sector institutions have implemented multiple policy alternatives to face the contingency and continue the seriously affected academic activities. Different reports show the multiple approaches and disagreements related to three questions: how and what to teach (the activities and workload of teachers and professors); where to teach (what types of environments, applications, methodologies, and teaching tools); and what effects are developing regarding equity in education (Zhang et al., 2020). Most countries directed their efforts toward the deployment of technology as a tool for virtual learning. Research worldwide shows persistent deficiencies in telecommunications infrastructure, environments, hardware and software; the lack of training or inexperience of teaching staff; unclear and contradictory information from plan organizers and managers; proposals to counter social asymmetry; and the complex family environments of students, among others (Oluwatimilehin et al., 2021).
Since World War II, the educational offering has increased steadily; however, the pandemic and the guidelines derived from governments imposed an unprecedented and abrupt disruption. A report submitted by UNESCO (2020) suggests that the entities involved in education should find alternative ways of learning for children, teenagers, and young adults who had to stay away from educational centers; all of this makes necessary the launch of equivalence and "bridge" programs, accredited and approved by governments, to shape a flexible learning offer in formal and informal environments during a state of emergency (Huang et al., 2020).
In this context, the use of ICT pushes teaching staff to train to meet a generation that, in one way or another, despite asymmetries and imbalances, had already been exploring the world of new technologies. Ali (2018) revealed strong relationships between students and ICT, with the context and the perceptions mentioned providing the encouragement necessary for implementing virtual-class strategies.
At this time, a combination of unique elements governs a range of concomitant variables: the growing importance of digital connectivity via the internet in all productive, distribution, and service sectors, and the crisis triggered by COVID-19 (Ríos, 2020). From this conjunction of factors, several ideas have emerged across different sectors of society about the necessity of reinventing and implementing more developed dynamics to solve the challenges in sectors such as the economy, trade, politics, entertainment and, particularly, education. All the problems and challenges not yet resolved, and those pending in Latin American education, become more complex with the pandemic's spread: growth without quality, inequality in educational access and achievement, and the progressive loss of government expenditure on education, among others.
Analysing the pandemic's effects and offering recommendations to governments and higher education institutions, UNESCO (2020) and the International Institute for Higher Education in Latin America and the Caribbean (IESALC, 2020) state, among other points, that: a) there was no planned transition to remote teaching; [...] the efforts of educational institutions should be supported with technology solutions so that students and teachers can access platforms, content, and courses, among others; e) at least 25% of university students will leave their programs as a result of the availability and usability of teleinformatic educational resources; and f) all actors should be prepared for the reopening of higher education institutions and should focus on activities previously considered less important.
These activities involve the following: health monitoring and support programs, calendar adjustments, contributions to pandemic mitigation, and moving from extempore planning focused on the health emergency to planning and preparation for reopening in a recessive scenario with cuts in public investment in education. In this sense, the challenge for education is attending to the pedagogical, economic, and socioemotional demands of those who experience the greatest difficulties in continuing their training, now in a non-traditional modality (IESALC, 2020).
The international multilateral agencies have commented variously on how the pandemic has affected education. Firstly, estimations made by the World Bank (2020b) and the UN (2020) indicate that it is an unprecedented event with a double impact on education: a) the closure of educational institutions, and b) the effects of the economic recession, which will have a long-term impact, even combined with the difficulties arising from how promptly countries responded to the crisis. On the other side, early data analysed by ECLAC (2020) indicate that the impacts will impose several costs on human capital accumulation and formation and on development and well-being prospects. In summary, the crisis in education can take the following forms: a) high educational costs and impacts on health, translated into learning disruption, a reduced feeling of belonging to the school, and an increase in educational inequalities, among others; b) impacts on educational supply and demand; and c) long-term effects that will worsen the already fragile state of education worldwide, contributing to the reduction of human capital and increasing inequalities in learning, among others. Despite all these unavoidable effects and severe consequences, ECLAC (2020) believes that if governments' responses are prompt, with adequate planning and programming, the crisis can become an opportunity to turn educational systems into systems with higher resilience, inclusiveness, and efficiency.
Likewise, ECLAC (2020) considers 3 stages in public policy planning to reverse the negative effects of this crisis: 1) combating the pandemic, as in the first year; 2) managing learning continuity; and 3) improving and accelerating learning. In the last stage identified by ECLAC (2020), the concept of digital transformation is of the utmost importance. In the pandemic and post-pandemic context, digital transformation is a crucial process, since it implies the integration of digital solutions into everyday life, especially in education. Likewise, it intends to enhance traditional solutions and open up opportunities for innovation and approaches that can revolutionize the perception of how things are done. One of the principal objectives of the digital transformation process is the improvement of business processes to meet clients' demands through the intensive and extensive use of technology and the data involved. In education, under this approach, teachers, students, and education staff would be the target consumers; in that way, the student-teacher pair will benefit from digital transformation (Goldin & Katz, 2008). Likewise, the purpose of the digital transformation process must include, but not be limited to, providing a wide range of virtual learning options, using technology in the classroom, monitoring the learning process, allowing students to use mobile or web applications in their learning, and enabling virtual classes.
In that sense, this document aimed to explore and describe the process of digital transformation in education during the COVID-19 pandemic.
METHODOLOGY
This is a non-experimental, documentary, exploratory, and descriptive study. It addresses the complexities associated with the concomitant relationships among the variables distance or online learning, quarantine, and social distancing in the COVID-19 pandemic.
Between 2020 and 2021, a systematic review of published studies was conducted, aiming to interpret rather than merely aggregate the findings. Rigorous qualitative methods were used for the synthesis of existing qualitative studies. The study applied the exploratory approach to systematic literature review proposed by Kitchenham (2004).
Global Context
The World Economic Forum has identified 10 technology trends that, in its view, are essential for facing the pandemic and possible similar outbreaks in the future, including distance education (Xiao & Fan, 2020). Among these trends are digital and automation components that aim to turn physical-contact services into technology services. These trends promise a wide range of possibilities for process improvement, optimization, and efficiency. The crucial point, however, is the complexity of digital technology adoption, which meets adjustment processes and resistance that take time to resolve before complete implementation, not to mention cases in which the available resources and digital capital are insignificant. One notable EdTech innovation is the so-called Learning Management System (LMS), software that organizations use to manage, document, and monitor online training courses and programs; this software was created at the end of the last century. LMSs are widely used in the field of education, with low-cost and even free platforms available. The reasons for their use are evident: they have improved efficiency in the preparation of learning, the management of educational courses, and communication between teachers and students; they also have social networking features in which teachers, students, parents, and guardians interact as if on social media (Kant et al., 2021).
Likewise, LMS technology can improve the efficiency of learning materials and methodology in education; most LMSs also offer software applications that work on smartphones.
Research conducted in Japan at the beginning of LMS adoption shows that the use of educational applications based on cell phones and other devices improved interaction between teachers and students (Nakane, 2005). Another EdTech innovation is the use of Artificial Intelligence as an education tool; these systems collect data on students' performance, progress, and level of comprehension, analyse them, and provide suggestions and recommendations for improvement. In this way, for those who maintain that personalized education is the ideal, these advances open the opportunity of education tailored to individual students, which should improve educational outcomes (Setiawan et al., 2021). Combined with the LMS, the so-called Observe, Orient, Decide, Act (OODA) approach is a feedback loop (Silvander & Angelin, 2019); it rests on the idea that computing devices delivering educational content generate data that are used, when required, as new input to adapt and improve that content through repeated passes of the feedback loop.
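A minimal sketch of the OODA feedback loop described above, applied to adapting learning content from learner data, appears below; all names and thresholds here are hypothetical, not taken from the cited works.

```python
# Hypothetical OODA loop for adaptive learning content in an LMS.
from dataclasses import dataclass

@dataclass
class LearnerData:
    quiz_score: float      # 0.0 - 1.0
    minutes_on_task: float

def observe(lms_events: dict) -> LearnerData:
    # Collect raw interaction data recorded by the LMS.
    return LearnerData(lms_events["score"], lms_events["minutes"])

def orient(data: LearnerData) -> str:
    # Interpret the data: is the learner struggling or on track?
    return "struggling" if data.quiz_score < 0.6 else "on_track"

def decide(state: str) -> str:
    # Choose the next content adjustment.
    return "remedial_module" if state == "struggling" else "next_module"

def act(decision: str) -> None:
    # Serve the chosen content; its results feed the next observe() pass.
    print(f"LMS serves: {decision}")

# One pass through the loop; in practice this repeats after every activity.
act(decide(orient(observe({"score": 0.45, "minutes": 12.0}))))
```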
The second trend in the context of distance education expansion as a result of the pandemic was the exploration and application of distance learning methods on an unprecedented scale.
Recorded online classes
The courses widely used are the so-called Massive Open Online Courses (MOOCs).
MOOCs are online platforms that provide free courses, allowing students to learn in a self-paced manner and personalize their studies. The flexibility of MOOCs comes from their ubiquity: a person can take courses anywhere at any time. In general, these courses target higher education, and there are no alternatives adapted to primary-education study plans. As the technology evolves, different or improved versions of MOOCs could enable people who want to learn new skills while working to do so (Torres-Toukoumidis et al., 2021).
There is a wide range of these courses on the Internet that universities use. Chuang and Ho (2016) described the effects of the EdX phenomenon of MIT and Harvard University; through this platform, people receive a certificate after completing and passing a course. Between 2016 and 2021, the number of registered people constantly increased; however, just 5.5% of the 2.4 million registered people received a certificate during that period. Other studies report similar results: completion rates are between 2.3% and 19.5%, with an average of 6.5% (Chuang & Ho, 2016). In that sense, the research of Chuang and Ho (2016) and Ruipérez-Valiente (2019) shows that the types and tools of distance education do not guarantee full use of online educational content.
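For scale, 5.5% of 2.4 million registrants corresponds to roughly

$$0.055 \times 2{,}400{,}000 \approx 132{,}000$$

certificates earned over the period, which illustrates how small the completing cohort is relative to enrolment.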
Interactive and live online courses
The other trend, which education centers have used most to date (UNESCO, 2020), is interactive online learning, especially virtual conferences delivered via broadcasting or streaming. Unlike MOOCs, most of which are pre-recorded and based on asynchronous learning, these courses use a synchronous learning method in which students participate in real time, promoting bidirectional communication and encouraging students to participate, ask questions in real time, and discuss the activity with teachers, even from remote locations. In general, these courses have specific study periods that depend on each education center; for that reason, they do not follow an education curriculum developed by other centers (Bedenlier et al., 2021).
Limits in the Digital Transformation of Education
The COVID-19 pandemic has generated several concerns about education that international forums have expressed in different ways (UNESCO, 2020; ECLAC, 2020; World Bank, 2020b), mainly over the suspension of classes and the closure of education centers, even with a virtual alternative for continuing education. However, the virtual transformation of education raises concerns about its hasty implementation in the education system, causing uncertainty about the results. The most important concerns in this area are: (1) educational inequality arising from the so-called digital divide; (2) the deficit and lack of motivation management among teachers, students, and parents; and (3) the possible negative impact of computing devices (such as computers, tablets, and smartphones) on education. Not all of these concerns arose from the pandemic; some predate it:
1. The concept of the digital divide refers to the inequality in society between people who can access education using ICT and those who can do so only partially or not at all. Pre-pandemic reports show that this digital divide exists not only between rich and poor countries but also among regions of the same country (Nicolau et al., 2020). To provide technology solutions in the education field, a suitable telecommunications and informatics infrastructure is necessary.
Likewise, the universalization of equipment and high-quality connectivity is the precondition for students and teachers to access the potential of these technologies. In other words, if access to digital infrastructure does not improve, the advance of digital transformation in virtual education will widen existing inequality gaps (Neidhöfer et al., 2021).
2. The deficit and lack of motivation management. According to various experts, virtual education will be the prevalent learning method that replaces traditional education in the short term. However, the effect of distance education on learning success is still not clearly defined; the educational effects of online and interactive classes are relatively new, and there is no guarantee that students take full advantage of the resources and contents, so this remains an open research area (Reich & Ruipérez, 2019). Ito et al. (2019) found that EdTechs do not by themselves motivate learning, since increased technology use does not guarantee that learning expectations affect self-esteem levels and the motivation to succeed; consequently, despite having suitable, high-quality educational resources, motivation in most students fades. Studies prior to the pandemic show that students admit that, although computing devices are sources of entertainment, they use them considering that the cost/benefit relation is favourable for them (Kay & Lauricella, 2011; Ito et al., 2019).
3. The possible negative impact of computing devices. The report of the OECD (2015) found no significant differences in the improvement of reading, science, or mathematics skills between students who invested in ICT and students of other countries who did not make that investment.
DISCUSSION AND CONCLUSIONS
In 2020-2021, the education field experienced two transcendental events: the spread of the COVID-19 pandemic and the transformation of education driven by the unavoidable necessity of implementing digital technologies as a strategy to continue the educational process. The exponential growth of the population using ICT opened up possibilities for implementing online learning over the last 30 years, and the pandemic scenario produced a qualitative leap in its application. Present trends in online learning tools and platforms depend on the Learning Management System (LMS), Massive Open Online Courses (MOOCs), videoconferencing such as broadcasting or streaming, several types of educational applications, and supporting technology. All of this shows the potential of implementing and consolidating this new way of handling the learning process.
The literature agrees that the evolution of education from face-to-face to virtual classes received an early boost from the pandemic. The results show that each actor (teachers, students, centers, CEOs, and parents) faced, and still faces, many obstacles in adapting to this new stage. These obstacles urge centers and governments to provide coordinated and consensual alternatives to activate the resources required to remove barriers in the transformation of education, which will result in a reshaping of teaching models and adaptation to new paradigms. Thus, the significance of the transformation of education and the adaptation to remote learning and virtual education demands studies that make it possible to determine the impact of this type of education, given the cause that prompted the adaptation of the education system.
Return to normal for education centers worldwide is uncertain (ECLAC, 2020). In this scenario, the transformation of the education system progresses as available technological tools are incorporated to implement a process of educational disruption that promotes resilience and new ways of thinking, making, and creating. In the post-pandemic scenario, education will emerge from an unprecedented experience of asymmetric universalization processes in virtual classes, which demand planning for the disruptive education era. A widespread argument in the literature holds that, notwithstanding all the efforts of teachers to make maximum use of the tools, students are the ones who will be most affected by the expansion of the digital divide, which is particularly aggravated in regions with electricity supply problems, a lack of mobile devices, or cell phones without internet access, whether due to high prices or poor service quality (Romero-Rodríguez et al., 2020). Likewise, there are considerable differences in the accessibility and quality of teachers in public and private education centers (Budi et al., 2020; Arora & Srinivasan, 2020; Qian-Hui, 2020). Combined with this, studies show that if the integration of technologies, communication, and participation between teachers and students is not good, the learning process becomes an activity focused on the teacher-instructor-tutor rather than on the student (García et al., 2015). In the same vein, Moorhouse (2020) identified a trend toward course digitalization in teachers' adaptation to virtuality without being trained for it.
The limitations that arose from COVID-19 containment do not yet provide metrics for evaluating and monitoring the impact of virtual education and the dynamics of transformation from the face-to-face to the virtual model; therefore, Chang et al. (2020) suggest advancing field investigations that compare both types of education. In that context, questions prevail about whether there are aspects that remote education does not resolve: for example, the attitudes and movements students express in a face-to-face course, or difficulties in understanding topics that may not be as visible as in a face-to-face class, among other elements that form a group of criticisms discussed by those who are not sure about the appropriate and widespread use of remote courses (Lall & Singh, 2020).
During the pandemic, remote education has been essential; however, digital education is far from replacing traditional face-to-face learning. The most meritorious trend is guidance in a parallel environment, since it is still necessary to evaluate all the experiences of untimely change from face-to-face to virtual classes. Notwithstanding the differences and limitations, the present situation emphasizes the necessity of issuing directives to mitigate the pandemic's effects on education. The Chinese government took the first intervention: an initiative called "suspending classes without stopping learning" was launched by China to continue the teaching-learning process during the coronavirus pandemic without disruptions (Zhang et al., 2020). Huang et al. (2020) suggest that approaches to the present educational problems must focus on improving the production and scaling of educational computing. In that sense, they propose to prioritize hardware equipment and the skills and training of teachers and students, preferring the standardization of education at home so that teachers have online training, thus supporting educational research for all students, including those with particular needs (difficulties in virtual learning), an organized dynamic for extensive and intensive use in classes and assistance, and the monitoring of costs and resources to ensure the continuity of training schemes. Further observations from the literature include: b) those who did not have opportunities for quality, continuity, and ICT use face the dissociation between educational formation and the learning process, giving rise to greater possibilities of abandoning the education system; c) in Latin America and the Caribbean, where the average of interconnected homes does not exceed 50%, there is a need to improve good-quality interconnection offers, service stability, affordable costs and prices, and the availability of mobile network coverage; and d) this situation involves an opportunity to guide [...].
The first stage was marked by each country's response to the sudden closure of institutions, prioritizing health, security, and student learning, without forgetting the components of the educational system. In the second stage, simultaneously with the easing of confinement measures, social distancing, and pandemic mitigation, institutions should safely organize the reopening process, ensuring as little desertion as possible and the recovery of education. The third stage, oriented to accelerating learning, provides the opportunity to rebuild education systems to be more equitable and robust and to promote efficient approaches to reduce gaps in education. It includes innovations that efficiently use communication technologies to support and develop distance learning systems; moreover, it incorporates technology for the early detection of possible school desertion, the adequacy of pedagogical programs, study plans adapted to the right level, and the promotion of material and socioemotional support for parents, teachers, and students.
According to WHO data, at the end of August 2021, 219 million confirmed cases, 4.55 million deaths, and 213 countries affected by the virus had been reported. In this context of constant evolution, the most important measures taken worldwide have been the containment and mitigation of the virus's spread. In the case of education, the vast majority of governments opted for the closure of education centers and focused on online learning and education. New York University in Shanghai and Duke University in Kunshan are emblematic cases, providing examples of short-term, efficient adjustment and adaptation of educational services and products with digital technology. However, most educational institutions had to adapt at a vertiginous pace depending on the virus's spread in each country. Consequently, the digital divide has been unfathomable: on one side, several students suffered minor disruptions in the continuity of their studies, while others endured extended periods of waiting and adaptation before solutions were implemented (UNESCO, 2020; ECLAC, 2020; World Bank, 2020a). The pandemic revealed that most countries and institutions lack resources and digital capital. Accordingly, measures for implementing virtual education face the obstacles of improvisation, weak communication infrastructure, lack of training, and the weak development of ICT competencies among students and teachers. Considering that social distancing is as essential as quarantine in the fight against COVID-19, the general trend worldwide was to use digital systems in all educational activities (courses, tests, research, among others). At the level of secondary and primary education, the responsibility for making the change and creating platforms, resources, and solutions fell on the ministries or entities responsible for managing those education sectors (Czerniewicz, 2020). In light of the events, the change was unavoidable, since the spread and seriousness of the pandemic demanded it for people's health security. In that sense, the response to the challenges of the COVID-19 pandemic forced service-sector organizations, especially in education, to adapt rapidly to the practices promoted by information and communication technology under severe limitations (Carroll & Conboy, 2020), with the expectation that digital transformation will continue to increase as the pandemic evolves and in the post-pandemic period.
Finally, Andersen et al. (2020) note the so-called Digital Sclerosis that arises from immature technology implementation, characterized by the stiffening of service provision, reduced possibilities for innovation, and failure to respond to changes in demand. This calls for careful monitoring of the implementation and evolution of digital education and for its methodical design, development, and deployment, since its initial adoption during the pandemic was rapid, sudden, and unplanned.
Trends in Education as a Result of the Pandemic
Since the pandemic burst, private and public sectors worldwide have promoted and provided educational content combined with teachers' contributions. In most cases, the contents are contextual; some are paid and others free. The two principal trends in education directly related to the spread of the COVID-19 pandemic and the phenomenon of digital transformation are increasing innovation in educational technologies and the expansion of digital education. Regarding the first trend, which refers to the expansion, development, and innovation of educational technologies, the literature shows that virtual education was the most appropriate solution because most governments worldwide closed their education centers (Bedenlier et al., 2021). Learning content became accessible through communication technologies, distance education, and educational technologies (EdTech); thus, owing to the pandemic, the education field faces both a challenge and an opportunity for innovation and the adoption of technologies that had emerged and developed before the advent of the internet as a social phenomenon. The EdTechs have shown a high level of innovation, progressing with the deployment of new computing devices such as cell phones and tablets and the increased digitalization of texts and teaching material. On their own, these tools do not improve the efficiency or the effects of learning, much less modify education, because they already existed in education before the pandemic. Notwithstanding that distance education practices existed long before the COVID-19 pandemic, they were not widespread, since most learning activities took place in the classroom. In a short period, distance education became usual and reached various places depending on connectivity and infrastructure. Distance learning is therefore based on interactivity, with diverse modalities involving platforms, collaborative learning, tutorials, and teacher-guided education; there are two predominant internet-based modalities of distance learning. The universalization of computing equipment use, the provision of equipment, and quality connectivity are the premises for distance education and for access to platforms with learning content. However, researchers, teachers, parents, and authorities worry about how students, especially minor children, use computing devices.
From the perspective of digital transformation, it is insufficient for organizations and government entities to pronounce alignments to alternative calendars and to use teleinformatic technology and tools to promote different types of virtual education. It is essential to accompany these initiatives with policies oriented to closing gaps in technological and physical infrastructure, services, resources, equipment, the development of teachers' and students' competencies, and innovation in course creation. All of this is oriented to mitigating the digital divide made evident during the pandemic in a heterogeneous, unequal, and combined environment. | 2023-09-17T15:13:12.978Z | 2023-09-14T00:00:00.000 | {
"year": 2023,
"sha1": "af6c016c18347f4d83b0de50b841d56b1f7ee310",
"oa_license": "CCBYNC",
"oa_url": "https://www.openaccessojs.com/JBReview/article/download/3770/1416",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "369127dfc5ffbd0a182f85687bf0c94bffe62394",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
229246764 | pes2o/s2orc | v3-fos-license | Factors Affecting Online Payment Method Decision Behavior of Consumers in Vietnam*
E-commerce development has led to the explosion of online payment. Consumers have many choices when deciding on the online payment method for each transaction. Using a combination of qualitative and quantitative methods with the help of SPSS AMOS version 22.0, the article explores the factors that influence consumers' online payment method decision behavior in Vietnam. Research results show that awareness of usefulness, awareness of risk, awareness of trust, awareness ease of use, product uncertainty perception, and perceived behavioral control affect the behavior of deciding on online payment methods. Awareness of risk has the strongest negative impact on online payment method decision behavior, and awareness of usefulness has the strongest positive impact. Based on these important results, the article proposes a number of implications: (i) continuing to invest in and upgrade modern technology to keep customer information absolutely confidential; (ii) converting all ATM cards on the market to EMV chip standard card technology; (iii) improving service activities and handling issues quickly to build customer confidence; (iv) credit institutions operating in the field of online payment linked to e-commerce sites, supermarkets, convenience stores, and restaurants must ask partners to increase the transparency of their products.
Introduction
In 1990, the advent of e-commerce introduced a unique way of doing business for the consumer and business world. Since then, e-commerce has grown and changed dramatically, creating extraordinary benefits for customers and businesses worldwide (Bezovski, 2016). The number of people shopping via the Internet is increasing; however, most transactions in Vietnam are cash payments, and consumers prefer cash on delivery over online payment (Vu, Nguyen, & Dang, 2019). Meanwhile, online payment, in the world in general and in Vietnam in particular, reduces the amount of cash in circulation; reduces the costs of printing, preserving, and transporting money; reduces social labor costs; and, at the same time, improves the efficiency of payment in the economy, contributing to speeding up the circulation of capital in society and promoting the development of goods production and monetary circulation.
Therefore, it is necessary to study the factors affecting the online payment method decision behavior of consumers in Vietnam in the current context. The study selected Hanoi, the densely populated capital of Vietnam, as the setting for interviews and surveys.
Theoretical Foundation
Awareness of Usefulness: Awareness of usefulness refers to the extent to which a person believes that using a particular system will enhance his or her work performance (Davis, 1989). Online payment is effective and helpful when the characteristics of the online payment system meet users' requirements and provide important value to them (Schierz, Schilke, & Wirtz, 2010). Referring to the TAM model (Davis, 1989) and the TAM2 model (Venkatesh & Davis, 2000), awareness of usefulness is understood as the benefits that consumers receive when using online payment systems. Awareness of usefulness has a positive impact on the decision to use an online payment method (Gu, Lee, & Suh, 2009). Therefore, the following hypothesis is formulated:
H1: Awareness of usefulness has a positive impact on consumers' behavior of deciding on online payment methods in Vietnam.
Awareness of Risk: Bauer (1960) argued that awareness of risk is related to uncertainty about, and the consequences of, consumers' actions. A common barrier to accepting online payment is the lack of security on the Internet (Wang, Wang, Lin, & Tang, 2003). The security of credit card information, hackers, and unreliable suppliers are major concerns for consumers. Awareness of risk reduces consumer confidence in online purchases and payments, causes fear of disclosing personal information (Yoon, 2002), and causes financial losses (Napitupulu & Kartavianus, 2014). From there, the following hypothesis is formulated: H2: Awareness of risk has a negative impact on consumers' behavior of deciding on online payment methods in Vietnam.
Awareness of Trust: According to Lu, Yang, Chau, and Cao (2011), awareness of trust plays an important role in promoting the intention to use services. Awareness of trust indirectly influences the perceived risk level of financial transactions (Yang, Pang, Liu, Yen, & Tarn, 2015), and trust also reduces risk perception, leading to a positive decision on online payment (Yousafzai, Pallister, & Foxall, 2003). Therefore, it can be said that awareness of trust plays an active role in consumers' decisions to use online payment. When consumers have awareness of trust, barriers to deciding to use online payment are minimized. Therefore, the proposed hypothesis is: H3: Awareness of trust has a positive impact on consumers' behavior of deciding on online payment methods in Vietnam.
Awareness Ease of Use: Awareness ease of use is the degree to which a person believes that using a particular system will not require much effort (Davis, 1989). Technology systems that are easier to use and less complex will be accepted and used more (Davis, Bagozzi, & Warshaw, 1989). Awareness ease of use has been found to positively influence various technology systems such as mobile services (Wang, Lin, & Luarn, 2006), mobile data services (Faziharudean & Li-Ly, 2011), and commercial services (Kalinic & Marinkovic, 2016). Awareness ease of use arises when users feel the payment system is easy to understand and easy to use. Especially in online payment, a system considered easy to use should have a simple interface, clear steps, appropriate content and layout, and understandable functions and notifications. Awareness ease of use is considered to greatly influence the adoption and use of new consumer technologies. From there, the authors propose the following hypothesis: H4: Awareness ease of use has a positive impact on consumers' online payment decision-making behavior in Vietnam.
Subjective Norms: Subjective norms is the perceived social pressure to perform or not perform a behavior (Ajzen, 1991). Park (2000) emphasized the influence of important people such as friends, relatives, and colleagues; consumers who hold positive subjective norms towards a behavior also tend to engage in that behavior (Taylor & Todd, 1995; Han, Hsu, & Sheu, 2010). Many studies have concluded that subjective norms is an important factor in predicting intention and behavior (Baker, Al-Gahtani, & Hubona, 2007; Dean, Raats, & Shepherd, 2012; Ha & Janda, 2012; Kumar, 2012). When consumers perceive that people who are important to them make online payments, they tend to do so as well. The proposed research hypothesis is as follows:
H5: Subjective norms has a positive impact on consumers' behavior of deciding on online payment methods in Vietnam.
Product Uncertainty Perception: Product uncertainty perception arises from doubts about the actual quality and future performance of a product (Dimoka, Pavlou, & Davis, 2011). In the online market, the interaction between seller and consumer is mediated by technology. Consumers often worry about poor product quality, weaknesses in freight forwarding and delivery channels, and a lack of professionalism in online payment (Giao, 2020). After the purchase commitment, it is difficult for consumers to verify the seller's intention to respect the contract between the two parties, for example regarding after-sales service and personal information protection. As a result, buyers form a perception of product uncertainty, which may be related to their online payment decisions (Pavlou & Dimoka, 2008; Ghose, 2009; Mavlanova & Benbunan-Fich, 2010). Therefore, product uncertainty perception is a factor affecting customers' decision to pay online. The proposed hypothesis is as follows:
H6: Product uncertainty perception has a negative impact on consumers' behavior of deciding on online payment methods in Vietnam.
Perceived Behavioral Control: Perceived behavioral control is defined as an individual's confidence in their ability to perform a behavior (Stroborn, Heitmann, Leibold, & Frank, 2004). Perceived behavioral control reflects the degree of control over the behavior, not the result of the behavior (Polančič, Heričko, & Rozman, 2010). In the context of growing online payment, perceived behavioral control describes consumers' awareness of the availability of the resources, knowledge and opportunities needed to make payments. Perceived behavioral control has a direct impact on the decision to use a payment method (Kim, Tao, Shin, & Kim, 2010). According to TPB, perceived behavioral control can be used directly to predict the performance of behaviors. Therefore, the authors propose the hypothesis: H7: Perceived behavioral control has a positive impact on consumers' behavior of deciding on online payment methods in Vietnam.
Survey and Sample
Qualitative research was conducted through in-depth interviews with 10 consumers in Hanoi. The interviews focused on: online shopping; the factors affecting the behavior of deciding on online payment methods; the influence of awareness of usefulness, awareness of risk, awareness of trust, awareness of ease of use, subjective norms, product uncertainty perception, and perceived behavioral control; and the behavior of deciding on online payment methods. Interviews lasted approximately 1 hour at a location selected by the interviewee. The interview content was stored, summarized and analyzed to draw conclusions about the factors in the research model. Quantitative research was used to measure the influence of these factors on the behavior of deciding on online payment methods. This method was implemented through a questionnaire survey of consumers in Hanoi. The questionnaires were distributed to consumers in the Hanoi area in two forms: through Google Forms, a web-based form solution that allows researchers to design online surveys and questionnaires, and directly to respondents at shopping locations, schools, parks, etc., from January to April 2020.
The statistics of the 370 observations in the quantitative research show that the sample of factors affecting the behavior of deciding on online payment methods in the Hanoi area is mainly women (65.1%), nearly twice as many as men (33.5%); most respondents are between the ages of 18 and 30 (90.8%); the observations are concentrated among people with intermediate/college/university education (82.7%); the average monthly income is mostly below 5 million VND, specifically 248 observations (67%), followed by the level of 5 to 10 million VND (15.4%); finally, the surveyed consumers usually pay online 1-2 times/month (40.5%) or 3-4 times/month (26.5%), a relatively smaller proportion pay online more than 5 times/month (26%), and some consumers have never paid online (7%).
Analyses
The authors performed an analysis to assess the contributions of the factors (awareness of usefulness, awareness of risk, awareness of trust, awareness of ease of use, subjective norms, product uncertainty perception, perceived behavioural control) to online payment method decision behavior. The analysis process includes three main steps. Firstly, Cronbach's alpha and exploratory factor analysis (EFA) are applied to assess the reliability of the variables. Secondly, confirmatory factor analysis (CFA) is used to evaluate the model and scales. Finally, regression analysis is used to test the hypotheses and assess the level of influence. The statistical analyses were carried out using SPSS 22.0 and AMOS 22.0.
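As a minimal illustration of the first step (the paper itself used SPSS, not the code below), Cronbach's alpha for one scale can be computed from an (n_respondents × n_items) response matrix; the responses here are hypothetical:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses of 6 respondents to a 3-item scale.
responses = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5], [3, 4, 3]])
print(round(cronbach_alpha(responses), 3))
```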
Measures
All scales used in our study were adapted from past research, with one new observed item suggested by the authors after the in-depth interviews. The scales were scored on a 5-point Likert-type format from strongly disagree to strongly agree. One scale, adapted from a prior study (2007), comprises 3 items measuring perceived behavioral control: 'I have the necessary resources for using online payment' (0.277), 'I have the necessary knowledge for using online payment' (0.323), and 'Using online payment is entirely within my control' (0.668). The item with an item-total correlation below 0.3 is excluded.
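The exclusion rule above can be sketched as a corrected item-total correlation check (a sketch, not the SPSS procedure used in the paper; the response matrix is hypothetical):

```python
import numpy as np

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    n, k = items.shape
    total = items.sum(axis=1)
    corrs = np.empty(k)
    for j in range(k):
        rest = total - items[:, j]  # scale total excluding item j
        corrs[j] = np.corrcoef(items[:, j], rest)[0, 1]
    return corrs

responses = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5], [3, 4, 3]])
keep = corrected_item_total(responses) >= 0.3  # items below 0.3 are dropped
```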
Exploratory Factor Analysis (EFA)
After assessing the reliability of the scales by Cronbach's alpha, a total of 25 items are used in the exploratory factor analysis (EFA).
[Table: results of testing the reliability of the scales.]
Confirmatory Factor Analysis (CFA)
From the EFA results, we have 8 official factors used in the research model. CFA on the sample gives GFI = 0.940; TLI = 0.984 > 0.9; CFI = 0.987 > 0.9; CMIN/df = 1.219 ≤ 2; and RMSEA = 0.024 ≤ 0.08. These results show that the model's fit indices are satisfactory and the model is accepted for the research data.
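The fit criteria quoted above can be summarized in a small check (the function name and structure are only illustrative; the thresholds are those stated in the text):

```python
def cfa_fit_ok(gfi: float, tli: float, cfi: float, cmin_df: float, rmsea: float) -> bool:
    """Conventional CFA fit thresholds as used in the text."""
    return gfi > 0.9 and tli > 0.9 and cfi > 0.9 and cmin_df <= 2 and rmsea <= 0.08

print(cfa_fit_ok(gfi=0.940, tli=0.984, cfi=0.987, cmin_df=1.219, rmsea=0.024))  # True
```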
The results of testing the scales by CFA show the factor weights of the indicators for the concepts described in Table 3. All factors have high significance levels (p < 0.001), and the standardized weights are > 0.5 (except for SN3), so the scales achieve convergent validity (Hoang & Chu, 2008). However, subjective norms has a composite reliability of 0.624, below 0.7, and an average variance extracted of 0.414, below 0.5, indicating that the survey data reflect only 41.4% of the variance of the observed variables. In other words, the correlation between the observed variables of this factor is not high. Therefore, in the next step, subjective norms is excluded from the research model.
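Composite reliability and average variance extracted (AVE) can be derived from the standardized loadings; a sketch with hypothetical loadings for a 3-item factor:

```python
import numpy as np

def composite_reliability(loadings) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    errors = 1 - lam**2  # error variance of each standardized item
    return lam.sum()**2 / (lam.sum()**2 + errors.sum())

def average_variance_extracted(loadings) -> float:
    """AVE = mean of squared standardized loadings."""
    lam = np.asarray(loadings)
    return float((lam**2).mean())

loadings = [0.55, 0.62, 0.74]  # hypothetical standardized loadings
print(composite_reliability(loadings), average_variance_extracted(loadings))
```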
Linear Regression Analysis
The linear regression model is shown in detail in Table 4. The adjusted R-squared of 0.475 means that the independent variables explain 47.5% of the variation of the dependent variable; the remaining 52.5% is due to variables outside the model and random error.
Linear regression analysis (Table 5) shows that:
• Awareness of risk (AR), with a beta of -0.372, has the strongest negative impact on online payment method decision behaviour (OPD); awareness of usefulness (AU), with a beta of 0.308, has the strongest positive impact on OPD.
• Awareness of ease of use (AEU), product uncertainty perception (PUP) and perceived behavioural control (PBC) have a moderate impact on OPD.
• Awareness of trust (AT), with a beta of 0.095, has the weakest impact on OPD.
Thus, hypotheses H1, H2, H3, H4, H6 and H7 are accepted. Hypothesis H5 was rejected at the confirmatory factor analysis (CFA) stage.
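Such standardized beta weights can be obtained by z-scoring both predictors and outcome before fitting an ordinary least-squares model; a sketch (the paper used SPSS/AMOS, and the variable names below are only labels):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def standardized_betas(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Standardized (beta) regression coefficients via z-scored OLS."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    return LinearRegression().fit(Xs, ys).coef_

# X columns would be the factor scores AU, AR, AT, AEU, PUP, PBC; y would be OPD.
```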
Discussion
Awareness of usefulness positively influences online payment method decision behavior through the benefits that online payment brings to consumers, such as faster payment and improved work efficiency. This result has been confirmed in the research of Schierz, Schilke, and Wirtz (2010). Awareness of risk in online payment has a negative impact on consumers' decision to make online payments. Consumers are aware of risks in online payment systems, such as faulty payments that cause financial loss, disruption of the payment process, and fear of disclosure of personal or confidential information. Regarding credit cards, the greater the perceived threat from hackers, the less willing consumers are to pay with this method. This result is consistent with the study of Wang, Wang, Lin, and Tang (2003), who found that the lack of security on the Internet is a barrier to online payment; with Thakur and Srivastava (2014), who studied risk perception in accepting online payments via the Internet and mobile; and with Napitupulu and Kartavianus (2014), who studied errors in online payment causing financial losses.
Awareness of trust, that is, the belief that online payments will be carried out as expected, was found to have only a weak effect on consumers' decision to make online payments; the perception of trust has not yet had a strong impact on promoting the use of online payment. The results of this study do not coincide with the findings of Lu, Yang, Chau, and Cao (2011), who found that awareness of trust plays an important role in promoting the intention to use services in Hong Kong, or with the study of Yousafzai, Pallister, and Foxall (2003), which confirmed that trust has an indirect effect on the level of risk associated with financial transactions. This can be explained by differences in the context and the object of study.
Awareness of ease of use also affects consumers' decision to make online payments. Ease of use is perceived when consumers feel the payment system is easy to understand and to operate, which affects the acceptance and use of consumer technology. This finding coincides with the results of Kalinic and Marinkovic (2016).
Subjective norms reflect the influence of close people such as family, friends and colleagues: when these people view a behavior positively, they contribute to consumers' online payment decision-making behavior. The results of this study are consistent with the research of Taylor and Todd (1995) and the study of Han, Hsu, and Sheu (2010).
Product uncertainty perception is a factor that has a negative impact on consumers' online payment behavior. Consumers cannot directly inspect products, and asymmetric information makes them unsure about the products when making online payment decisions. This result is consistent with the research of Pavlou and Dimoka (2008) and Mavlanova and Benbunan-Fich (2010). Perceived behavioral control has the weakest positive influence on consumers' behavior of deciding on online payment methods: when consumers are aware of their ability and eligibility to make payments online, the intention increases, and vice versa. This result is consistent with the studies of Stroborn, Heitmann, Leibold, and Frank (2004) and Kim, Tao, Shin, and Kim (2010).
Implications
Vietnam is considered a potential market for developing online payment services for consumers. In the era of Industry 4.0 and the move towards Society 5.0 (a super-smart society), developing online payment methods is extremely urgent. Based on the research results, the authors offer some suggestions for companies to improve, promote and develop consumers' online payment methods in the future.
Firstly, companies should continue to invest in and upgrade modern technology to keep customer information absolutely confidential. Enterprises in the Hanoi region in particular, and in Vietnam in general, need to lower the fraud rate in online payment and increase security solutions for electronic payment in the context of e-commerce development, so that customers can be confident that hackers cannot take advantage of fraudulent practices.
Secondly, all ATM cards on the market should be converted to EMV chip card technology. An EMV chip is an electronic chip with a processor like a computer, capable of storing and encrypting information with high security. In contrast to traditional cards, EMV chip cards generate a unique transaction code that is never repeated. If a consumer's card information is stolen from a store, a counterfeit card will never work, because the stolen transaction code cannot be reused and the card will be rejected. Currently, besides international banks such as ANZ, HSBC and CitiBank, Vietnamese banks are also on the way to fully converting to EMV cards. Many big banks have introduced international credit and debit cards with advanced EMV chips, such as VIB, VietinBank, VietcomBank, Techcombank, ACB and Sacombank. In the future, all banks must switch to chip cards for the security and safety of customers.
Thirdly, when adverse situations occur, such as customers losing money or payment errors that cause financial damage, banks and financial institutions need to improve their service activities and handle issues quickly to build customer confidence and peace of mind in using online payment methods.
Finally, credit institutions operating in the field of online payment that are linked to e-commerce sites, supermarkets, convenience stores and restaurants must ask their partners to increase the transparency of their products to help consumers trust them. In addition, not only e-commerce sites but also other units that use online payment methods need to regulate product images and limit the quality-related risks of buying online. A tool should be developed that allows brands to provide unlimited product images and videos on the application or website. Thereby, consumers can more easily decide to choose an online payment method.
Limitations
The study of factors affecting consumers' online payment method decision behavior still has some limitations. Firstly, the scope of the research is consumers in Hanoi; however, the respondents were mainly between the ages of 18 and 30, and other age groups were underrepresented, so the sample does not represent all consumers in Hanoi. Secondly, the authors surveyed consumers in Hanoi with a convenience sampling method, so it was difficult to achieve a high level of representativeness. Thirdly, consumers' online payment decision-making behavior is also influenced by many factors beyond the seven groups of factors examined. | 2020-10-28T19:21:05.859Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "c434dc0ce5e8080c98f08aceabb004aa75bfa683",
"oa_license": "CCBYNC",
"oa_url": "http://koreascience.or.kr/article/JAKO202029062616376.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "63dadbe4231f5ddcc0e742455fbd5d0f5c8e0005",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
245579479 | pes2o/s2orc | v3-fos-license | Design of Resources Monitoring and Controlling System on Aircraft Maintenance Project in XYZ MRO
⎯ Maintenance Repair and Overhaul (MRO) XYZ is a company engaged in aviation, especially in aircraft maintenance. MRO XYZ can work on the maintenance of several types of aircraft and their components in accordance with the certifications and permits it holds. The vital aspects that must be managed in an aircraft maintenance project include the engineers involved in the maintenance process and the man-hours calculation system, which must be able to account for every detail of the work in an aircraft maintenance project. In addition, other resources such as consumable material and raw material must also be monitored and controlled, so that the management team can keep the project's cost of goods sold (COGS) as optimal as possible and increase profitability. This research provides stakeholders in the aircraft maintenance process with solutions for meeting vital information needs that must be continuously monitored; based on this information, the proposed system is able to control the ongoing aircraft maintenance process in accordance with regulations.
I. INTRODUCTION
The development of Information Technology (IT) in various sectors, especially aviation, has become an integral part of business objectives. Business plans organized by management require IT support. The role of IT in supporting aircraft maintenance business processes is also considered in the standardization and certification of companies engaged in aviation and aerospace. This makes IT implementation important, especially in aviation companies and their aircraft maintenance divisions (Maintenance, Repair & Overhaul/MRO).
Aircraft maintenance is an important part of the business processes of aviation companies in general. An aviation company must have a special division that handles all aircraft maintenance processes, and some companies specialize in aircraft maintenance only; these are commonly referred to as MROs. Examples of MROs in Indonesia are Garuda Maintenance Facility (GMF AeroAsia) and Merpati Maintenance Facility (MMF), along with several smaller-scale MROs [2][3][4].
The business processes in aircraft maintenance can differ and be specific to each MRO, but in general they follow the rules and standards stipulated by the Indonesian Government (through the Ministry of Transportation) and international aviation authorities such as the European Aviation Safety Agency (EASA). The vital aspects of aircraft maintenance include the engineers involved in the maintenance process and the man-hours calculation system, which must be able to account for every detail of the work in a single aircraft maintenance project.
Engineers are the major resource in aircraft maintenance, with expertise in specific areas such as airframe, powerplant, electrical, radio, and instrument. These areas of expertise are further categorized by aircraft type, such as Boeing 737-200, 737-300, 737-400, and 737-500. Engineers may only do work that matches the skills and licenses they hold, so there must be controls that enforce and restrict this.
The man-hours in aircraft maintenance should also be monitored accurately, because man-hours are the defining aspect of cost calculation and one of the aspects that determine the quality of the maintenance work. In a routine aircraft maintenance project, the breakdown of job details can reach thousands of items, and the man-hours of each job must be accurately recorded until the maintenance process is completed. These man-hours are then summed to obtain the total man-hours of one project and can serve as a reference for how far the project has progressed. Man-hours are therefore a vital aspect that must be monitored in actual terms and can assist top-level management in determining aircraft maintenance business strategy. The cost of goods sold (COGS), i.e., the cost of an aircraft maintenance project, is also highly determined by the use of consumable material and raw material during the project, so there must be planning, management, and control over all material usage. A well-measured COGS greatly helps top-level management in setting a sales price strategy that maximizes profit for the company.
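As an illustration of the cost logic described above (the paper does not give an explicit formula, so this breakdown is an assumption), the project COGS can be seen as labor plus material costs:

```python
def project_cogs(man_hours: float, hourly_rate: float,
                 consumables_cost: float, raw_material_cost: float) -> float:
    """COGS of one maintenance project: labor plus materials (illustrative breakdown)."""
    labor_cost = man_hours * hourly_rate
    return labor_cost + consumables_cost + raw_material_cost

def profit_margin(selling_price: float, cogs: float) -> float:
    """Fraction of the selling price retained as profit."""
    return (selling_price - cogs) / selling_price

cogs = project_cogs(man_hours=1200, hourly_rate=45, consumables_cost=8000, raw_material_cost=15000)
print(cogs, profit_margin(selling_price=95000, cogs=cogs))
```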
Based on the need to monitor and control job restrictions for engineers, man-hours, and the other resources used during the aircraft maintenance process, it is necessary to implement an IT-based system that can help the project leader/planner and top-level management determine a good strategy in aircraft maintenance.
A. Basic concepts of system development
System development can be defined as a standardized development process: a set of activities, methods, best practices, deliverables, and automated tools that system developers and project managers use to develop and continuously improve information systems and software (Indrajit, 2002).
B. Basic principles of system development
The development of information systems is an activity to produce computer-based information systems that solve organizational problems or take advantage of opportunities. Some basic principles of system development are (Indrajit, 2002):
1. Owners and users of the system must be involved
2. Use a problem-solving approach
3. Determine the stages of development
4. Set standards for consistent development and documentation
5. Do not be afraid to cancel or change the work environment
6. Break problems down into small parts
7. Design the system for growth and development
C. System Development Life Cycle (SDLC)
The System Development Life Cycle (SDLC) is a classical methodology used to develop, maintain and use information systems [1]. Since the work follows a sequential pattern and is done top-down, the SDLC is often known as the waterfall approach. Activities flow one way from the beginning until the project is completed.
1) Policies and Planning System
The system planning process aims to plan the system projects that will be developed. This process is carried out by system planning staff in consultation with the steering committee. It consists of the following stages:
1. Assess the purpose of the system to be built
2. Identify system projects
3. Set the goals of the system to be created
4. Observe the constraints that arise during system creation
2) Analysis System
The analysis stage is the decomposition of a whole information system into its component parts, with a view to identifying and evaluating problems, opportunities, obstacles and expected needs so that improvements can be proposed.
3) Design System
System design can be divided into two parts: general system design, also called logical design, and detailed system design, also called physical design. 1. General system design. Its purpose is to give the user a general picture of the new system; it lays the groundwork for the detailed design. 2. Detailed system design. This stage is a more detailed elaboration of the design produced in the previous stage.
4) Selection System
This stage selects the hardware and software for the information system.
5) Implementation System
The system implementation stage is when the system is put in place, ready to operate. This stage also includes writing program code.
III. RESULT AND DISCUSSION
Before the technical design is carried out, based on the results of the analysis of the existing business processes, several changes are made. The expected business processes include the following:
A. Process of creating projects
This process serves to plan the aircraft maintenance work, covering taskcard data, man-hours calculation and facility usage, up to the assignment of a dedicated team. The create project form can be seen in Figure 1.
B. Jobcard Process
The jobcard process serves to execute or start the work to be done by mechanics and engineers. This form displays all jobcard information, including the required skill, the estimated man-hours, the work area (the location where the work is done), a description of the work, and the materials and tools needed for it. This process can only be performed by an engineer who holds the appropriate aircraft maintenance license. The jobcard process can be seen in Figure 2.
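The fields of the jobcard form and the license check described above can be sketched as a simple data structure (field names are illustrative, not taken from the actual system):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class JobCard:
    """One unit of work in a maintenance project, mirroring the form fields above."""
    task_id: str
    required_skill: str            # e.g. "airframe", "powerplant", "electrical"
    estimated_man_hours: float
    work_area: str
    description: str
    materials: List[str] = field(default_factory=list)
    tools: List[str] = field(default_factory=list)
    actual_man_hours: float = 0.0
    status: str = "open"

    def can_be_started_by(self, engineer_licenses: List[str]) -> bool:
        # Only engineers licensed for the required skill may start the job.
        return self.required_skill in engineer_licenses
```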
C. Work Progress Report
The work progress report displays information on the ongoing progress of the work. This form contains data on the ongoing project and all job progress details, making it easier for the supporting team to monitor the progress of the work being done by mechanics and engineers. The work progress report can be seen in Figure 3.
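One plausible way to derive an overall progress figure from the jobcards (an assumption; the paper does not specify its formula) is to compare actual against estimated man-hours:

```python
def project_progress(jobcards) -> float:
    """Progress as completed vs. planned man-hours.
    jobcards: iterable of (estimated_man_hours, actual_man_hours) pairs."""
    estimated = sum(e for e, _ in jobcards)
    completed = sum(min(a, e) for e, a in jobcards)  # cap overruns at the estimate
    return completed / estimated if estimated > 0 else 0.0

print(project_progress([(8.0, 8.0), (4.0, 1.5), (6.0, 0.0)]))  # ~0.53
```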
D. Release To Service
A Release to Service certificate is issued by the MRO company as the basis for the completion of the aircraft maintenance and repair process, and it authorizes that the aircraft is eligible to fly after the repair process is complete. The certificate of release to service can be seen in Figure 4.
E. Profit and Loss Project
The profit and loss statement, also called the income statement, is probably the most important and most common of the three essential projections in standard business plan financials. Tracking profit and loss requires detailed records of project-related income and expenses. The project profit and loss can be seen in Figure 5.
IV. CONCLUSION
The conclusions of this study are as follows. The Design of Resources Monitoring and Controlling System on Aircraft Maintenance Project in XYZ MRO can control and authorize any work performed, based on the engineer's area of expertise.
The system can control the distribution and use of consumable material and raw material while the aircraft maintenance project is in progress.
The system can also provide real-time reports on all key resource movements, such as engineer man-hours, the status of each task in the aircraft maintenance project, and the comparison between estimated/planned man-hours and actual man-hours. | 2021-12-31T16:10:30.533Z | 2021-10-15T00:00:00.000 | {
"year": 2021,
"sha1": "ece01e7bcf53e0be33356dd6afa8f86b91fe7a4f",
"oa_license": "CCBYSA",
"oa_url": "https://iptek.its.ac.id/index.php/jps/article/download/11150/6245",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "72815807859e524166971d062e9f4454ee30da2a",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
6836646 | pes2o/s2orc | v3-fos-license | Positive role of cell wall anchored proteinase PrtP in adhesion of lactococci
Background The first step in biofilm formation is bacterial attachment to solid surfaces, which depends on the cell surface physico-chemical properties. Cell wall anchored proteins (CWAP) are among the known adhesins that confer adhesive properties to pathogenic Gram-positive bacteria. To investigate the role of CWAP of non-pathogenic Gram-positive bacteria in the initial steps of biofilm formation, we evaluated the physico-chemical properties and adhesion to solid surfaces of Lactococcus lactis. To be able to grow in milk, this dairy bacterium expresses a cell wall anchored proteinase, PrtP, for the breakdown of milk caseins. Results The influence of the anchored cell wall proteinase PrtP on microbial surface physico-chemical properties, and consequently on adhesion, was evaluated using lactococci carrying different alleles of prtP. The presence of the cell wall anchored proteinase on the surface of lactococcal cells resulted in an increased affinity to solvents with different physico-chemical properties (apolar and Lewis acid-base solvents). These properties were observed regardless of whether the PrtP variant was biologically active or not, and were not observed in strains without PrtP. Cells with anchored PrtP displayed a significant increase in adhesion to solid glass and polytetrafluoroethylene surfaces. Conclusion The results obtained indicate that the exposure of an anchored cell wall proteinase PrtP, and not its proteolytic activity, is responsible for greater cell hydrophobicity and adhesion. The increased bacterial affinity to polar and apolar solvents indicated that exposure of PrtP on the lactococcal cell surface could enhance the capacity to exchange attractive van der Waals interactions, and consequently increase adhesion to different types of solid surfaces and solvents.
Background
In natural aquatic populations, bacteria often live in biofilms, which may be described as matrix-enclosed bacterial communities attached to a substratum [1,2]. Biofilm formation allows bacteria to survive in environments that would be lethal to their planktonic counterparts [3,4]. A key event in biofilm formation is bacterial adhesion to a surface, which depends on factors such as preconditioning of the support by macromolecules and the physico-chemical interactions between the bacterial cells and the substratum [5,6].
In the dairy industry, biofilms usually occur on surfaces that are in contact with fluids, and they may be a source of bacterial contamination leading to technological and economic problems [7][8][9]. Nevertheless, protective biofilm formation on food industry workshop surfaces can also be beneficial, because their presence may effectively modify the physico-chemical properties of substrates and, as such, reduce adhesion of undesirable planktonic microorganisms [10,11]. Furthermore, multiplication of undesirable organisms may be inhibited by nutrient competition or by the synthesis of antagonistic compounds such as acids, bacteriocins, or surfactants [12,13]. In recent years, biofilms of lactic acid bacteria have received considerable attention for their potential use in the settlement of a competitive flora [14,15]. Lactococcus lactis is the most frequently used dairy bacterium for fermentation and preservation purposes. Lactococci do not have any detrimental effect on the sensory properties of processed foods, making them a suitable candidate for the creation of protective biofilms.
Various studies have demonstrated that bioadhesion depends mainly on a combination of surface physico-chemical properties (such as Lewis acid-base character, the capacity to exchange attractive van der Waals interactions, and global surface charge) of both the cell and the solid substratum [5,16,17]. Concerning bacterial surfaces, these properties depend on the molecular cell surface composition. It was shown that the L. lactis ssp. lactis LMG9452 surface is composed mainly of proteins and polysaccharides and has a hydrophilic character [18]. However, it is still unclear which lactococcal cell surface molecules influence particular physico-chemical properties and adhesion.
Cell wall anchored proteins (CWAP) are among the known bacterial cell surface components having adhesive properties [19]. This group includes adhesins and proteins influencing coaggregation, e.g., the fibronectin and collagen binding proteins of Staphylococcus aureus and S. schleiferi [20,21], or the glucan binding protein of Streptococcus mutans [19]. Concerning L. lactis, three surface proteins were attributed to the same group of CWAP: i) the chromosomally-encoded sex factor CluA [22], ii) the plasmid-encoded proteinase NisP [23], and iii) the plasmid-encoded cell wall serine proteinase PrtP (also called lactocepin) [24], which initiates proteolytic degradation of milk casein [25]. Like other CWAP, the lactococcal PrtP proteinases are characterized by a signal sequence at the N-terminus, which is cleaved during secretion across the membrane, and a LPXTG sorting motif followed by a hydrophobic membrane-spanning region and a positively charged tail at the C-terminus [25]. After protein translocation through the membrane, the sortase enzyme mediates cleavage of LPXTG such that the threonine carboxyl group is linked to the cross-bridges in the peptidoglycan layer [26]. Deletion of the C-terminal end containing the LPXTG motif results in complete secretion of the truncated proteinase [27]. Fusion of the C-terminal LPXTG-containing domain of PrtP with several reporter proteins resulted in the surface exposure of the fusion proteins [28,29].
The role of bacterial cell wall anchored proteins in adhesion has been studied mainly in connection with their possible roles in virulence [21]. Previous studies addressed specific binding to host cell components like platelets, albumin, fibronectin, or collagen [20,21,30]. However, the role of cell wall anchored proteins of non-pathogenic bacteria in cell surface physico-chemical properties and adhesion to inert surfaces has not been examined.
The aim of this work was to evaluate the influence of the proteinase PrtP on hydrophobic/hydrophilic characteristics, Lewis acid-base properties, electrical charge and adhesive capacity of lactococci.
Determination of the hydrophobic/hydrophilic and Lewis acid-base characters
We used derivatives of L. lactis ssp. cremoris strain MG1363: PRTP+ (PrtP anchored and active), PRTP* (PrtP anchored and inactive) and PRTP− (MG1363 carrying the vector plasmid pGKV2 without the prtP gene) as control strain. The strain MG1363 does not express other surface-exposed proteinases, although several membrane and cytoplasmic proteases are present [31]. As it was previously shown that expression of various proteinase derivatives from the same promoter resulted in the same amount of proteinase [32], it was assumed that the proteinase expression in the PRTP+ and PRTP* strains was identical.
The MATS kinetic experiment was used to determine the dynamic interaction of lactococci carrying different alleles of the prtP gene (PRTP−, PRTP+ and PRTP*) with polar (chloroform and ethyl acetate) and apolar (hexadecane and decane) solvents (Fig. 1). To extract the maximal affinity to solvents (A_max) and the initial slope (A_max·k) values, the experimental data presented in Fig. 1 were fitted using the following exponential expression: A(t) = A_max·(1 − e^(−k·t)), where A(t) is the affinity as a function of time; A_max, the maximal affinity; A_max·k, the initial slope; and t, the time in seconds. The maximal affinity and initial slope values are presented in Table 1.
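Such a fit can be reproduced with a standard non-linear least-squares routine; a sketch with hypothetical affinity readings (the real data are those of Fig. 1):

```python
import numpy as np
from scipy.optimize import curve_fit

def affinity(t, a_max, k):
    """A(t) = A_max * (1 - exp(-k*t)); the initial slope is A_max * k."""
    return a_max * (1 - np.exp(-k * t))

t = np.array([10, 20, 30, 40, 50, 60])   # mixing times in seconds
a = np.array([40, 62, 75, 83, 88, 91])   # hypothetical affinities (%)

(a_max, k), _ = curve_fit(affinity, t, a, p0=[90.0, 0.05])
print(a_max, a_max * k)                   # A_max and the initial slope A_max*k
```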
For cells expressing the anchored proteinase (PRTP* and PRTP+), maximum affinity to the mono-polar solvents (chloroform and ethyl acetate) was reached within 20 to 40 seconds of interaction, while maximum affinity to the apolar solvents (hexadecane, decane) was attained after more than 60 seconds. PRTP* had a higher initial slope of affinity to ethyl acetate (12.1) than PRTP+ (3.2). The difference between the two strains was slightly less pronounced in the case of chloroform: 10.5 for PRTP* and 6.6 for PRTP+ (Table 1).
Our results showed that the control strain PRTP− exhibited very low affinity for all four solvents (maximal affinity < 20%), independently of their different physico-chemical properties (apolar, Lewis-acid or Lewis-base). The low affinity for apolar solvents (i.e., A_max for hexadecane and decane was less than 10%) indicated the lack of hydrophobic properties of the PRTP− control strain (Table 1).
The hydrophobic character of the two strains expressing the anchored proteinase (PRTP+ and PRTP*) was different: both exhibited higher affinity to all solvents (P < 0.05; Fig. 1). The highest affinity for all solvents was observed in strain PRTP*, encoding the anchored inactive PrtP. The presence of the anchored proteinase generally resulted in an increase of bacterial affinity for the apolar solvents hexadecane and decane, since A_max values were in the range of 89-95% for PRTP+ and PRTP*, in comparison to values of less than 10% for the control strain PRTP− (P < 0.05; Table 1). This suggests that anchored PrtP, active or not, markedly increased cell hydrophobicity.
Evaluation of cell wall electrical charge
[Figure 1 caption: Affinities of MG1363 derivatives carrying different prtP alleles to the four solvents used in the kinetic MATS analysis: chloroform (a), hexadecane (b), decane (c) and ethyl acetate (d).]
The same L. lactis strains, carrying different prtP alleles, were used to evaluate the global cell surface charge. The electrophoretic mobility (EM) of the three bacterial strains (PRTP−, PRTP+, and PRTP*) at pH values ranging from 2 to 7 is presented in Fig. 2. We observed that all strains were highly electronegative, and an isoelectric point could not be determined in the pH range explored. In all cases the strains remained negatively charged, in agreement with previous reports [18,33]. At pH values exceeding 3, the presence of the anchored proteinase significantly reduced the negative charge of the microbial cells (P < 0.05, Fig. 2). This effect was maximal for cells expressing the anchored active proteinase: EM values of PRTP+ were higher than -2 × 10^-8 m^2 V^-1 s^-1, in comparison to less than -3 × 10^-8 m^2 V^-1 s^-1 for the control strain PRTP− (Fig. 2).
Evaluation of adhesion to solid surfaces
We used glass and PTFE to study the influence of anchored PrtP on lactococcal adhesion to solid surfaces. The physico-chemical properties of these two solid substrates were evaluated by contact angle measurements. The van der Waals (γ_LW), Lewis-base (γ−) and Lewis-acid (γ+) components of the surface tension (γ_S) of glass and PTFE are presented in Table 2. In agreement with previously published data [6], glass exhibited a strong hydrophilic character (Θ_water = 10°). The hydrophilic nature of glass is mainly due to its Lewis-base character (γ− = 55 mJ·m^-2). The test indicated that PTFE was almost apolar (γ_AB ≈ 0) and exhibited a very low van der Waals character (γ_LW = 15 mJ·m^-2), indicating a low interacting capacity.
Adhesion to glass and PTFE of lactococci expressing different prtP alleles was examined at two NaCl concentrations, 1.5 mM and 150 mM. We observed a statistically significant increase (P < 0.05) of adhesion for strains expressing anchored PrtP, independently of their proteolytic activity and the surface (3-6 fold for PRTP+ and 8-10 fold for PRTP*; Table 3). This increase was significantly (P < 0.05) higher in the 150 mM NaCl solution.
Discussion
The aim of this work was to study the involvement of the cell wall proteinase PrtP in the physico-chemical mechanisms of adhesion of L. lactis to solid surfaces. In our experimental conditions, the presence of the CWAP PrtP, active or inactivated, on the cell surface modified the physico-chemical surface properties as well as microbial adhesion to hydrophobic (PTFE) or hydrophilic (glass) surfaces (the proteinase is active in PRTP+ and inactive in PRTP*). The efficient adhesion of the strain expressing the inactivated cell surface-anchored PrtP indicated that the presence of PrtP on the cell surface, and not its proteolytic activity, is important in this phenomenon.
We ruled out possible effects of the vector itself on adhesion: the physico-chemical properties and adhesion of MG1363 with or without the vector pGKV2 [34], used to clone prtP, were essentially the same (results not shown). The proteolytic activity of the cloned PrtP proteinases used in this work is comparable to that of a wild-type strain, suggesting that their expression and anchoring could also be comparable [35]. This allows us to suggest that adhesion via PrtP may also occur in natural strains. Moreover, it has been shown that PrtP expression in milk is more efficient than in M17 medium, used in this study [31]. Therefore, we can expect that in a dairy environment the effect of PrtP on cell surface properties would be even more pronounced.
The adhesive behavior of strains bearing surface-anchored PrtP could be explained by changes in cell surface physico-chemical properties. Electrophoretic mobility measurements revealed that the presence of the proteinase on the lactococcal cell surface is correlated with a reduced global negative charge. The high negative charge and the absence of an isoelectric point in the pH range we examined could be linked to the presence of (lipo)teichoic acid in the cell wall, which contains many phosphate groups with a pKa of around 2 [18]. The clear reduction of negative charge in cells displaying PrtP may be explained by an increase of the N/P (protein/phosphate) ratio of the bacterial cell wall [18]. The ability of PrtP to bind cations such as Ca++ may also have an influence on the global surface charge [36].
We observed more efficient adhesion of the PRTP* strain to solid (glass and PTFE) surfaces as compared to the PRTP+ strain (p < 0.05; Table 3). Moreover, we observed a difference in PRTP* adhesion between high (150 mM) and low (1.5 mM) ionic strength conditions. Since both the bacterial (Fig. 2) and the glass or PTFE [37] surfaces are negatively
charged, this could be explained by a stronger electrostatic repulsion at low salt concentration. However, the differences in adhesion between the PRTP− and PRTP* strains were more pronounced at high salt concentration, the conditions where repulsive electrostatic interactions are strongly diminished ([6], Table 3). We therefore suggest that electrostatic interactions do not play a predominant role in PrtP-mediated adhesion.
The MATS test showed that strains bearing anchored PrtP had increased affinity for all solvents tested, independently of their nature, i.e., polar and less hydrophobic (ethyl acetate and chloroform) or apolar and more hydrophobic (decane and hexadecane). Furthermore, adhesion of strains bearing anchored PrtP increased regardless of whether the substrate was PTFE, which is apolar and hydrophobic, or glass, which is polar and hydrophilic [38]. Based on these results, we hypothesize that the presence of PrtP increases the capacity of the cell to exchange attractive van der Waals interactions; these interactions would increase the bioadhesion of lactococci displaying anchored PrtP to different types of surfaces (e.g., inert, polar or apolar, or organic).
The affinity of inactive PRTP* to solvents and to solid surfaces was higher in comparison with its active counterpart PRTP+. This effect could be explained by degradation of the main lactococcal autolysin AcmA by PRTP+ [39]. AcmA activity was reported to significantly increase bacterial adhesion [40,41]. Degradation of AcmA by PrtP could diminish its activity and consequently its adhesive properties. Alternatively, the greater affinity of strains carrying inactive PrtP to solvents and to solid surfaces may be explained by the absence of self-cleavage. Such self-cleavage is characteristic of an active proteinase and consequently could result in a lower number of molecules present on the cell surface [34].
We observed a very low affinity of lactococci to apolar solvents, consistent with previous results using L. lactis strain LMG9452 [18]. The presence of the anchored proteinase thus increased strain hydrophobicity. A hydrophobic character was reported as a feature of a number of Gram-positive bacteria which possess cell wall anchored proteins [42,43]. The increase of hydrophobicity by cell wall anchored proteins may be a common property of Gram-positive bacteria. Nevertheless, other factors (such as polysaccharides) could mask this effect. For example, in the case of the hydrophilic L. lactis strain LMG9452, the surface is dominated by polysaccharides rather than proteins [18].
Surface proteins other than those anchored via an LPXTG motif may also affect bioadhesion. For example, autolysins of Staphylococcus epidermidis were recently shown to affect primary attachment to solid surfaces, and the autolysin of Listeria monocytogenes contributes to adhesion to eukaryotic cells [44,45]. The presence of PrtP on the lactococcal cell surface increases adhesion to glass and to PTFE about 10-fold. The ability of a single protein to change adhesion to this extent may also indicate that there are few other proteins present on the lactococcal cell surface, or that these proteins do not affect adhesion. Two confirmed lactococcal proteins with cell wall anchor domains are the sex factor protein CluA [22] and the plasmid-encoded NisP [23], which is not present in MG1363.
The CluA dependent cell aggregation phenotype is reportedly poorly expressed unless a co-integrate is formed between the sex factor and a lactose plasmid [22], so we consider it unlikely that CluA is a significant adhesion factor in our experimental system.
Conclusion
We have shown that the cell wall anchored PrtP proteinase, in addition to its role in milk casein degradation, is responsible for greater cell hydrophobicity and adhesion to solid surfaces. The increase of adhesion to polar and apolar solid surfaces and solvents indicates that attractive van der Waals interactions may be responsible for PrtP-mediated lactococcal adhesion. The results obtained indicate that the presence of PrtP, and not its proteolytic activity, is responsible for the changes in these cell surface physico-chemical properties. We suggest that PrtP or its derivatives can be used as a tool to construct strains with increased adhesion that form protective biofilms.
Bacterial strains and growth conditions
The Lactococcus lactis ssp. cremoris strain MG1363 [46] was used as host for three isogenic plasmids: pGKV2 [47] (the strain carrying this plasmid is called here PRTP−); pGKV552 (a derivative of pGKV2 containing the cloned prtPI gene [34]; the strain carrying this plasmid is called here PRTP+); and pGKV1552 (a derivative of pGKV552, in which PrtPI is inactivated by an in-frame point mutation of Asp-30 to Asn-30 in the catalytic site [34]; the strain carrying this plasmid is called here PRTP*). Plasmid pGKV2 contains the replication origin of the cryptic L. lactis WG2 plasmid pWV01 and the erythromycin and chloramphenicol resistance genes [47]. Bacteria were cultivated in M17 medium [48] supplemented with 5% glucose at 30°C. When needed, 5 μg/ml of erythromycin was added.
MATS (Microbial adhesion to solvents)
The method is based on comparing the affinity between microbial cells and mono-polar or apolar solvents [49]. A polar solvent can be an electron acceptor or an electron donor. The solvents used in this study were chloroform (an electron-acceptor solvent), hexadecane (a non-polar solvent), ethyl acetate (an electron-donor solvent) and decane (a non-polar solvent). To evaluate the kinetics of bacterial adhesion to solvents, overnight-grown bacteria were harvested by centrifugation (7000 g, 4°C, 10 min), washed twice using 150 mM NaCl and re-suspended in a 150 mM NaCl solution. The high NaCl concentration was used to avoid charge interference. The initial optical density (OD_i) of this suspension was adjusted to around 0.8 at 400 nm. The suspension was divided into six 2.4 ml samples, and 0.4 ml of a solvent was added to each of them. The samples were mixed for 10, 20, 30, 40, 50 and 60 seconds with a vortex-type agitator (Heidolph, Schwabach, Germany). The mixtures were allowed to stand for 15 min for complete phase separation. The aqueous phase was then removed and the final optical density (OD_f) was measured. The microbial adhesion to each solvent was calculated as (OD_i − OD_f)/OD_i × 100 and expressed in percent. Each experiment was performed in triplicate using independently prepared cultures.
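The adhesion formula above translates directly into code; a sketch over one hypothetical kinetic series (the OD values are invented for illustration):

```python
def mats_affinity(od_initial: float, od_final: float) -> float:
    """Adhesion to a solvent as the percentage of cells leaving the aqueous phase."""
    return (od_initial - od_final) / od_initial * 100

od_i = 0.80
for t, od_f in zip(range(10, 70, 10), [0.48, 0.30, 0.20, 0.14, 0.10, 0.07]):
    print(f"{t} s: {mats_affinity(od_i, od_f):.1f} %")
```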
Electrophoretic mobility
After overnight growth, bacteria were harvested by centrifugation (7000 g, 4°C, 10 min), washed twice with 1.5 mM NaCl and suspended in the same buffer at a final cell density of 10^7 cfu/ml. The pH of the suspension was adjusted in the range of pH 2 to 7, as needed, by adding nitric acid or potassium hydroxide (Sigma, Saint-Quentin, France). Electrophoretic mobility was measured with an automated zetameter (Zetaphoremètre II, CAD Instrumentations, Paris, France) using an electric field of 50 V. Each experiment was performed in triplicate using three independent cultures. Glass and PTFE slides were washed five times with Milli-Q water (Millipore, Saint-Quentin-en-Yvelines, France).
Bacterial adhesion to solid surfaces
Slides were incubated in 30 ml of bacterial suspension (OD_600 = 0.8) in 1.5 and 150 mM NaCl solutions in Petri plates for 1 hour, then rinsed five times (care was taken to prevent slides from drying between washes), and stained for 15 min with a 0.01% (w/v) acridine orange water solution (Sigma, St. Louis, MO). Fluorescently stained cells were visualized and images were captured with an epifluorescence microscope (Leica DMLB, Tokyo, Japan, equipped with a 10× objective). Ten images of each slide were taken and analyzed with the UTHSCSA ImageTool program. Microbial adhesion was estimated as the percentage of the solid surface covered by bacteria. Each value presented is the mean of at least three independent sets of experiments.
Statistical analysis
Multifactor ANOVA analyses were performed with the statistical analysis program Statgraphics Plus 4.1 (Manugistics, Rockville, MD).
Authors' contributions
OH and CLG performed the MATS, adhesion to solid surfaces and electrophoretic mobility measurements and helped to draft the manuscript; VJ and MNBF participated in the design of the study and the interpretation of results; GB participated in the plasmid constructions, the design of the study and critical reading of the manuscript; SK and RB conceived the study and drafted the manuscript. All authors read and approved the final manuscript. | 2014-10-01T00:00:00.000Z | 2007-05-02T00:00:00.000 | {
"year": 2007,
"sha1": "7db502c630a7153167d3617629ee5acb64400ddd",
"oa_license": "CCBY",
"oa_url": "https://bmcmicrobiol.biomedcentral.com/track/pdf/10.1186/1471-2180-7-36",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9d954a6361a4fdd00d4267c3d29018ccffcf8f94",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
69963227 | pes2o/s2orc | v3-fos-license | Predicting Urban Dispersal Events: A Two-Stage Framework through Deep Survival Analysis on Mobility Data
Urban dispersal events are processes where an unusually large number of people leave the same area in a short period. Early prediction of dispersal events is important in mitigating congestion and safety risks and making better dispatching decisions for taxi and ride-sharing fleets. Existing work mostly focuses on predicting taxi demand in the near future by learning patterns from historical data. However, such methods fail in cases of abnormality, because dispersal events with abnormally high demand are non-repetitive and violate common assumptions such as smoothness in demand change over time. Instead, in this paper we argue that dispersal events follow a complex pattern of trips and other related features in the past, which can be used to predict such events. Therefore, we formulate the dispersal event prediction problem as a survival analysis problem. We propose a two-stage framework (DILSA), where a deep learning model combined with survival analysis is developed to predict the probability of a dispersal event and its demand volume. We conduct extensive case studies and experiments on the NYC Yellow taxi dataset from 2014-2016. Results show that DILSA can predict events in the next 5 hours with an F1-score of 0.7 and an average time error of 18 minutes. It is orders of magnitude better than the state-of-the-art deep learning approaches for taxi demand prediction.
Introduction
An urban dispersal event is the process where an abnormally large crowd leaves the same area within a short period. Dispersal events can be observed after large gathering events, such as concerts, sporting events, or protests. Unexpected dispersal events often cause public safety risks, congestion, and high demands of public transportation resources (e.g., taxis) within a short period. Therefore, early prediction of large dispersal events as well as the crowd size are of great importance to a number of different parties. (1) Public safety officials and traffic administrators can benefit from such a technique since they could allocate resources and make plans to mitigate potential risks or congestion.
(2) Transportation stakeholders such as taxi drivers and fleet managers can improve profit by dispatching more taxis to such events if the events can be predicted in advance.
Thus, dispersal event prediction is non-trivial and necessary. While most of the large events have schedules, the time of dispersion often has a high level of uncertainty due to varying occasions, attendees, and other factors such as weather. Moreover, many events are not planned or have much higher attendance than expected, such as social protests and gatherings. In addition, many events are not public and only known to special interest groups and it is not possible for the public to collect schedules of such events. For example, large groups of Pokemon Go game players gather for special events in the game. Social activities organized through instant messaging tools are not public, either. Finally, collecting and verifying schedule information from various sources often requires costly human labor and cannot be done in a fully automated manner.
In recent years many big mobility datasets, such as taxi trip data and For-Hire Vehicle (FHV) requests (e.g., Uber), have become available. Automatic dispersal event prediction, therefore, has become feasible. A typical solution is to build models to predict taxi demand using the above datasets and identify high-demand locations as dispersal events. Related research shows taxi demand has a highly predictable pattern when predicting the near future (Zhao et al. 2016; Xu et al. 2017; Zhang et al. 2016; Zhao et al. 2016; Moreira-Matias et al. 2013; Davis, Raina, and Jagannathan 2016). However, that is often true only for regular demand prediction rather than abnormally high demand. In such a case, the assumptions of pattern repetitiveness are violated and the methods will fail to provide a timely and accurate prediction. In particular, their ability to make long-term forecasts of abnormally high demand is weak due to assumptions that demand changes smoothly (is auto-correlated) over time.
In this paper, we focus on predicting such non-periodic and unexpected large dispersal events with abnormally high taxi demand. Specifically, given the historical taxi trip records and other relevant features (e.g., weather, POI), we predict (1) when and where abnormally high taxi demand will occur, and (2) the volume of demand in the predicted time span of the dispersal event.
To address the limitations of related work, we propose an alternative solution. Firstly, we treat dispersal event prediction as a "Survival Analysis" problem, where we learn a model to predict the probability of "death" (event occurs) at each location in the future. Secondly, we argue that there is evidence of demand abnormality during the time leading to it, which can be used to train the survival analysis model. The intuition is that dispersal events are often caused by some forms of gatherings, which can indicate future abnormally high demand. Figure 1 (a) shows an example of abnormally high pick-ups for a concert event at Madison Square Garden in New York City. The dashed and solid lines represent the anomaly scores (Neill 2009) of the drops and the pick-ups, respectively. The large anomaly in pick-ups towards the end of the event follows an anomaly in the drops earlier. However, such patterns are not always as explicit. Fig. 1 (b) shows such a case around McKittrick Hotel in Manhattan, where there are no abnormal drops preceding the dispersal event. In such cases, finding the right signals for predicting dispersal events is more challenging.
In this paper, we make the following novel contributions: (1) We propose a two-stage framework, using deep neural networks to predict dispersal events. We incorporate various features including spatial, temporal, weather, and Point of Interest (POI) features, in addition to recent taxi pick-up and drop observations. (2) We formulate the dispersal event prediction problem as a survival analysis problem (Miller Jr 2011). In the two-stage prediction framework, we predict the time of abnormal demand using survival analysis and then predict the demand volume. We call our method DILSA, DIspersaL event prediction using Survival Analysis. We evaluate our methods using real-world data from New York City. Our evaluations show our method identifies dispersal events with an F1-score of 0.7, while the error for predicting the time of the dispersal event is 18 minutes for a 5-hour prediction period. Also, our method predicts the pick-up demand in case of anomaly with superior accuracy compared to a baseline.
The rest of the paper is organized as follows: In the next section we discuss the related work, followed by problem formulation and our proposed computational solution. Then, we present the evaluations and conclude the paper.
Related Work
Prior related work includes (1) event detection and forecasting, (2) taxi demand prediction, and (3) survival analysis.
Event Detection and Forecasting: Event detection has been widely studied in various domains, including public health, urban computing, and social network analysis. The works (Kulldorff 1997; Kulldorff et al. 2005; Neill 2009) and other recent works on event detection (Li, Xiong, and Liu 2012; Hong et al. 2015) use already observed counts. An event is defined as a region with significantly higher counts, such as disease reports or the number of taxi drops. Social media posts and geo-tagged tweets have been used as well to detect and forecast events such as social unrest and protests (Zhou and Chen 2014; Chen and Neill 2014; Zhang et al. 2017). Regions and time windows where the frequency of certain keywords exhibits abnormal changes are identified as events. These works do not use mobility data. The dynamic patterns of the events, such as gathering or dispersing, are not captured.
Other works (Khezerlou et al. 2017; Hoang, Zheng, and Singh 2016) use traffic flow data to detect gathering events, and Vahedian et al. (2017) use destination prediction to predict gathering events. However, such methods are not applicable to dispersal events, because trajectories and traffic flow are observed only after such events begin.
Taxi demand prediction has been studied closely in recent years, owing to access to public taxi datasets (Zhao et al. 2016). To the best of our knowledge, none of the proposed methods directly addresses the prediction problem in the case of anomalies. State-of-the-art methods for predicting taxi demand use historical data and time series analysis. (Yao et al. 2018) propose a deep learning framework which captures spatial and temporal dependencies to predict taxi demand. (Xu et al. 2017) formulate an LSTM network to learn the regular pattern of taxi demand. (Zhao et al. 2016) show that regular taxi demand is highly predictable and test different algorithms to approach the maximum accuracy. Another line of work used spatial clustering (DBSCAN) to predict areas with a high density of demand; such areas, despite having high demand, are part of the regular pattern. (Moreira-Matias et al. 2013) used streams of taxi data as time series to predict taxi demand in the next 30-minute period. (Davis, Raina, and Jagannathan 2016) used time series analysis to solve the demand prediction problem, giving recommendations to drivers. (Mukai and Yoden 2012) used a simple multi-output ANN to predict demand, using features created from recent demand, time and weather information.
The above-mentioned research aims at learning the regular pattern of taxi demand in the absence of anomalies. Since regular demand is highly predictable, in this paper we take on the harder challenge of predicting anomalous taxi demand, which we believe is of greater importance.
Survival analysis is the analysis of duration of time until an event. It has been applied in engineering as well as health practices (Street 1998), for which it was originally developed (Miller Jr 2011). To the best of our knowledge, this is the first time survival analysis is used in the context of urban event prediction. In this paper, we propose to use a deep Artificial Neural Network to predict the probabilities of survival. Predicting the probabilities of survival at different time points using a common internal representation (the hidden nodes of a deep ANN) allows the learned model to share information across the time points, resulting in better predictive results.
Concepts and Definitions
We define a spatio-temporal field Z = (S, T) as a two-dimensional geographical region S paired with a period of time T. S is partitioned by a grid; each grid cell l_1, l_2, ..., l_|S| represents a distinct location in the geographical region. T is partitioned into fixed-length time-steps. Given Z, the location of any moving object can be mapped into a grid cell in S and a time-step in T. For instance, the pick-up and drop location and time of a taxi trip can be represented by (l_s, t_s, l_d, t_d), where (l_s, t_s) are the source location and time and (l_d, t_d) correspond to the destination.

The pick-up count C^p_{l,t} of grid cell l at time t is the number of trips with source (l, t); similarly, the drop count C^d_{l,t} is the number of trips with destination (l, t). Since the drop and pick-up counts exhibit a periodic pattern, we define baseline counts to represent the expected counts: the pick-up baseline B^p_{l,t} of grid cell l at time t is the average of past pick-up counts at l at the same time of day, and the drop baseline B^d_{l,t} is defined similarly.

A spatio-temporal region R = (S_R, T_R) is a rectangular sub-field of S paired with a continuous subset of T. The counts and baselines defined above can be aggregated over any spatio-temporal region. To study abnormally high taxi demand, we are interested in regions where C^p_R is significantly higher than B^p_R. We assume C^p_R follows a Poisson distribution and test the null hypothesis H_0: C^p_R ~ Poisson(B^p_R) against the alternative H_1: C^p_R ~ Poisson(q B^p_R) with q > 1, using the expectation-based likelihood ratio test of (Neill 2009):

LLR(R) = C^p_R log(C^p_R / B^p_R) + B^p_R − C^p_R if C^p_R > B^p_R, and 0 otherwise.   (1)

Definition 1 A dispersal event is a spatio-temporal region R whose pick-up count is significantly higher than its baseline, i.e., LLR(R) exceeds the significance threshold α.

Locations have specific attributes other than the pick-up and drop counts; we consider two of them: weather and the Point of Interest (POI) vector. Each location has a daily maximum and minimum temperature, an average wind speed and a total precipitation, which impact traffic and people's movement. In addition, a location contains several POIs that can be categorized into functions: one grid cell in S might contain many hotels and few shopping centers, while another might contain many shopping centers. The distribution of POI categories over space impacts people's movement. We therefore define a POI vector V^l, where v^l_i is the number of places of category i at l.
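To make Eq. 1 and the baseline definition concrete, the following Python sketch (our own illustration, not code released with the paper) computes a cell's pick-up baseline and anomaly score; the default of 48 steps per day assumes the 30-minute time-steps used later in the evaluation.

```python
import numpy as np

def llr_score(count, baseline):
    """Expectation-based Poisson likelihood ratio (Eq. 1, after Neill 2009).
    Positive evidence only when the observed count exceeds the baseline."""
    if baseline <= 0 or count <= baseline:
        return 0.0
    return count * np.log(count / baseline) + baseline - count

def pickup_baseline(counts, t, steps_per_day=48):
    """Average pick-up count of the same time-of-day slot on previous days.
    `counts` is a 1-D array of per-time-step pick-up counts of one cell."""
    same_slot = counts[t % steps_per_day : t : steps_per_day]
    return float(same_slot.mean()) if same_slot.size else 0.0
```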
Problem Statement
Given: a spatio-temporal field Z = (S, T), historical trip records and weather information in Z, POI vectors of S, a significance threshold α, the current time t_c and a target period (t_c, t_g], Find: all dispersal events together with (1) the start time t_e ≤ t_g of each dispersal event and (2) the demand volume C^p_{T_g} in case of a predicted dispersal event, where T_g = [t_e, t_g]. Our objective is to predict t_e and C^p_{T_g} as accurately as possible.
Computational Solution Overview
Per the problem statement, we predict (1) the start time of dispersal events and (2) the demand during dispersal events. We propose the framework shown in Fig. 2. In the learning phase, we extract features from historical data and use them to train an event predictor based on survival analysis as well as a demand predictor. In the prediction phase, we follow two steps: first, we use the event predictor to predict the start time of the event; then, we predict the pick-up count for the period of the event.
Survival Analysis
Survival analysis analyzes the expected time until an event happens (Miller Jr 2011). The event could be a death or a failure, or, in this paper, a dispersal event. The analysis is primarily done using the survival function

S(t) = Pr(T_e > t),   (2)

where T_e is the time of the event; in Eq. 2, S(t) is the probability of the event not having happened by t (the subject has survived at t). Another commonly used function is the hazard function h(t), the rate of the event at time t given that it has not occurred by then:

h(t) = −S'(t) / S(t).   (3)

−S'(t) is the rate with which S(·) decreases at t; it is divided by S(t), the remaining mass of survival probability, because the rate is conditional on the survival of the subject at t. We use this analysis to calculate the remaining time to dispersal events.
Feature Extraction
To do supervised learning, we need a training set with instances of inputs and outputs. In this section, we define the input variables, i.e. the building blocks of the feature vector of the supervised learning framework. Let (l, t_c) be the current location and time. We build the variables through the following definitions:

Definition 2 The time profile of (l, t_c) is Q^l_{t_c} = ⟨ d_y, d_w, t_c − t_d ⟩, where d_y and d_w are the day of the year and the day of the week for t_c, and t_d is the first time-step of the current day.
Definition 3 The weather profile of (l, t_c) is W^l_{t_c} = ⟨ ω, η, ζ, θ_max, θ_min ⟩, where ω is the average daily wind speed, η is the total rainfall of the day, ζ is the total snowfall of the day, and θ_max and θ_min are the maximum and minimum temperatures of l at t_c.
Definition 4 The daily profile of (l, t_c) is defined as

D^l_{t_c} = ⟨ Σ_{t=t_d..t_c} C^p_{l,t},  Σ_{t=t_d..t_c} C^d_{l,t},  Σ_{t=t_d..t_c} B^p_{l,t},  Σ_{t=t_d..t_c} B^d_{l,t} ⟩.

The daily profile is a vector containing the sums of the pick-up and drop counts and baselines since the start of the current day. It is important because a gradual gathering during the day can result in an accumulation of people in l at t_c which might not be visible in individual time-steps. Next, we define the recent profile.

Definition 5 The recent profile of (l, t_c) is defined as

R^l_{t_c} = ⟨ C^p_{l,t}, C^d_{l,t} : t ∈ (t_c − τ, t_c] ⟩,

where τ is a parameter.
The recent profile contains all the pick-up and drop counts of the most recent τ time-steps at the current location. We define the target profile as follows:

Definition 6 The target profile of (l, t_c) is defined as

G^l_{t_g} = ⟨ B^p_{l,t_c+1}, ..., B^p_{l,t_g} ⟩,

where (t_c, t_g] is the target period, i.e. the time period for which we are going to make predictions. The target profile holds the expected (baseline) pick-up counts of the prediction target period. We define the anomaly profile as follows:

Definition 7 The anomaly profile of (l, t_c) is defined as

F^l_{t_g} = ⟨ LLR(l, t_c + 1), ..., LLR(l, t_g) ⟩,
where (t_c, t_g] is the target period, the same as in Definition 6. The anomaly profile consists of the anomaly scores of l during the target period, computed by Eq. 1. These values are available during training but not during testing; at test time we use predicted anomaly scores instead.
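The profile definitions translate directly into array manipulations. The sketch below is a hypothetical implementation, assuming the per-cell counts and baselines are stored as 1-D NumPy arrays indexed by time-step; it reuses the llr_score function sketched earlier.

```python
import numpy as np

def daily_profile(cp, cd, bp, bd, t_d, t_c):
    """Definition 4: sums of counts and baselines since the start of the
    current day. cp/cd/bp/bd are 1-D per-time-step arrays for one cell."""
    s = slice(t_d, t_c + 1)
    return np.array([cp[s].sum(), cd[s].sum(), bp[s].sum(), bd[s].sum()])

def recent_profile(cp, cd, t_c, tau):
    """Definition 5: pick-up and drop counts of the last tau time-steps.
    Assumes t_c >= tau - 1 so the slice stays in range."""
    s = slice(t_c - tau + 1, t_c + 1)
    return np.concatenate([cp[s], cd[s]])

def anomaly_profile(cp, bp, t_c, t_g):
    """Definition 7: LLR anomaly scores over the target period (t_c, t_g]."""
    return np.array([llr_score(cp[t], bp[t]) for t in range(t_c + 1, t_g + 1)])
```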
Building the Training Sets
As in Fig. 2, we train three estimators: the survival function estimator (f_s), the anomaly profile estimator (f_a) and the dispersal event pick-up predictor (f_e). In this section, we describe how their training sets are obtained. We propose to use estimators that maintain an internal state, such as recurrent neural networks. Thus, the order of instances in the training set matters: it must match the order of the real-time data. Ensuring this requirement is straightforward for f_a and f_s, since they are trained using all the instances; however, it is not straightforward for f_e, because it is not trained on all instances. Later in this section, we demonstrate how this requirement is satisfied.
As mentioned earlier, we treat the dispersal event prediction problem as a survival problem in which the dispersal event is the death event; accordingly, the output vector for f_s is a vector of survival probabilities. To this end, the survival function is defined as

S(t) = 1 for t < E_p and S(t) = 0 for t ≥ E_p,

where E_p is the time of the dispersal event. In our proposed framework, we train a model to predict S(t). For location l = (a, b) and time t_c, we call l* the surrounding area of l, defined as the rectangular area bounded by grid cells (a − λ, b − λ) and (a + λ, b + λ), where λ is a parameter. We use the following input vector for f_s:

x_s = ⟨ Q^l_{t_c}, W^l_{t_c}, D^l_{t_c}, G^l_{t_g}, F^l_{t_g}, V^l, R^{l*}_{t_c}, S(t_c) ⟩,

that is, x_s consists of the time, weather, daily, target and anomaly profiles and the POI vector of (l, t_c), the recent profile of (l*, t_c), plus the current value of the survival function S(t_c).
For each input vector x_s at location l and time t_c, we use the output vector y_s = ⟨ S(t_c + 1), ..., S(t_g) ⟩.
Ideally, we would like a labeled event list for the training phase; however, such lists are not available. Therefore, we use an algorithm that obtains S(·) for a given time and location (l, t_c) by determining whether any dispersal event occurs, or is underway, in the future of t_c. This procedure is presented in Alg. 1. We put limits on the length of a dispersal event, assuming that events shorter than e_min or longer than e_max are not interesting. We then test every sub-period between t_c − e_max and t_g that is longer than e_min, using Def. 1. The survival value is set to one before, and zero after, the start of the dispersal event.
For example, consider Fig. 3, which shows the dispersal event of Fig. 1 (b). The first vertical line is the current time and the second is the starting time of the dispersal event; the survival function is set to 1 before the start of the event and to 0 afterwards. Alg. 1 calculates the survival function. The interval (t_c − e_max, t_g] has an exponential number of sub-periods; however, we are only interested in the earliest dispersal event, because the survival function is zero afterwards. Alg. 1 takes advantage of this fact and runs in O(nm), where n is the length of time being searched (end − start) and m is the number of different lengths a sub-period can have (e_max − e_min).
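The following sketch captures our reading of Alg. 1; the helper region_llr, which scores a candidate sub-period with Eq. 1, is an assumption of ours, as is the exact loop structure.

```python
def survival_labels(region_llr, t_c, t_g, e_min, e_max, alpha):
    """Sketch of Alg. 1. `region_llr(start, end)` is assumed to return the
    anomaly score (Eq. 1) of the sub-period [start, end). We scan for the
    *earliest* dispersal event, since S(.) is 1 before the first event
    start and 0 afterwards; this gives the O(n*m) behaviour.
    Returns the labels S(t) for t in (t_c, t_g]."""
    event_start = None
    for start in range(t_c - e_max, t_g):            # n candidate starts
        for length in range(e_min, e_max + 1):       # m candidate lengths
            if region_llr(start, start + length) > alpha:
                event_start = start
                break
        if event_start is not None:
            break                                    # earliest event found
    return [1.0 if event_start is None or t < event_start else 0.0
            for t in range(t_c + 1, t_g + 1)]
```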
x_s and y_s are obtained for every spatio-temporal grid cell in Z; they constitute the training set of y_s = f_s(x_s) for estimating the survival function. We discuss below how we use f_s to predict the start time of dispersal events. Although the anomaly profile F^l_{t_g} is available during training, it is not available during testing, because we do not have the true future pick-up counts. Thus, while we train f_s using the true anomaly profile, we have to use a predicted anomaly profile in the prediction phase. We use f_a to predict the anomaly profile, with the input vector

x_a = ⟨ Q^l_{t_c}, W^l_{t_c}, D^l_{t_c}, R^l_{t_c}, V^l, F^l_{(t_c−τ, t_c]} ⟩,   (11)

where F^l_{(t_c−τ, t_c]} is the anomaly profile of the recent time period. Eq. 11 means that we use the time, weather, daily and recent profiles and the POI vector, in addition to recent anomaly values, to predict the future anomaly profile.
Next, we predict the pick-up counts in case of dispersal events, using the estimator f_e. We use an input vector with the same elements as x_s, and the output vector

y_e = ⟨ C^p_{l,t_c+1}, ..., C^p_{l,t_g} ⟩.

Although f_s and f_e have the same feature vectors, they are not trained on the same sets; this is a key point in our proposed approach. In the training set of f_e we only include the data instances that correspond to a dispersal event, because we will only use f_e to predict the pick-up counts in case of abnormally high demand. Alg. 2 builds the training set for f_e. To make sure f_e learns a full cycle of a dispersal event in its internal state, for each event we include all the instances starting from the time when the event is first observed in the target period (line 5). For example, let t be the current time and the target period be 4 time-steps long; if the survival function is ⟨1, 1, 1, 0⟩, then the instances of time-steps [t, t + 4) are included in the training set (lines 6-9).
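A minimal sketch of how Alg. 2 might select instances, under our own assumptions about the containers involved (both hypothetical):

```python
def build_fe_training_set(instances, survivals):
    """Sketch of our reading of Alg. 2. `instances[t]` is the (x, y) pair
    of time-step t and `survivals[t]` the survival vector y_s at t, both
    keyed by time-step. We start collecting when an event first enters the
    end of the target period (..., 1, 0) and keep collecting until the
    event start is reached, so f_e sees a full cycle of the event."""
    training = []
    collecting = False
    for t in sorted(instances):
        y_s = survivals[t]
        if not collecting and len(y_s) >= 2 and y_s[-1] == 0 and y_s[-2] == 1:
            collecting = True
        if collecting:
            training.append(instances[t])
            if y_s[0] == 0:        # the event has started; cycle complete
                collecting = False
    return training
```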
DILSA: Dispersal Event Prediction using Survival Analysis
The training sets built in the previous section contain temporal and spatial dependencies. Thus, we use a deep artificial neural network with convolutional layers to capture spatial dependencies and LSTM layers to capture temporal dependencies; Fig. 4 shows the employed structure.

The first step in our framework is to obtain the anomaly profile of the target period using f_a, to be used in the input vector of f_s. Then 1 − S(·) gives the estimated cumulative probability of the event. Assuming S(0) = 1, we calculate the probability of the event at a future time using the hazard function:

H(t) = ( S(t − 1) − S(t) ) / S(t).   (13)

Eq. 13 calculates the cumulative hazard of the event happening between t − 1 and t given that it has not happened as of t − 1: the drop in the survival function from t − 1 to t is divided by the total remaining amount S(t), given that S(·) monotonically decreases. We predict an event when the value of H(·) exceeds a threshold γ, which is tuned using a tuning set. Once a dispersal event is predicted, we predict the pick-up count for the event using f_e.

Since our estimators maintain an internal state, we must make predictions in the same order as in training. This is not a problem for f_a and f_s, because they were trained on all the instances, which is the same order as the real-time data. For f_e, Alg. 2 establishes a specific order that must also be followed in the prediction phase. In the training phase, we included instances from the point when the start of the event first appears in the target period, i.e. when the survival function turns to 0 in the last time-step of the target period (S(t_g) = 0 and S(t_g − 1) = 1). Therefore, we must start predicting the pick-ups using f_e once Eq. 13 predicts an event at the last time-step of the target period. However, Eq. 13 might not predict the occurrence of the event until the start time gets closer; in such a case, f_e will not have the correct internal state. Therefore, to bring f_e to its correct internal state, we feed it the input vectors of previous time-steps before the input vector of the current time. For example, suppose we are at time t_c, the target period is 4 time-steps long, and we predict a dispersal event at time t_c + 2. For f_e to make predictions for t_c + 2 and t_c + 3, we feed the input vectors of times t_c − 2 and then t_c − 1 to f_e; f_e then has the correct internal state to make predictions.
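A small sketch of how Eq. 13 might be applied to a predicted survival curve; the renormalization of S(t_c) to 1 and the epsilon guard are our own assumptions.

```python
def predict_event_step(survival_pred, gamma, eps=1e-9):
    """Apply Eq. 13 to a predicted survival curve. `survival_pred[i]`
    approximates S(t_c + 1 + i); S(t_c) is taken as 1. Returns the offset
    of the first future time-step whose cumulative hazard exceeds gamma,
    or None if no dispersal event is predicted in the target period."""
    s_prev = 1.0
    for i, s_t in enumerate(survival_pred):
        s_t = max(s_t, eps)                  # guard the division
        if (s_prev - s_t) / s_t > gamma:     # H(t) of Eq. 13
            return i + 1                     # event i+1 steps after t_c
        s_prev = s_t
    return None
```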
Alg. 3 shows the proposed dispersal event demand predictor. First, the anomaly profile is obtained and used to predict the survival function (lines 1-2). Then H(·) is calculated for future periods and compared with the threshold γ to predict dispersal events (lines 4-9); ŷ_s[t] = 1 means a dispersal event is predicted t time-steps after the current time. In case of a predicted event, the internal state of f_e is corrected and the pick-up counts are predicted (lines 10-13).

The taxi trip records are obtained from the New York City Mayor's Office. The weather data is obtained from the National Centers for Environmental Information, from two weather stations, Central Park and La Guardia Airport. The Point of Interest data is obtained from the Google Maps Places API, which assigns POIs to one or more of 129 categories. We partition the New York City area into a 32×32 grid with a cell size of 400×400 meters and use 30-minute time-steps. Every record is mapped into the grid to obtain counts and baselines. The weather profile of each spatio-temporal grid cell is an average of the measurements reported by the two stations, weighted inversely by their distances. We train the models on year 2014 and evaluate on 2015 and 2016. All datasets are standardized by subtracting the minimum and dividing by the maximum value of each feature; the test sets are standardized using parameters from the training set.

Table 1 shows our parameter settings; in Table 1, t_g − t_c is the duration of the target period. Our deep learning network uses 4 convolutional layers with a window size of 9×9, 2 LSTM layers of 69 memory cells, and 10 output nodes. We compare f_e with a state-of-the-art deep learning method for taxi pick-up prediction, DMVST-Net (Yao et al. 2018). Moreover, we use three additional baselines for comparison. The first baseline applies simple thresholding to the survival function instead of Eq. 13, i.e. if the survival function drops below a threshold σ, the event is predicted; we call this baseline DIL. We tune both γ and σ using a week's data from 2015 (γ = 2.95 and σ = 0.1). We also compare with Multi-Layer Perceptron (MLP) and Logistic Regression (LgR) models.
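For illustration, a hypothetical Keras sketch of an estimator with the stated layer sizes; the input shapes, filter counts, activations and loss are assumptions of ours, since the text does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_estimator(seq_len=8, patch=9, channels=4, n_out=10):
    """Sketch of the Conv+LSTM estimator of Fig. 4: 4 convolutional layers
    with 9x9 windows, 2 LSTM layers of 69 memory cells, 10 output nodes.
    Each time-step carries a small spatial patch around the target cell."""
    inp = layers.Input(shape=(seq_len, patch, patch, channels))
    x = inp
    for _ in range(4):                       # spatial dependencies
        x = layers.TimeDistributed(
            layers.Conv2D(16, 9, padding="same", activation="relu"))(x)
    x = layers.TimeDistributed(layers.Flatten())(x)
    x = layers.LSTM(69, return_sequences=True)(x)   # temporal dependencies
    x = layers.LSTM(69)(x)
    out = layers.Dense(n_out, activation="sigmoid")(x)  # e.g. S(t_c+1..t_g)
    model = models.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
    return model
```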
The estimators were trained using Adam, the stochastic gradient descent method proposed by (Kingma and Ba 2014), with 20 epochs for f_a and f_s and 40 epochs for f_e.
Case Studies
We apply the proposed method to a full dataset from 2016. Here, we present two of the predicted events.
On March 19th, 2016, we predicted a dispersal event at 1:00 PM around an exhibition center at Pier 92/94 in Manhattan; the prediction was made 2.5 hours before, at 11:30 AM. Public records show a home design exhibition at the time. Fig. 5 (b) shows the predicted survival curve at 11:30 AM; the red vertical line is the predicted time of the dispersal event, inferred by Alg. 3. Fig. 5 (c) shows the counts predicted by the baseline and by the proposed method: the proposed method successfully predicts the increase, while DMVST-Net stays close to the historical average.
We also predicted a dispersal event around 12:30 PM on June 26th, 2016, at the Jacob K. Javits Convention Center, 2.5 hours before. Public records show there was a food show at the convention center at the time. Fig. 6 (b) shows the predicted survival curve and the event prediction time, indicated by the vertical red line. Fig. 6 (c) shows that the proposed method outperforms DMVST-Net in predicting the pick-up counts in this case.
Experiments
In this section, we first evaluate the prediction performance of DILSA, i.e. the performance of Alg. 3 in predicting events, comparing our results with four baselines. The baseline DMVST-Net predicts taxi demand; we apply Def. 1 to its predicted values to determine whether there is a dispersal event. Table 2 shows that DILSA outperforms all the baselines in terms of F1-score (0.7) and time error (18 minutes). Time error is the average difference between the true and predicted start times of correctly predicted events; a prediction is considered a true positive if the predicted event period overlaps with the true event period. The results show that the proposed survival analysis method predicts dispersal events with high accuracy. Although DMVST-Net demonstrates high precision, its recall is extremely low, meaning that models of the regular pattern fail to predict accurately in case of abnormally high demand. Moreover, the results show that using the cumulative hazard function of Eq. 13 in Alg. 3 has a considerable impact on the model's performance.

Second, we compare the demand predictor f_e to DMVST-Net in case of dispersal events. The baseline was trained on the same period as in the previous experiment; f_e was trained on the dataset obtained by Alg. 2 over the same period of 2014. Fig. 7 shows the Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) over future time-steps: our proposed method outperforms the baseline in case of a dispersal event. This experiment shows that methods designed to capture the regular pattern of taxi demand are not reliable in case of dispersal events.
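For reference, a sketch of the overlap-based scoring just described, under our own assumptions about how events are represented:

```python
def evaluate_events(pred_events, true_events, step_minutes=30):
    """Events are (start, end) pairs in time-steps; a prediction counts as
    a true positive if its period overlaps a true event. Time error is the
    mean |true start - predicted start| over correctly predicted events."""
    tp, time_errs, matched = 0, [], set()
    for ps, pe in pred_events:
        for i, (ts, te) in enumerate(true_events):
            if i not in matched and ps <= te and ts <= pe:  # overlap
                tp += 1
                matched.add(i)
                time_errs.append(abs(ts - ps) * step_minutes)
                break
    precision = tp / len(pred_events) if pred_events else 0.0
    recall = tp / len(true_events) if true_events else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    mean_err = sum(time_errs) / len(time_errs) if time_errs else None
    return f1, mean_err
```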
Lastly, we evaluate the impact of different features on the performance of the models, using Root Mean Squared Error (RMSE) as the measure; the x-axis represents future time-steps, and the letters R, D and P represent the recent profile, the daily profile and the POI vector. The results show that including the POI vector reduces the error, and that including the daily profile has no significant effect on f_e but improves the performance of the survival function predictor f_s.
Conclusions
In this paper we solved the problem of predicting dispersal events, in which a large number of people leave the same area within a short period. Predicting such events has managerial and business value for various stakeholders. We cast the problem as one of predicting abnormally high demand: taxi demand in unexpected dispersal events deviates from regular patterns and violates assumptions made by previous techniques (e.g., auto-correlation, periodicity). We argued that dispersal events follow a complex pattern of trips and other related features, and we formulated and learned such patterns to predict dispersal events. Specifically, we formulated dispersal event prediction as a survival analysis problem and proposed a two-stage framework (DILSA), in which a supervised model predicts the probability of "death", i.e. the dispersal event, and the demand is then predicted in case of a predicted event. We conducted extensive case studies and experiments on a real dataset from 2014-2016; our method outperformed the baselines and predicted dispersal events with an F1-score of 0.7 and a time error of 18 minutes. | 2019-02-19T14:08:18.909Z | 2019-05-03T00:00:00.000 | {
"year": 2019,
"sha1": "7463123ad5128b6e7d743f7ce5052a9bfba6771e",
"oa_license": null,
"oa_url": "https://ojs.aaai.org/index.php/AAAI/article/download/4455/4333",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "b128920973b79b35958ee627c63c57d0e0ab855d",
"s2fieldsofstudy": [
"Engineering",
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
10521745 | pes2o/s2orc | v3-fos-license | Chiral Symmetry Breaking for Fundamental Fermions
This talk gives an overview of recent developments in the study of the non-perturbative fermion-boson vertex in quenched QED, aimed at ensuring that the fermion propagator satisfies the Ward-Takahashi identity, is multiplicatively renormalizable, agrees with perturbation theory at weak couplings, and has a critical coupling for dynamical mass generation that is strictly gauge independent. The use of such a vertex may lead to a realistic calculation of t(tbar) condensates as the source of electroweak symmetry breaking. (Talk given at The Advanced Study Institute on Frontiers in Particle Physics, Cargese, Ajaccio, France, August 1-13, 1994.)
INTRODUCTION
Massive fermions have long been a problem in gauge theories. Unification of the electromagnetic and weak forces was once hindered by the fact that the introduction of mass terms broke the gauge invariance of the theory. This problem was solved by the introduction of the Higgs field: spontaneous breakdown of the SU(2) × U(1) symmetry takes place, the gauge bosons gain mass, and the masses of the fermions are generated through their Yukawa interaction with the Higgs field. However, there has been widespread dissatisfaction with this mechanism, since the masses are not predictable but must be fixed by experiment. Studying the non-perturbative behaviour of gauge theories provides an alternative: if the interactions are strong enough, they are capable of generating masses for the particles dynamically, even if they start with zero bare mass. Moreover, experiment tells us that the top quark is very heavy, so the Yukawa coupling g_t for the top-Higgs interaction is O(1); one then naturally expects non-perturbative effects to become important. Indeed, it has been suggested [1] that the top quark may acquire mass non-perturbatively through four-fermion interactions, the Higgs then being viewed as a condensate of the top and the antitop. However, in attempts to include the effects of the gauge-boson exchange term, one loses the gauge invariance of physical quantities. Of course, physical quantities must be gauge independent, and this motivates the study of how to achieve this in non-perturbative calculations. Quenched QED provides a toy model in which to study this problem, as we discuss.
DYSON-SCHWINGER EQUATIONS
Our starting point is the set of Dyson-Schwinger equations. These are an infinite system of coupled equations for all the Green's functions, which are non-perturbative in nature. Their structure is such that the 1-point function is related to the 2-point function, the 2-point function is related to the 3-point function, etc. ad infinitum. As it is impossible to solve the complete set of equations, one has to truncate this infinite tower in a physically acceptable way to reduce them to something that is soluble. A familiar way to do this is perturbation theory. However, if one wishes to generate masses for particles, a non-perturbative way has to be sought.
The fermion propagator S(p) satisfies the Dyson-Schwinger equation

S^{-1}(p) = [S^0(p)]^{-1} − i e^2 ∫ d^4k/(2π)^4 γ^μ S(k) Γ^ν(k, p) Δ_{μν}(q),  q = k − p,   (1)

where the quantities with the superscript '0' are bare quantities, and the others are full ones. Quenched QED corresponds to making the assumption that the full photon propagator Δ_{μν}(q) can be replaced by its bare counterpart Δ^0_{μν}(q). This limit is achieved by regarding the number of fermion flavours N_f as a mathematical parameter, which is set equal to zero. As an example, to begin with, we make a further simplification by replacing the full vertex by the bare one.
Eq. (1) then reduces to

S^{-1}(p) = [S^0(p)]^{-1} − i e^2 ∫ d^4k/(2π)^4 γ^μ S(k) γ^ν Δ^0_{μν}(q)   (3)

in what is known as the rainbow approximation, where the full fermion propagator is written as S(p) = F(p^2) / ( p̸ − M(p^2) ), with F(p^2) the wavefunction renormalization and M(p^2) the mass function.
Eq. (3) is a matrix equation which corresponds to two equations, one for M and one for F. We can project out these equations by multiplying Eq. (3) by p̸ and by 1 in turn and taking the trace, obtaining two coupled integral equations for M(p^2) and F(p^2), Eqs. (4) and (5), where as usual α = e^2/4π. On carrying out the angular integrations, and setting the bare mass to zero, the equations are regulated by an ultra-violet momentum cutoff Λ.

It is easiest to solve these equations in the Landau gauge, where they decouple: F(p^2) is then simply 1. Moreover, there is a non-trivial solution [2] for the mass function M once the coupling exceeds the critical value α_c = π/3. This is best illustrated by plotting the Euclidean mass as a function of the coupling. In perturbation theory a fermion which starts massless remains massless order by order; in contrast, non-perturbative dynamics is able to generate masses for particles even if they start with zero bare mass.

However, there are problems. As the critical coupling corresponds to a change of phase, we expect it to be independent of the gauge parameter. But when one solves Eqs. (4) and (5) in different gauges, one finds that this is not the case, as depicted in Fig. 4. It is not difficult to trace the root of this problem: the full vertex of Eq. (1) has to satisfy the Ward-Takahashi identity for the fermion propagator to ensure its gauge covariance, but the bare vertex used in Eq. (3) does not obey this identity. Therefore, one should not expect physical outputs to be gauge independent when the input is not.
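For concreteness, the Landau-gauge mass equation that underlies this result can be written in its standard form; the display below is a reconstruction in the usual conventions (Euclidean momenta, zero bare mass), not a verbatim equation from the talk, but it is the form whose analysis yields the quoted α_c = π/3.

```latex
M(p^2) \;=\; \frac{3\alpha}{4\pi}\int_0^{\Lambda^2}\! dk^2\;
\frac{k^2\,M(k^2)}{k^2+M^2(k^2)}\;\frac{1}{\max(k^2,\,p^2)}
```

Linearizing in M and inserting the power-law ansatz M(p^2) ∝ (p^2)^{−s} gives s(1 − s) = 3α/4π, which admits real exponents only for α ≤ π/3; above this value the solutions become oscillatory and a dynamical mass is generated.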
THE VERTEX
We expect that any reasonable ansatz for the vertex should fulfill the following requirements: • It must satisfy the Ward-Takahashi Identity in all gauges.
• It must ensure that the fermion propagator of Eq. (1) is multiplicatively renormalizable.
• It must result in a critical coupling, at which mass is generated dynamically, that is gauge independent.
• It must be free of any kinematic singularities, i.e. it should have a unique limit when k^2 → p^2.
• It must have the same transformation properties as the bare vertex γ^μ under C and P.
Keeping in mind the form of the Ward-Takahashi identity, one can split the full vertex into two components, longitudinal and transverse:

Γ^μ(k, p) = Γ_L^μ(k, p) + Γ_T^μ(k, p),

where the transverse part of the vertex is defined by

q_μ Γ_T^μ(k, p) = 0,  with q = k − p and Γ_T^μ(p, p) = 0.

The Ward-Takahashi identity uniquely fixes the longitudinal part of the vertex, as shown by Ball and Chiu [4], to be

Γ_L^μ(k, p) = a(k^2, p^2) γ^μ + b(k^2, p^2) ( k̸ + p̸ ) (k + p)^μ − c(k^2, p^2) (k + p)^μ,

where

a(k^2, p^2) = (1/2) [ 1/F(k^2) + 1/F(p^2) ],
b(k^2, p^2) = (1/2) [ 1/F(k^2) − 1/F(p^2) ] / (k^2 − p^2),
c(k^2, p^2) = [ M(k^2)/F(k^2) − M(p^2)/F(p^2) ] / (k^2 − p^2).

However, the transverse part remains arbitrary. Ball and Chiu [4] enumerated a basis of eight independent tensors T_i^μ in terms of which the most general form for the transverse part of the vertex can be written:

Γ_T^μ(k, p) = Σ_{i=1..8} τ_i(k^2, p^2, q^2) T_i^μ(k, p).   (9)

We list here only those four tensors which we shall need later:

T_2^μ = [ p^μ (k·q) − k^μ (p·q) ] ( k̸ + p̸ ),
T_3^μ = q^2 γ^μ − q^μ q̸,
T_6^μ = γ^μ (k^2 − p^2) − (k + p)^μ q̸,
T_8^μ = −γ^μ k^ν p^λ σ_{νλ} + k^μ p̸ − p^μ k̸,

with σ_{νλ} = (1/2)[γ_ν, γ_λ]. The simplest choice is to take the transverse part to be zero. But Curtis and Pennington [5] showed that if the transverse part of the vertex is taken to be zero, the fermion propagator is no longer multiplicatively renormalizable. They suggested the following transverse part satisfying this requirement:

Γ_T^μ(k, p) = (1/2) [ 1/F(k^2) − 1/F(p^2) ] [ γ^μ (k^2 + p^2) − (k + p)^μ ( k̸ + p̸ ) ] / d(k^2, p^2),
where d(k^2, p^2) → k^2 for k^2 ≫ p^2. d(k^2, p^2) must be symmetric in k and p and free of kinematic singularities, leading to the proposal

d(k^2, p^2) = [ (k^2 − p^2)^2 + ( M^2(k^2) + M^2(p^2) )^2 ] / (k^2 + p^2).

The vertex specified by these equations is the Curtis-Pennington (CP) vertex.
BIFURCATION ANALYSIS
To test whether the CP vertex renders the critical coupling gauge independent, Atkinson et al. [6] recently suggested a bifurcation analysis to study the phase change near the critical coupling. This is a precise way to locate the critical coupling, compared with previous methods that rely on numerical calculations; in practice, the method amounts simply to discarding all terms that are quadratic or higher in the mass function M. Employing this procedure, and using the fact that at the critical coupling M(p^2) ∼ (p^2)^{−s} and F(p^2) ∼ (p^2)^{ν} in Eq. (1), one arrives at an equation for the exponent s valid in an arbitrary gauge. This equation has two roots for s between 0 and 1; bifurcation occurs when the two roots merge, at a point specified by ∂ξ/∂s = 0, and the bifurcation point defines the critical coupling α_c. Numerically, α_c = 0.933667 in the Landau gauge. For each value of the gauge parameter, these equations can be solved for ν, s_c and α_c. The solution found by Atkinson et al. [6] is displayed in Fig. 6, with the points for the bare vertex shown for comparison. One can see that the gauge dependence has been considerably reduced relative to the bare vertex.
However weak this variation, any gauge dependence shows that the CP vertex cannot be the exact choice.
CONSTRAINTS OF MULTIPLICATIVE RENORMALIZABILITY
To find a vertex that ensures the gauge independence of the critical coupling, we start by making three assumptions. Firstly, we demand that a chirally-symmetric solution should be possible when the bare mass is zero, just as in perturbation theory; this is most easily accomplished if the sum in Eq. (9) involves just i = 2, 3, 6 and 8. Secondly, we assume that the functions τ_i multiplying the transverse vectors of Eq. (9) depend only on k^2 and p^2, and not on q^2. Thirdly, we assume that the transverse part of the vertex vanishes in the Landau gauge; the motivation for this comes from the lowest-order perturbative calculation of the transverse vertex, which is satisfied by Eq. (11). These conditions fix the τ_i of Eq. (9). Multiplicative renormalizability of the wavefunction renormalization F(p^2) enables us to write τ_6 and the combination τ in terms of one function W_1(x) [7], with τ_6(k^2, p^2) given by −(1/2) times an expression built from W_1 and s_1, where τ is the combination of the τ_i given by

τ(k^2, p^2) = τ_3(k^2, p^2) + τ_8(k^2, p^2) − (1/2)(k^2 + p^2) τ_2(k^2, p^2),

and

s_1(k^2, p^2) = (k^2/p^2) F(k^2) + (p^2/k^2) F(p^2).
"year": 1994,
"sha1": "9e7efc3bcc07087d27c9a03adc4cefdc2032bf95",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/9410399",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "44e5da8dbdf1593690428aab61f2a45f218908d8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
542143 | pes2o/s2orc | v3-fos-license | Cancer immunotherapy: potential involvement of mediators
The description of a cell-free soluble anti-tumour factor by Carswell et al. in 1975 (Proc Natl Acad Sci USA, 72: 3666–3670) was followed by a long series of experimental and clinical investigations into the role of cell-free mediators in cancer immunotherapy. These investigations included research on the effects of macrophage-derived eicosanoids (cyclooxygenase and lipoxygenase derivatives of arachidonic acid), of monokines (such as tumour necrosis factor-α, interleukin-1 and granulocyte–macrophage colony-stimulating factor) and of lymphocyte products: interleukins and interferons. The investigations yielded information on the effects of various factors on macrophage and T-cell activation in vitro, on their direct anti-tumour properties against animal and human tumour cells in vitro, on their therapeutic effectiveness in tumour-bearing individuals, either alone or in combination with other therapeutic factors, and on their production by tumour cells. During recent years much effort has been dedicated towards the use of tumour cells transfected with cytokine genes in the preparation of cancer vaccines. Cyclooxygenase products (prostaglandins) were usually assumed to inhibit the expression of anti-tumour activity by macrophages, and an increase in their production in cancer patients was considered a poor prognostic index. Lipoxygenase products (leukotrienes) were assumed to exhibit anti-tumour activity and to induce production of IL-1 by macrophages. Interleukins 2, 4, 6, 7 and 12 and the interferons were extensively tested for their therapeutic effectiveness in experimental tumour models and in cancer clinical trials. The general conclusion on the use of cell-free mediators for cancer immunotherapy is that much still has to be done in order to assure effective and reproducible therapeutic effectiveness for routine use in the treatment of human neoplasia.
Introduction
The field of cancer immunotherapy began approximately 100 years ago with rather 'naive' attempts to use anti-tumour antibodies raised in various animals for treatment in human sarcoma patients. 1,2 At around the same time Coley was probably the first (1891, cited in reference 3) to suggest that cell-free filtrates of bacteria might possess anti-tumour activities.
Macrophages are known to produce and secrete derivatives of arachidonic acid and cytokines (such as TNF-α and interleukin-1 (IL-1)). T and B cells produce and release a long series of interleukins; both macrophage and T- and B-cell products are involved in the interaction between the immune system and tumour cells in a number of ways: direct anti-tumour activity, activation of immunocompetent cells in vivo and/or in vitro, and changes in their production in vivo in tumour-bearing hosts. Finally, it was shown that tumour cells themselves might secrete some of the above-mentioned products and develop means to resist the anti-tumour activity of various biologically active products.
The data accumulated on the activities of macrophage and T-cell products against tumour cells helped to devise certain immunotherapeutic protocols, first in experimental animal tumour systems and afterwards in human neoplasia. The immunotherapeutic protocols were based either on treatment with cell-free products alone or in combination with other treatments. They also helped in devising ways to activate immunocompetent cells in vitro for therapeutic use in vivo.
The aim of this review is to discuss the potential involvement of cell-free mediators in cancer immunotherapy. In reviewing their anti-tumour activities it should be mentioned that some of these mediators are also involved in inflammatory processes and that their production is closely interrelated.
Eicosanoids and Cancer
Cyclooxygenase products (prostaglandins) and lipoxygenase products have been used in experimental tumour systems and are involved in human neoplasia.
Prostaglandins
It has generally been assumed that prostaglandins (especially PGE2) inhibit the anti-tumour activity of macrophages. Thus indomethacin, an inhibitor of PGE2 synthesis, enhanced macrophage cytostatic activity in vitro against MOPC-315 murine plasmacytoma cells.4 Indomethacin stimulation of macrophage cytostasis was inhibited by PGE2,5,6 and this enhancement of anti-tumour activity was also reported in vivo against Ehrlich murine tumour cells.7-10 The effect of endogenous and exogenous prostaglandins on macrophage functions has been described: culture conditions that caused increased PGE2 production by activated macrophages resulted in inhibition of their tumoricidal activity, whereas production of high levels of PGE2 by resident and elicited macrophages was associated with an increase in their tumoricidal activity.11 In another work,12 it was reported that subcutaneous injection of polyacrylamide beads in mice induced a population of immature macrophages which became fully cytostatic to syngeneic P815 plasmacytoma cells when stimulated in vitro by LPS. Blocking of PGE synthesis by indomethacin prevented the effect of LPS; addition of PGE2 did not reverse the indomethacin effect but did inhibit the macrophage-mediated cytostatic activity.12 Suppression of macrophage-mediated tumour cytotoxicity was correlated with an increase in prostaglandin secretion by the macrophages of breast cancer patients.13,14 Elevated prostaglandin production in human breast cancer was considered a marker of high metastatic potential for neoplastic cells; the increase in PG production occurred early in the course of breast cancer and decreased later in the course of tumour development.15 It seems that PGE2 production by human monocytes is by a subset of cells other than those which produce IL-1.16 It should also be mentioned that cancer cells themselves produce PGE2; in this context it was reported that the amount of PGE2 released by cancer cells which metastasized into the liver of tumour-bearing rats was higher than that of cells metastasizing into the kidney.17 Increases in prostaglandin levels in cancer patients were also reported: of plasma prostaglandin F levels in tumours of the female genital tract18 and of plasma 6-oxo-prostaglandin F1a in gynecological tumours.19,20 On the assumption that prostaglandins can suppress the development and expression of effector cells, clinical studies were initiated to determine the effect of piroxicam (a prostaglandin antagonist) in patients with recurrent unresectable squamous cell carcinoma or lymphoepithelial carcinoma of the head and neck.21 Although some improvement in immune reactivity was noted, more studies are needed to correlate the improvement in immune reactivity with the therapeutic effectiveness of prostaglandin antagonists, either alone or in combination with other treatments.21-23 A paradoxical effect of indomethacin on lymphokine-activated killer (LAK) cell activity in cancer patients was described: indomethacin enhanced LAK activity in patients with no distant metastases but depressed LAK activity in patients with such metastases.24 Apparently, these effects of indomethacin are not related to its PGE-inhibiting property.24 Increased synthesis of prostaglandin by macrophages from breast cancer patients was assumed to inhibit macrophage-mediated cytotoxicity.25
Leukotrienes
Leukotrienes (products of the lipoxygenase pathway of arachidonic acid) are usually considered to enhance the potential of the immune response.26,27 Thus, leukotrienes and indomethacin additively enhance the anti-tumour cytostatic function of macrophages.4,28 Leukotriene B4 (LTB4) was reported to augment human monocyte cytotoxic activity and to enhance monocyte production of hydrogen peroxide, IL-1 and TNF.26 5-Lipoxygenase activation was also described to facilitate an IL-1 transduction signal.29 Augmentation by leukotrienes of IL-1 production by human monocytes was also described in other reports.30,31 Inhibition of lipoxygenase specifically blocked indomethacin stimulation of anti-tumour macrophage cytostasis32 and reversed macrophage cytostasis induced by the calcium ionophore A23187 towards P815 tumour cells in vitro.31 Leukotriene C4 was reported to be an essential 5-lipoxygenase intermediate in A23187-induced macrophage cytostatic activity against P815 tumour cells.33 Products of the lipoxygenase pathway are also involved in human natural killer cell cytotoxicity.34
Cytokines
The immunocompetent cells reported to produce and secrete cytokines involved in the reaction of the organism to tumour cells are macrophages and T and B cells.
Macrophage-derived cytokines
The biologically active macrophage products described as being involved in the interaction with tumour cells are tumour necrosis factor-α (TNF-α), interleukin-1 (IL-1) and granulocyte–macrophage colony-stimulating factor (GM-CSF).
TNF-α
The first description of TNF-α as a macrophage product was made by Carswell et al.35 Since then, several reviews have appeared concerning the characterization and properties of this cytokine.3,36-38 The mechanism and activity spectrum of TNF-α have been the topic of several investigations. It was reported that exposure of human cervical carcinoma cells to Concanavalin A (ConA) increased the total number of binding sites for rTNF-α but blocked the transduction of the signal for the cytotoxic response.39 MethA sarcoma cells were found to be sensitive in vitro to recombinant human TNF-α and expressed low numbers of TNF-α receptors.40 TNF-α (unlike IFN-γ or IL-1α) induced regression of subcutaneous MethA implants.40 It was assumed that the primary lesion induced by TNF-α is vascular, and the mechanism(s) involved in the generation of specific cell-mediated anti-tumour immunity induced by TNF-α treatment were not clear.41 It was claimed that there are multiple pathways leading to resistance to TNF-α-induced tumour cell cytotoxicity, among them production of transforming growth factors by tumour cells and amplified expression of certain oncogenes.41 Apparently, viable activated monocytes produce other lytic factors in addition to TNF-α and IL-1, because TNF-α and IL-1 associated with the plasma membranes of activated human monocytes lyse only monokine-sensitive tumour cells, whereas viable activated monocytes lyse both monokine-sensitive and monokine-resistant tumour cells.42 The killing of tumour cells by TNF-α was assumed to involve internalization of this ligand into the target cells.43 The anti-tumour effect of TNF-α and of interferon-γ against MmB16 murine melanoma was potentiated by macrophage colony-stimulating factor.44 The anti-tumour activity of recombinant human TNF-SAM1 was enhanced by connecting the TNF compound to thymosin β4.45 TNF-α acted synergistically with IL-1 in inhibiting the growth of A375 cells.46 An experimental study in nude mice showed that intratumoral injection of an adenoviral vector containing the radiation-inducible Egr-1 promoter linked to a cDNA encoding TNF-α (Ad.Egr-TNF) enhanced the tumoricidal action of ionizing radiation in a human epidermoid carcinoma xenograft.47 By another experimental approach it was shown that murine tumour cells transduced with the gene for TNF-α regressed, unlike non-transduced tumour cells, after an initial phase of tumour growth.48 The in vitro and in vivo results from experimental tumour models indicating the anti-tumour activity of TNF-α prompted clinical trials to determine its therapeutic effectiveness in human neoplasia. Intravenous injection of TNF-α in 18 cancer patients led to some clinical improvement in three lymphoma patients.49 Recombinant TNF-α was given in combination with TNF-γ in phase I trials in 36 patients with solid tumours. Side-effects such as fatigue, fever and chills occurred; in one patient with melanoma there was a mixed response and in one patient with mesothelioma there was transient clearance of ascites from malignant cells.50 Disappointing results were reported in a phase II study of recombinant human TNF-α in 127 cancer patients; the authors concluded that ''rhuTNF-α does not appear to have significant anti-tumour activity''.51 A similar conclusion was reached in a phase II trial of rTNF-α in 22 patients with adenocarcinoma of the pancreas: ''No objective responses were observed''.52
A phase II study of recombinant TNF-α was also performed in a group of 26 renal cell carcinoma patients. The conclusion was: ''rTNF given as described, has only modest anti-tumour activity in renal carcinoma and produces considerable toxicity. We plan no further studies of rTNF in this disease''.53 In a phase I study including 16 evaluable patients with various types of metastatic cancer, there was evidence of an anti-tumour effect in two patients.54 The property of TNF-α as an immunomodulator was also reported:55 pretreatment of monocytes with IFN-α, IFN-γ, IL-1 or TNF-α resulted in enhanced human monocyte cytotoxicity.
Interleukin-1
The production, characterization and properties of IL-1 have been summarized in several reviews.56,57 It was stated that IL-1 is a key mediator of the host response to microbial invasion and that it acts as a true hormone produced during infection and inflammation.57 Human recombinant IL-1 (hrIL-1) induced proliferative responses of T cells in the presence of suboptimal concentrations of mitogen and doubled the response to higher concentrations.57 Human recombinant IL-1 induced the release of IL-2 by T cells and acted as a potent inflammatory agent by inducing dermal fibroblast PGE2 production in vitro and fever in rabbits and mice.57 The overall conclusion from these data was that IL-1 possesses both immunological and inflammatory properties.57 Human interleukin-1 acted as a cytocidal factor for several tumour cell lines58 and promoted human monocyte-mediated tumour cytotoxicity.59 Human monocytes stimulated with pneumococcal cell surface components produced IL-1 but not TNF, showing that production of the two cytokines is independent.60 As mentioned already,46 IL-1 acted synergistically with TNF-α against tumour cells. IL-1α enhanced carboplatin anti-tumour activity against human ovarian cells in vitro and in vivo.61 In an in vitro study with human melanoma cell lines it was shown that tumour cells which secrete IL-1 exhibited increased adhesion to endothelial cells.62
Granulocyte–macrophage colony-stimulating factor (GM-CSF)
The effect of intravenous and intraperitoneal administration of GM-CSF was examined in a phase I trial of 13 cancer patients refractory to standard chemotherapy.63 Administration of GM-CSF was well tolerated, but no data were provided on clinical improvement.63 In another study, it was reported that administration of GM-CSF in 24 patients with solid tumours enhanced monocyte cytotoxicity against a human colon carcinoma line, but no data were given on the effect on the clinical course of the disease.64 The partial and somewhat disappointing results obtained in clinical trials with TNF-α may be due to several factors: · TNF-α can be administered only in small amounts because higher amounts are toxic and induce severe side effects.
· TNF-α is a relatively small molecule and, as such, is rapidly cleared after injection.
· It is possible that some human cells are resistant to TNF-α or develop mechanism(s) of defence against its anti-tumour activity.
· Anti-tumour activity in vivo requires more than one anti-tumour factor.
· TNF-α in vivo does not reach a sufficient concentration at its point of contact with tumour cells.
In view of the results obtained in clinical trials with TNF-α, experiments were undertaken to use activated human macrophages or activated human peripheral blood cells for therapy. It was assumed that macrophages can act indiscriminately against immunogenic or non-immunogenic tumours, and that they might release monokines continuously in vivo (in addition to TNF-α and/or IL-1) which would be active against the tumour cells. It should be mentioned in this context that activated macrophages might release anti-tumour cytostatic products unrelated to IL-1, TNF-α and IFN-α/β.65 Results of in vitro experiments showed that peritoneal human macrophages obtained from renal patients on continuous ambulatory peritoneal dialysis (CAPD) can be activated in vitro by LPS to express anti-tumour activity, and as such they acted in vivo against a human tumour implanted subcutaneously in nude mice.66 Similarly, human peripheral blood cells from cancer patients, activated in vitro by IFN-γ and LPS, reacted against a human tumour growing in nude mice.67 These results prompted clinical trials with autologous human peripheral blood monocytes activated in vitro and reinjected into the cancer patients.68,69 The therapeutic effectiveness of this procedure was limited,68,69 and more clinical trials are probably required in order to improve the conditions for therapy with activated macrophages.
Interleukin-2
Interleukin-2 (IL-2) is one of the main products of T cells and has been extensively studied in the context of its anti-tumour activity. IL-2 was reported to increase human natural killer (NK) cell activity in vitro, an activity partially reduced by monocytes owing to PGE2 production.70 Murine lymphocytes cultured in the presence of IL-2 lysed syngeneic murine tumour cells; the lytic activity was attributed to the occurrence of lymphokine-activated killer (LAK) cells, Thy-1+ Lyt-1-2+.71 The described effects of IL-2 in enhancing anti-tumour activity prompted a series of clinical trials devised to determine its therapeutic effects in cancer patients. It was found that of 106 patients with metastatic cancer receiving LAK cells plus IL-2, eight had complete responses, 15 had partial responses and ten had minor responses.72 The same group of researchers reported that autologous tumour-infiltrating lymphocytes (TIL), cultured in the presence of IL-2, lysed melanoma tumour cells.73 Melanoma-specific cytolytic T lymphocytes derived from TIL grown in the presence of IL-2, and injected together with IL-2, induced tumour rejection in vivo, possibly by reacting against the gp100 epitope.74 In a phase I trial, a total of 31 evaluable patients with metastatic cancer of the breast, gastric cancer, colorectal cancer, melanoma, non-small cell lung cancer, osteosarcoma or renal cancer received a combined treatment of IL-2 followed by TNF-α and indomethacin:75 two partial responses were seen (in breast and renal cancer).75 In another clinical trial, 16 patients with advanced renal cell cancer (stage IV) received a combined treatment of cyclophosphamide, IFN-α and IL-2.76 Two patients had a partial response, two had a minor response and three achieved stable disease.76 The effect of IL-2 with or without LAK cells was assayed in another group of 71 patients with advanced renal cell carcinoma:77 a low level of anti-tumour response was detected, and the addition of LAK cells did not improve the response.77 IL-2 administration was found to prolong the survival of some metastatic renal cell carcinoma patients with no or moderate HLA-II expression and/or no or moderate macrophage presence in the primary tumour, but was not effective in patients with both high HLA-II expression and high macrophage presence.78 Some potential uses of IL-2 treatment have been described. IL-2 and IL-7 augmented the cytolytic activity and the anti-tumour killing spectrum of anti-CD3-induced activated killer cells, and such cells were suggested for immunotherapy of non-immunogenic tumours.79 The cytotoxicity of induced LAK cells against human leukaemia was augmented in the presence of either IFN-α, IFN-γ or TNF-α in culture, and as such they might be more effective in treating human leukaemia.80 Adoptive therapy of established pulmonary metastases with LAK cells and recombinant IL-2 has been reported.81 Adoptive therapy with highly enriched NK cells was assumed to have a potential use in leukaemia.82 It should also be mentioned that LAK cells are apparently not a unique cell type but a function of various cell types.83 Finally, one problem of IL-2 therapy is the toxicity of the compound; in this context, it was suggested that the oxygen free-radical scavenger dimethylthiourea ameliorates the pulmonary permeability and vascular leak syndrome associated with multiple-dose IL-2 therapy without inhibiting IL-2-induced anti-tumour cytotoxicity.84
On the other hand, the tumour-associated antigen 90K and the soluble IL-2 receptor were associated with poor prognosis in human ovarian cancer.85 Another way of promoting the anti-tumour activity of IL-2 is gene transfer into tumour cells.86-89 The use of a human renal carcinoma line transfected with the IL-2 and/or IFN-α gene has been suggested for the preparation of live cancer vaccines.90 Combined treatment with granulocyte colony-stimulating factor and IL-2 increased the survival time of nude mice bearing human ovarian cancer cells.91
Interleukin-4
Interleukin-4 was first described as B-cell stimulatory factor. IL-4 was reported to inhibit the growth of syngeneic mammary adenocarcinoma, plasmacytoma92 and renal carcinoma93 in cytokine gene-transfer experiments. Repeated injections of small amounts of IL-4 around the tumour-draining nodes of mice resulted in growth inhibition of poorly immunogenic and non-immunogenic tumours (cited in reference 94). Transfection of IL-4 into tumour cells induced release of this cytokine, and it was effective in vivo against a wide range of tumour cells implanted in nude mice.92 Similarly, treatment of established murine renal cancer with tumour cells engineered to secrete IL-4 induced specific T-cell-dependent systemic immunity against the non-transfected tumour.93 To my knowledge, IL-4 therapy has not yet been tested in human neoplasia.
Interleukin-6 (IFN-β2)
Interleukin-6 is induced in T cells by antigen stimulation. Purified human recombinant IL-6 (rIL-6) mediated a substantial reduction in the number of tumours.95 Recombinant human IL-6 produced in Escherichia coli inhibited the growth of human breast carcinoma and leukaemia/lymphoma cell lines in vitro.96 Anti-IL-6 receptor antibody prevented muscle atrophy in Colon-26 adenocarcinoma-bearing mice, suggesting that this antibody could be a potential agent against muscle atrophy in cancer cachexia.97 IL-6 gene transfer into murine syngeneic tumours inhibited the growth of lung carcinoma98 and of sarcoma.99 The IL-6 gene transfected into Lewis lung carcinoma tumour cells suppressed the malignant phenotype and was effective against parental metastatic cells.98 Mice rejecting murine fibrosarcoma cells, transduced with retroviral vectors containing the murine IL-6 gene and secreting IL-6, subsequently exhibited resistance to challenge with wild-type tumour cells.98 Acquisition of the ability to synthesize endogenous IL-6 markedly accelerated the growth of weakly tumorigenic rat urothelial cells, but did not induce a tumorigenic phenotype in non-tumorigenic cells.100
Interleukin-7 (B/T maturation factor)
Murine plasmacytoma cells transfected with the IL-7 gene produced IL-7 in vivo and were completely rejected in syngeneic mice by a T-cell-dependent process.101
Interleukin-12
IL-12 has received considerable attention in recent years as a strong anti-tumour agent. IL-12 was reported to act in vivo against B16F10 tumour-bearing mice, and its anti-tumour effect was inhibited by anti-IFN-γ antibody;102 IL-12-transfected fibroblasts admixed with the murine melanoma BL-6 showed that local IL-12 expression suppressed tumour growth and promoted the development of specific anti-tumour immunity;103 IL-12-engineered dendritic cells were effective in vivo against murine tumour cells.104 In another report, it was shown that anti-IFN-γ antibody blocked IL-12-mediated tumour regression in mice.105 In view of the results obtained with IL-12 therapy against murine tumours, clinical trials were initiated to determine the effect of IL-12 in human cancer. Phase I clinical trials of IL-12 gene therapy were done by direct injection of tumours with genetically engineered autologous fibroblasts.106,107 Out of 13 cancer patients (six breast, five melanoma, and two head and neck), significant reduction in tumour size was observed in three patients with melanoma and one with head and neck cancer.108
Interferon-α2b
A clinical trial was done in renal cancer and melanoma patients by treatment with IFN-α2b. Of 12 melanoma patients, four patients showed a partial response whereas eight patients progressed. 109 In the case of 35 patients with advanced renal cancer, an increase in immune response potential was observed. 109 The conclusion was that more studies are required to determine the therapeutic effectiveness of IFN-α2b. 109 A human renal carcinoma line transfected with the IL-2 and/or the IFN-α gene was suggested for use in preparation of live cancer vaccines. 90
Interferon-γ (IFN-γ)
IFN-γ was first described as an antiviral agent and was among the first cytokines assayed for therapeutic effectiveness in human neoplasia. In one of the first clinical trials, done in Hodgkin's disease patients, it was reported that treatment with IFN-γ led to an extension of the disease-free survival time. 110 Complete or partial remission in multiple myeloma by IFN-γ treatment was also reported. 111 Remissions induced by IFN-γ were also reported in patients with lymphocytic lymphoma, 112 in cases of metastatic breast cancer, 113 non-Hodgkin's lymphoma 112 and multiple myeloma. 113 However, in another study in a group of non-small cell lung cancer patients, IFN-γ treatment did not induce tumour regression. 114 A phase I trial on the effect of IFN-γ treatment was conducted in six patients with lung cancer; 115 systemic side effects such as transient fever, nausea, headaches and flu-like symptoms were noted, 115 but no data on therapeutic effect were given. 115 In a recent in vitro study it was reported that human carcinoma cell lines cultured with IFN-γ expressed more CD80 and CD86 costimulatory molecules and that this increase in expression was inhibited by IL-10. 116 Transfer of the IFN-γ gene into murine neuroblastoma, 117 fibrosarcoma, 118 adenocarcinoma, 119 colon carcinoma 119 and lung carcinoma 120 cell lines induced tumour inhibition in syngeneic mice. Human peripheral blood monocytes collected from cancer patients were activated in vitro to express anti-tumour activity in the presence of IFN-γ and LPS. 67,68

Anti-tumour effects vs other functions of cytokines

Some facts which should be considered in the context of the anti-tumour activity of cytokines are discussed below.
Relationship with inflammatory functions
Macrophage-derived cytokines such as TNF-α and IL-1 are also major inflammatory mediators. 5,27,28,37,56,57 Human peritoneal macrophages from CAPD patients collected during infectious peritonitis are 'primed' to produce and secrete more TNF-α and IL-1 when cultured in the presence of LPS. 121,122 These data indicate that inflammation and cancer are interrelated events.
Interactions in cytokine production
Production and release of various eicosanoids and cytokines are closely interrelated. 37 For example: production of IL-1, TNF-α and IL-6 by human mononuclear cells was induced by stimulatory agents such as LPS; 36,56 LPS-induced TNF-α production is inhibited by PGE2; 123 endotoxin, TNF-α and IL-1 induce IL-6 production in vivo; 124 production of IL-1 and TNF-α was induced in human blood mononuclear cells by LPS, whereas IL-6 suppressed the induction of IL-1β and TNF-α by LPS or PHA. 125
Production of eicosanoids and cytokines by tumour cells
Production of eicosanoids and cytokines by tumour cells may have an influence on the effect of these products on tumour development: tumour cells might produce PGE2; 126 the response of murine tumours to indomethacin therapy was directly related to their ability to produce prostaglandin; 127 tumours from cachectic mice produced both TNF-α and IL-1α in vivo; 128 a human myeloma cell line produced both TNF-α and IL-6; 129 leukaemic cells from patients with acute myeloid leukaemia produced both IL-6 and IL-1. 130 These are a few examples; the relationship between the ability of tumour cells to produce eicosanoids or cytokines and the development of cancer is not yet clear.
Production of eicosanoids and cytokines in the tumour-bearing host
During tumour growth in rats, cyclooxygenase or thromboxane synthase was inhibited, whereas the C5- and C12-lipoxygenases of the alveolar macrophages were activated. 131 Macrophages derived from tumour-bearing animals suppressed activation of T cells, of NK cells, of LAK cells and of generation of tumoricidal activity in normal syngeneic splenic macrophages in cultures stimulated by LPS. 132 The secretion of IL-1 and TNF-α, but not of IL-6, was impaired in alveolar macrophages collected from tumour-bearing mice. 133 A decrease in production of IL-1 was also reported in peritoneal macrophages collected from sarcoma-bearing mice. 134 Production of IL-1 and TNF-α by tumour-associated mononuclear monocytes from cancer patients was examined; production of IL-1 was suppressed whereas production of TNF-α was not affected. 135 LPS-induced TNF-α production was impaired in macrophages from breast cancer patients, 136 but increased in patients with malignant brain tumours. 137 An example of a correlation between production of TNF-α and PGE2 by peripheral blood monocytes was seen in patients with bladder cancer: these patients had either higher TNF-α production or higher PGE2 production. 138
Cell-free mediators as cancer therapeutic agents
In vitro activation of cells for immunotherapy

Activation of autologous human peripheral blood monocytes by culture in the presence of LPS and IFN-γ was assayed with the aim of inducing anti-tumour cytotoxic activity in the treated monocytes and to use such activated cells for immunotherapy. 68,69 Clinical trials reported limited success. 68,69 Induction of LAK cells, and later of activated tumour infiltrating lymphocytes (TIL), by culturing in the presence of IL-2 was also suggested in a series of reports. 71-74 Clinical trials were carried out in groups of cancer patients; the results obtained indicated some therapeutic benefit in melanoma and renal carcinoma, with approximately 10% clinical improvement in patients treated with both IL-2 and LAK cells. 72
Direct administration of cytokines in cancer patients
As previously mentioned, the results of treatment with TNF-α were disappointing; injection of IL-2 with or without LAK cells was problematic due to the toxicity of the agent; the effect of IL-2 in patients with squamous cell carcinoma of the head and neck was limited to temporary regression of the tumours. 139 Another approach suggested was to counteract the activity of PGE2 by using a PGE inhibitor: while some improvement in immunological functions was observed, no data are yet available on the therapeutic effectiveness of this treatment. 21
Transfection of tumour cells with cytokine genes
This approach has been extensively investigated during recent years, firstly in experimental tumour models and later in clinical trials. The transfected tumours released the appropriate cytokine in vitro and in some cases induced T-cell mediated specific anti-tumour immunity. Partial success was observed in terms of tumour regression and clinical improvement. 140-142 Productive transfer of the IL-2 gene was shown for melanoma, renal cell carcinoma, neuroblastoma and acute leukaemia cell lines. 86,108,139-141 Several reviews have been published on the role of cytokines in cancer therapy: in a review published in 1989 it was concluded "The clinical results obtained with cytokines (IFN-α, IL-2, TNF-α, GM-CSF, G-CSF) expected to show direct tumoristatic or tumoricidal effects have been disappointing". 143 A more optimistic view was expressed in another review: "local presence of cytokines, either injected repeatedly at the tumour site or released by cytokine-engineered tumour cells, arouses immunogenicity in apparently nonimmunogenic spontaneous tumours" and as such they might indicate "potential use of cytokines as a component of new tumour vaccines". 144 In another review, it was concluded: "Clinical applications [of cytokines] are progressing, but many trials must follow to assess precisely the multitude of potential uses of these molecules". 94 Finally, in a recent review it was concluded: "Although several years will probably be required to establish the true impact of the gene-transfer modalities . . . technological advances have opened prospects for the management of cancer patients". 142
Concluding Remarks
The 'golden' era of the role of cytokines in the fight against cancer started with the description by Carswell in 1975 of tumour necrosis factor. 35 Since then, various aspects of the interaction between eicosanoids, macrophage and lymphocyte cytokines and the tumour state have been described and investigated. The various topics of investigation included:
· therapeutic effectiveness of tumour cells transfected with cytokine genes. It is assumed that transfected tumour cells are more immunogenic and as such they induce a T cell mediated response against the wild-type non-transfected tumour. They might also secrete cytokines in vivo which react against the tumour cells.
Marked progress has been made during recent years on the interaction between macrophage and lymphocyte products and tumour cells, and on their possible role in immunotherapy against cancer. However, many more clinical trials are required before these agents can be used in routine therapy of human neoplasia. | 2014-10-01T00:00:00.000Z | 1997-06-01T00:00:00.000 | {
"year": 1997,
"sha1": "48c0daf2ff49729e1f333f9aa6a6a9e9637a343a",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/mi/1997/317148.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "71a67f877fa428097da0441e0f6ed8ca5194a320",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118966288 | pes2o/s2orc | v3-fos-license | Hadronic Invariant Mass Spectrum in B ->X_u l nu Decay with Lepton Energy Cut
We discuss the implications of a charged-lepton energy cut for the hadronic invariant mass spectrum in charmless semileptonic B decays. A charged-lepton energy cut is inevitable in order to remove secondary leptonic events such as b -> c, tau -> l, and to identify the charged leptons at detectors experimentally. We consider three possible lepton energy cuts, E_l^{cut} = 0.6, 1.5, 2.3 GeV, and find that with the most probable cuts E_l^{cut} = 1.5 GeV and M_X^{max} = 1.5 (1.86) GeV, 45-60% (58-67%) of decay events survive. Therefore, B -> X_u l nu decay events can be efficiently distinguished from B -> X_c l nu decay events. We also discuss the possible model dependence of the results.
I. INTRODUCTION
The determination of the CKM parameter |V_ub| is important for constructing the so-called unitarity triangle. It is hard to determine |V_ub| from semileptonic B-meson decays because the Cabibbo-dominated decay mode B → X_c lν obscures the B → X_u lν mode. The traditional method for extracting |V_ub| from experimental data involves a study of the charged-lepton energy spectrum in inclusive semileptonic B decays, B → X_{u,c} lν [1]. The b → u events are selected above the charm threshold, i.e. for lepton energies E_l above (M_B² − M_D²)/(2M_B) ≈ 2.3 GeV. However, this cut on E_l is not very efficient (less than 10% of b → u events survive), and the dependence of the lepton energy spectrum on perturbative and non-perturbative QCD corrections is strongest in this end-point region [2][3][4][5][6].
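As a quick check of this endpoint (our own arithmetic, using M_B = 5.279 GeV and M_D = 1.86 GeV):

```latex
\frac{M_B^2 - M_D^2}{2 M_B}
  = \frac{5.279^2 - 1.86^2}{2 \times 5.279}
  = \frac{27.87 - 3.46}{10.56}
  \approx 2.31\ \mathrm{GeV}.
```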
As an alternative, the determination of |V_ub| may come from the measurement of the hadronic invariant mass spectrum [7] in the region M_X < M_D. For B → X_c lν decays, one necessarily has M_X > M_D = 1.86 GeV. Therefore, if we impose the condition M_X < M_D, the resulting events come only from B → X_u lν decay, and most of the B → X_u lν decays are expected to lie in this region. There is an experimental problem to be expected, though: the leakage of misidentified charmed particles below the kinematic b → c threshold. To avoid this leakage, one may concentrate on hadronic invariant masses below a certain value M_X^max (< M_D), say M_X^max = 1.5 GeV. Detailed studies of this method were performed in Refs. [8][9][10]. The integrated fraction of events was introduced, and its sensitivity to the three basic parameters, µ_π², m_b and α_s, was studied. However, we cannot apply the above results to the experimental data directly. From the practical point of view, leptons with low energy, i.e. less than 0.6 GeV, cannot be experimentally identified within the detectors. Also, a larger lepton energy cut might be needed to select 'prompt leptonic events' (b → l) from 'secondary leptonic events' (b → c → l, τ → l).
An experimental method such as the technique of neutrino reconstruction [11], which would be used to measure the hadronic invariant mass directly and inclusively, may require a lower cut on the charged-lepton energy. Therefore, the hadronic invariant mass spectrum would be affected by the various lepton energy cuts. In this Letter, we study the effects of the lepton energy cut on the hadronic invariant mass spectrum in inclusive charmless semileptonic B decays and discuss their implications.
II. DIFFERENTIAL DECAY RATE
At the parton level, the most general hadronic tensor for B → X_u lν can be decomposed into scalar structure functions W_i built from the b-quark velocity v and the total parton momentum p. The total momentum carried by the leptons is q = m_b v − p. At the tree level, W_1 = 2δ(p²) and all other W_{i≠1} = 0.
We follow the O(α_s) corrections to the hadronic tensor from the paper of De Fazio and Neubert [10]. Introducing the scaling variable x ≡ 2E_l/m_b, where E_l is the charged-lepton energy defined in the B-meson rest frame, together with suitably rescaled parton variables ŝ_H and ǫ, the triple differential decay rate d³Γ/(dx dŝ_H dǫ) takes a closed form in which x̄ ≡ 1 − x appears. In terms of the parton variables, the total invariant mass of the hadronic final state is given by s_H = p² + 2Λ̄ v·p + Λ̄², where Λ̄ ≡ M_B − m_b. Using relation (6) and the scaling variables, we find the double differential decay rate d²Γ/(dx dŝ_H), with the phase space for the relevant variables fixed by the kinematics. If we integrate this double differential decay rate over the variable x, we get the single differential decay rate for ŝ_H, which reproduces numerically the result of Ref. [10].
In order to obtain the physical decay distributions, we should also consider the non-perturbative corrections. The physical decay distributions are obtained from a convolution of the above parton-level spectra with a non-perturbative shape function F(k_+), which governs the light-cone momentum distribution of the heavy quark inside the B-meson [5,6]. The convolution of the parton spectra with this function is such that, for the decay distributions, the b-quark mass m_b is replaced by the momentum-dependent mass m_b + k_+, and similarly the parameter Λ̄ is replaced by q_+ ≡ Λ̄ − k_+. Here k_+ can take values between −m_b and Λ̄, with a distribution centered around k_+ = 0 and with a characteristic width of O(Λ̄). The scaling variables x, ŝ_H and ǫ are then replaced by new variables expressed in terms of q_+. The physical double differential decay rate for the charged-lepton energy and the hadronic invariant mass is given by the corresponding convolution over k_+, with the maximal value of q_+ fixed by the kinematics. Finally, the physical distribution for the hadronic invariant mass can be obtained in two different ways, by performing the remaining integrations in either order, and both should give the same results. We note that, after the implementation of such Fermi motion, the kinematic variables take values in the entire phase space determined by the hadron kinematics. Eqs. (12)-(14) are our starting point for the numerical calculations.
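To make the convolution structure concrete, here is a minimal numerical sketch (not the authors' code): the one-parameter ansatz used for F(k_+) is a common choice in this literature and is an assumption here, as is the placeholder parton spectrum `dG_parton`; the accompanying shift of the kinematic variables is omitted.

```python
import numpy as np

M_B, m_b = 5.279, 4.8           # GeV; Lambda-bar = M_B - m_b
Lam = M_B - m_b

def F(kplus, a=1.3):
    # assumed ansatz: F ~ (1 - x)^a * exp((1 + a) x), x = k+/Lambda-bar, x <= 1
    x = kplus / Lam
    return np.where(x <= 1.0,
                    np.clip(1.0 - x, 0.0, None)**a * np.exp((1 + a) * x),
                    0.0)

kp = np.linspace(-1.5, Lam, 4001)   # support truncated at -1.5 GeV for the sketch
w = F(kp)
w /= np.trapz(w, kp)                # normalize the zeroth moment to 1

def dG_parton(sH, mb):
    # toy stand-in for the parton-level spectrum in s_H (arbitrary units)
    return np.exp(-np.clip(sH, 0.0, None) / (0.3 * mb)) * (sH >= 0.0)

def dG_physical(MX):
    # Fermi-motion smearing: weight by F(k+) and substitute m_b -> m_b + k+
    return np.trapz(w * dG_parton(MX**2, m_b + kp), kp)

spectrum = np.array([dG_physical(m) for m in np.linspace(0.0, 3.0, 61)])
```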
III. NUMERICAL ANALYSES AND CONCLUSIONS
To perform the numerical calculation we should choose a specific form for the shape function F(k_+ ≡ Λ̄ − q_+). It is subject to constraints on its moments, which are given by the expectation values of local heavy quark operators. In practice we know only the first few moments: the zeroth moment is 1 (normalization), the first moment vanishes, and the second moment equals µ_π²/3, where µ_π² is the average momentum squared of the b quark inside the B-meson. The parameters Λ̄ and µ_π² were obtained by HQET and QCD sum rules in Refs. [12,13]: Λ̄ = 0.4-0.6 GeV, with µ_π² = 0.6 ± 0.1 GeV² [12] or µ_π² = 0.10 ± 0.05 GeV² [13].
One then chooses some reasonable ansatz for F(k_+); its parameters are adjusted so as to reproduce its known moments. Several functional forms for this function have been suggested in the literature [6,[14][15][16]. We adopt the simple form of Ref. [16], whose parameters are fixed by the moment constraints above. We fix the QCD coupling α_s = 0.22. Now the calculation of the hadronic invariant mass spectrum is straightforward. The 3-dimensional plot of the double differential decay rate is shown in Fig. 1, from which we can see that there is no decay event at M_X ≥ 1.86 GeV once the lepton energy exceeds 2.3 GeV. This is consistent with the kinematics: leptons with energy greater than 2.3 GeV are purely from b → ulν decay. We can also see that this lepton energy cut is very inefficient, because only a small part of the decays survive this cut, as mentioned earlier. Fig. 3 shows the integrated decay rate up to M_X^max. The numerical values are summarized in Tables I, II and III. Now we discuss the dependence of the results on the various input parameters and on the choice of the universal distribution function. The dependence on µ_π² is found to be less significant than the dependence on Λ̄ (or, equivalently, on m_b). The decay rates with the parameters m_b = 4.8 GeV and µ_π² = 0.6 GeV² are shown in Table IV. By comparing Table IV with Tables I, II and III, we can see that the main uncertainty in the decay rates comes from the uncertainty in Λ̄ (or m_b). The dependence of the results on the α_s variation could be quite large, since the perturbative correction to the total decay rate is linear in α_s, and the size of the α_s correction is almost 20% of the leading approximation. Next, in order to estimate the possible dependence of the results on the choice of the universal distribution function, we adopt the ACCMM [18] model-induced distribution function [6,15], which depends on the two parameters p_F and m_sp. We choose three parameter sets and compare the resulting rates with those in Tables I and III. The differences in the decay rate for the two universal distribution functions are within 2% only.
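The survival fractions collected in the Tables are ratios of integrals of the double differential rate over the cut region. A sketch of that bookkeeping, with a purely hypothetical stand-in density `d2Gamma` (our assumption, not the QCD result):

```python
import numpy as np

def d2Gamma(E_l, M_X):
    # hypothetical smooth toy density in (E_l, M_X), for illustration only
    return E_l**2 * np.exp(-((M_X - 1.2) / 0.6)**2) * (E_l <= 2.64)

El = np.linspace(0.0, 2.64, 265)     # GeV, charged-lepton energy grid
MX = np.linspace(0.0, 5.0, 501)      # GeV, hadronic invariant mass grid
EE, MM = np.meshgrid(El, MX, indexing="ij")
rate = d2Gamma(EE, MM)

def fraction(E_cut, MX_max):
    """Fraction of events surviving E_l > E_cut and M_X < MX_max."""
    mask = (EE > E_cut) & (MM < MX_max)
    num = np.trapz(np.trapz(rate * mask, MX, axis=1), El)
    den = np.trapz(np.trapz(rate, MX, axis=1), El)
    return num / den

for E_cut in (0.6, 1.5, 2.3):
    for MX_max in (1.5, 1.86):
        print(f"E_cut={E_cut}, MX_max={MX_max}: {fraction(E_cut, MX_max):.3f}")
```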
Finally, we note that the results would be affected by possible resonance effects around M_X^max. The real result would be a sum over all the exclusive decays, in which a few resonances dominate at some specific values of M_X, so the actual M_X distribution will show humps and bumps, while our results are smoothed inclusive results obtained using duality. However, our integrated results should be quite accurate once there is no significant resonance around the region of the M_X cut. If M_X^max is around 1 GeV, then there are a few important resonances, but if M_X^max is large enough, e.g. 1.5 GeV, then there is no significant resonance from b → u; rather, there will be decays with many pions.
In summary, we investigated the effects of a charged-lepton energy cut on the hadronic invariant mass spectrum for B → X_u lν decays and their implications. As is well known, the charged-lepton energy cut is experimentally inevitable in order to remove secondary leptonic events such as b → c → l, τ → l, and to identify the charged leptons at detectors. We found that with E_l^cut = 1.5 GeV and M_X^max = 1.5 (1.86) GeV, 45-60% (58-67%) of the decay events survive. Therefore, B → X_u lν decay events can be efficiently distinguished from B → X_c lν decay events by using the hadronic recoil mass and the charged-lepton energy together.
"year": 2000,
"sha1": "95c07385879feee2178b9f59826b329d5cff228d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0006322",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "95c07385879feee2178b9f59826b329d5cff228d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
255500153 | pes2o/s2orc | v3-fos-license | Centers for Disease Control and Prevention–Recognized Diabetes Prevention Program After Gestational Diabetes Mellitus
Gestational diabetes mellitus is associated with an increased risk of developing type 2 diabetes mellitus. To decrease or delay the development of type 2 diabetes mellitus after gestational diabetes mellitus, postpartum care should include a recommendation that the individual participate in a recognized Diabetes Prevention Program.
Introduction
The background of prediabetes mellitus affecting 96 million people and the rising prevalence of diabetes mellitus in the United States motivate us to advocate for the adoption of an evidence-based program to prevent diabetes mellitus in groups of individuals with a history of gestational diabetes mellitus (GDM). After a pregnancy complicated by GDM, during the immediate postpartum period or at the initial healthcare encounter, we suggest that the affected person participate in a Centers for Disease Control and Prevention (CDC)-recognized Diabetes Prevention Program (DPP) to decrease their risk of type 2 diabetes mellitus (T2D) and conceivably lower their risk of recurrent GDM.
Background
In January of 2021, the CDC reported that 37.3 million (11.3%) and 96 million (23%) people in the US population were living with diabetes or prediabetes mellitus, respectively. 1 In 2001, Tuomilehto 2 reported the results of the Finnish Diabetes Prevention Study, which found that T2D could be prevented if high-risk patients adopted prescribed lifestyle changes. A 10-year follow-up of the adoption of these lifestyle changes was evaluated in the National Institute of Diabetes and Digestive and Kidney Diseases-sponsored CDC DPP, a randomized, controlled clinical trial conducted at 27 US clinical centers that enrolled >3230 individuals. This trial demonstrated that individuals at high risk for T2D who achieve significant weight reduction by participating in a lifestyle modification program (through increased physical activity and dietary changes) could prevent or postpone the onset of T2D. 3
Lifestyle modification
The CDC DPP trial demonstrated that lifestyle modification reduced the chances of developing T2D by 60% when compared with those in the placebo group. The study groups included individuals at high risk for developing T2D by virtue of having had GDM. 4 Several other studies using the DPP model demonstrated that a 7% weight reduction via lifestyle modification could significantly decrease the risk of developing T2D. The DPP has been found to be an effective technique to induce behavioral changes, achieve weight reduction, and reduce cardiometabolic risk factors in general, especially for individuals with a history of GDM. 5

Gestational diabetes mellitus and type 2 diabetes mellitus risk

GDM is any form of glucose/carbohydrate intolerance with first onset or recognition during pregnancy. The US Preventive Services Task Force recommends routine antenatal glucose screening for GDM between 24 and 26 weeks of gestation. Two to 10% of pregnancies in the United States are affected by GDM, which confers a 35% to 60% risk of developing T2D during the subsequent 10 to 20 years. 6

Referral to the National Diabetes Prevention Program

The link https://dprp.cdc.gov/Registry can be used to access the national registry of >2000 CDC-recognized DPPs and their contact information. Several reports have documented numerous barriers to attending postnatal clinical appointments and program interventions. The most cited barriers to participation include inaccurate patient contact information, lack of child care, work or school obligations, and lack of access to transportation. 7,8 Participation in postpartum DPPs would likely be hampered by similar barriers. To overcome these potential barriers, we suggest that the increased number of remote, distance, or online CDC-recognized DPPs created in response to the pandemic-related change in healthcare delivery might mitigate or remove some of the obstacles and thus facilitate participation in postpartum DPPs.
Discussion
Many medical organizations, including the American Diabetes Association, have encouraged healthcare practitioners (HCPs) to refer their high-risk patients to a lifestyle-change program, such as the one offered through the National DPP. A recent study showed that HCPs who were familiar with lifestyle-change DPPs and aware of available programs were more likely to make DPP referrals. There is also evidence that patients who were referred to a lifestyle-change program by their HCP were more likely to join the program. Unfortunately, according to the CDC, 80% of patients with prediabetes mellitus have no knowledge of their diagnosis, and only 5% of patients with prediabetes mellitus or at high risk of T2D receive referrals to a program for lifestyle change. 9 Currently, after a GDM-affected pregnancy, to evaluate persistent or recurrent glucose intolerance, the post-delivery recommendation is to perform an oral glucose tolerance test (OGTT) between 4 and 12 weeks after delivery, with subsequent serial OGTTs at 1-to-3-year intervals. 10 Up to 70% of individuals with GDM will develop T2D in the absence of intervention. 6 GDM is associated with higher rates of preeclampsia, cesarean delivery, fetal macrosomia, neonatal hypoglycemia, hyperlipidemia, shoulder dystocia, birth trauma, and stillbirth. Moreover, the offspring of GDM-affected pregnancies have an increased risk for childhood and adult-onset obesity. 11,12 Decreasing the risk of T2D, and perhaps the recurrence of GDM, is a desirable and realizable DPP goal for public health.
Conclusion
To decrease the GDM-associated risk of developing T2D, routine postpartum care should include a recommendation that the affected individual participate in a CDC-recognized DPP.
"year": 2022,
"sha1": "94a3caeda9b62bc58e72ae04fabfc4d374524598",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.xagr.2022.100150",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "94a3caeda9b62bc58e72ae04fabfc4d374524598",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14116940 | pes2o/s2orc | v3-fos-license | Relaxation Properties of Small-World Networks
Recently, Watts and Strogatz introduced the so-called small-world networks in order to describe systems which combine simultaneously properties of regular and of random lattices. In this work we study diffusion processes defined on such structures by considering explicitly the probability for a random walker to be present at the origin. The results are intermediate between the corresponding ones for fractals and for Cayley trees.
Introduction
Networks of the real world often seem to combine aspects of regular and of completely random lattices. Social networks, neural networks, electrical power grids, and traffic networks [1][2][3] are all examples of patterns not described satisfactorily by conventional regular lattices, nor by completely random lattices. Social structures, for instance, do not behave as regular lattices, since (as is well known) randomly chosen people are connected in general by a small number of intermediary bilateral ties. Here, as in random graphs, the minimal (chemical) distance between any two points in the system scales logarithmically with the system size [4].
To combine these two properties, Watts and Strogatz recently introduced the idea of small-world networks [1]. This construction is a superposition of a regular lattice with a random lattice, and includes simultaneously well defined local clusters and short global connections. As we will demonstrate, these systems also display properties intermediate between those of regular and tree-like (loop-less) lattices, already under a small number of global connections, provided the system size is large enough.
Much work has already been done on the properties of small-world networks [1,2,5-14] but most of it has focused on static (geometric) properties. We shall not address these issues, but rather concentrate on a dynamical model defined on the structure. Treatments of the dynamics of small-world networks include, for instance, the study of an Ising model defined on the lattice [5], spectral properties of the small-world Laplacian [6], percolation [7], spreading of diseases [8] and neural networks [9]. In the following we will examine the properties of random walks on small-world networks, in particular the relaxation, exemplified by the probability for a random walker to be at the original site at a later time. This is a simple quantity to extract numerically, and very relevant for various physical properties: it is sensitive to the topology of the network, and is related to its vibrational modes.
2 Definition of the model and presentation of the results.
The small-world networks we consider are built as follows: we start from a regular lattice with L vertices in 1 dimension under periodic boundary conditions, each site being connected symmetrically to its 2k nearest neighbours, i.e. having coordination number z = 2k. Then we add to each of the sites, with probability p, a new bond. The other end gets attached with equal probability to any of the lattice sites; this also allows the possibility of vertices becoming connected to themselves. In this way we add, independent of k, on average pL new bonds to the underlying regular lattice. This construction follows [7] for k = 1 and is simpler than the original procedure [1], by which one rewires with probability p each of the original kL bonds randomly.
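A sketch of this construction (our illustration; the function name and the convention of counting a self-loop once are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def small_world(L, k=1, p=0.05):
    """Ring of L sites with 2k nearest-neighbour bonds, plus, with probability p
    per site, one extra bond to a uniformly random site (self-loops allowed).
    Returns z, where z[i, j] is the number of bonds between sites i and j."""
    z = np.zeros((L, L), dtype=int)
    for i in range(L):
        for d in range(1, k + 1):          # underlying regular lattice
            j = (i + d) % L
            z[i, j] += 1
            z[j, i] += 1
        if rng.random() < p:               # on average pL additional bonds
            j = int(rng.integers(L))
            z[i, j] += 1                   # a self-loop (j == i) is counted once
            if j != i:
                z[j, i] += 1
    return z
```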
A step-wise diffusion process is now defined by specifying all the transition probabilities W_{i,j} entering the master equation

P(f, n+1) = Σ_i W_{f,i} P(i, n).    (1)

Here W_{f,i} is the probability to go from site i to site f during one time step, and P(i, n), i = 1, ..., L, is the probability of being at site i after the nth step.
The process defined in Eq. (1) is the discrete variant of diffusion on an arbitrary lattice, a topic interesting in its own right. Diffusion on regular lattices is ubiquitous, and diffusion on random graphs has (among other things) also been studied in the context of glassy relaxation [15]. We are therefore inspired to investigate what happens on the small-world model, which interpolates between these two extremes. Previously a lot of interest has also been seen in the related problem of diffusion on fractals (see for example [16][17][18][19] and references therein). As we proceed to show, diffusion on Cayley trees [20][21][22] also shows features closely related to the present problem. Furthermore, the motion of charge carriers or of excitons over polymer chains, where steps between spatially close sites can connect regions far apart along the chemical backbone, also involves global shortcuts [23,24].
The transition probabilities W_{i,j} in Eq. (1) are as follows. First, W_{i,j} = 0 if there are no bonds between i and j. For i connected to j by one or more direct bonds, W_{i,j} is proportional to the number of such bonds. The same holds for the probability of remaining at the same site after one time unit, i.e. we allow "sticking". Formally,

W_{f,i} = (z_{i,f} + δ_{i,f}) / (z_i + 1).    (2)

In this equation, z_{i,j} is the number of bonds between the two sites i and j, and z_i is the total number of bonds emanating from vertex i, i.e. the coordination number of the site. Hence z_i = Σ_j z_{i,j}. Note that the z_{i,j}-values are determined both by the additional wiring and by the underlying lattice. The δ_{i,f} and the 1 in the denominator appear because we allow for the possibility of the walker remaining at site i during a time step.
This procedure renders the numerically determined P(i, n) smoother in n. We remark that the rates defined according to Eq. (2) are not symmetrical in i and j, i.e. in general W_{i,j} ≠ W_{j,i}. The algorithm we have used is the exact (cellular automaton) enumeration of random walks [18], corresponding to the implementation of Eq. (1). All the results plotted are averaged over 200 disorder configurations. We have worked mostly with the value k = 1.
This is also the value implied if we do not state otherwise.
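Continuing the sketch above, the transition matrix of Eq. (2) and the exact ("cellular automaton") iteration of the master equation (1) can be written as follows; sizes are kept modest here, and `small_world` is reused from the previous block:

```python
def return_probability(L=500, k=1, p=0.05, n_steps=1000, realisations=10):
    """Disorder-averaged P_n(0) for a walker started at site 0
    (the paper averages over 200 configurations)."""
    Pn0 = np.zeros(n_steps + 1)
    for _ in range(realisations):
        z = small_world(L, k, p)
        zi = z.sum(axis=1)                   # coordination numbers z_i
        W = (z + np.eye(L)) / (zi + 1.0)     # W[f, i] = (z_{i,f} + delta) / (z_i + 1)
        P = np.zeros(L)
        P[0] = 1.0
        Pn0[0] += P[0]
        for n in range(1, n_steps + 1):
            P = W @ P                        # one exact step of Eq. (1)
            Pn0[n] += P[0]
    return Pn0 / realisations
```

Each column of W sums to 1 (the walker may also stick), while W itself is not symmetric, matching the remark above.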
We focus on the probability P(i, n|i, 0) that a particle initially at site i is found there again after n steps. As n → ∞, the walk approaches the inhomogeneous equilibrium distribution P_eq(i) ∝ (z_i + 1).
Therefore P (i, ∞|i, 0) depends on the specific small-world realisation, and will fluctuate from realisation to realisation around its average value 1/L.
To find out how much of the behavior is due to finite size effects, we subtract from each average curve in Fig. 1 its corresponding average equilibrium value P_∞(0) ≡ 1/L, and replot P_n(0) − P_∞(0) in Fig. 2. From Fig. 2 we see that all curves collapse nicely onto what we view as representing P_n(0) on small-world networks in the limit L → ∞. The results can be understood qualitatively in the following way: for a fractal one has [25] P_n(0) ∼ n^{−d_s/2}, where d_s is the spectral dimension. Thus the initial decay in Figs. 1 to 3 follows that of a fractal with a d_s close to 1, i.e. that of a quasi 1-d system. This is reasonable given our construction: for sufficiently small p and small n, only relatively few random walkers encounter any long-range connections (shortcuts). Therefore in the beginning the behavior of P_n(0) closely reflects the character of the underlying 1-d lattice. However for larger n, the random walkers probe larger and larger portions of the graph, and thus follow more and more shortcuts. This speeds up progressively the decay of P_n(0) as more regions at larger and larger length scales are visited, and the fractal picture is lost. One would thus expect that the concept of a d_s begins to be invalid when the random walkers visit enough shortcuts, i.e. when the 1-d diffusion extends longer than the typical distance between shortcuts. This is the fundamental length scale ξ of small-world networks, besides the lattice constant, which is less important here. In our case we have ξ ∼ 1/p, measuring ξ in units of the lattice constant. For diffusion on scales smaller than ξ one furthermore has, in terms of the diffusion constant D of the regular lattice, ξ² ∼ 2Dn, so that n ∼ 1/(2Dp²).
Given that we allow random walkers to stay at a site during a time step, D = 1/3 and thus n = (2/3) p^{−2}. However, some walkers do encounter shortcuts at length scales below ∼ p^{−1}, and numerically the cross-over to a region that does not have approximate power-law character is seen to take place earlier than n ∼ p^{−2}.
[Fig. 4 caption: For longer times the decay appears to be slower than exponential. Also shown is a fit to a stretched exponential, indistinguishable from the data.]
We turn now to the analysis of this region. To be able to follow it more closely, we replot the results of Fig. 2 for L = 10000 on semi-logarithmic scales in Fig. 4. Evidently, the decay for larger n is slower than exponential.
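The fit shown in Fig. 4 suggests a stretched-exponential form. A sketch of such a comparison, reusing `return_probability` from the block above (the form A exp[−(n/τ)^β] and all parameter names are our assumptions):

```python
from scipy.optimize import curve_fit

def stretched(n, A, tau, beta):
    return A * np.exp(-(n / tau)**beta)

L = 500
P = return_probability(L=L, p=0.05)
n = np.arange(len(P))
y = P - 1.0 / L                      # subtract the average plateau P_inf(0) = 1/L
sel = (n > 50) & (y > 0)             # fit only the late-time, positive part
popt, _ = curve_fit(stretched, n[sel], y[sel], p0=(0.1, 100.0, 0.7))
print("A, tau, beta =", popt)
```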
The decay of P_n(0) is hence quicker than a power law, but slower than the one for Cayley trees, for which the decay is essentially exponential for coordination numbers greater than 2 [21,22]. We now consider the dependence of the decay on the value of p. For this we plot in Fig. 5 the decay law P_n(0) for L = 2000 and p ranging from p = 0.01 to p = 0.8. We note that the initial power-law-like region diminishes with increasing p. Furthermore, the plateau region P_n(0) ≃ 1/L is reached earlier for larger p. This is in accordance with our argument above that the long-range connections (shortcuts) interrupt the simple diffusion on the underlying lattice, such that the crossover length decreases with increasing p (cf. also Eq. (6)). As p becomes large enough, the power-law regime practically disappears. This is so because the random walker rapidly meets a shortcut. As before, the influence of finite size effects can be reduced by plotting, as in Fig. 2, P_n(0) − P_∞(0).
We have also performed simulations of the random walk on small-world networks where the underlying lattice has a k value larger than 1. In Fig. 6 we plot the results for p = 0.1 and L = 2000 in the cases k = 1, k = 2, k = 3 and k = 4. The findings reproduce the general picture: P_n(0) behaves like a power law for small n, while decaying more rapidly as n gets larger. The curves for different k are mainly shifted with respect to each other, and the network with the largest coordination number (largest k) also displays the quickest relaxation. To be noted, however, is that the case k = 1 has the largest dynamical range and thus shows the decay forms best, while also being the simplest to implement; hence k = 1 may be the ideal small-world model.
Conclusions
In this work we have studied numerically the behavior of random walks on small-world lattices. Our work has focused on the probability of being at the initial site, P_n(0), as a function of the number of steps n. This quantity is found to show a complex, very interesting pattern: initially P_n(0) displays a power-law, "quasi-fractal" regime. At larger n a quicker decay takes over, reminiscent of stretched exponentials. In this respect the P_n(0) decay is intermediate between the decays found for fractal structures and the ones found for tree-like (loop-less) structures, exemplified here by Cayley trees.
"year": 2000,
"sha1": "6ce6402d2ffb1baf687da2fb8de48912fd881ee7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0004214",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6ce6402d2ffb1baf687da2fb8de48912fd881ee7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics",
"Mathematics"
]
} |
73525588 | pes2o/s2orc | v3-fos-license | The nuclear dimension of C*-algebras associated to homeomorphisms
We show that if X is a finite dimensional locally compact Hausdorff space, then the crossed product of C_0(X) by any automorphism has finite nuclear dimension. This generalizes previous results, in which the automorphism was required to be free. As an application, we show that group C*-algebras of certain non-nilpotent groups have finite nuclear dimension.
obvious fixed point is 1_G, and in some cases, such as the lamplighter group, the set of periodic points may even be dense.
In this paper, we settle the case of crossed products arising from homeomorphisms. We show in Theorem 5.1 that if X is a locally compact metrizable space with finite covering dimension and α ∈ Aut(C_0(X)), then dim_nuc(C_0(X) ⋊_α Z) ≤ 2 dim(X)² + 6 dim(X) + 4.
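For orientation, here is the arithmetic of this bound at small dimensions (our own evaluation, not a computation from the paper); note the quadratic growth in dim(X):

```latex
% dim_nuc(C_0(X) \rtimes_\alpha \mathbb{Z}) \le 2\dim(X)^2 + 6\dim(X) + 4
\dim(X)=0:\ \dim_{\mathrm{nuc}} \le 4, \qquad
\dim(X)=1:\ \dim_{\mathrm{nuc}} \le 12, \qquad
\dim(X)=2:\ \dim_{\mathrm{nuc}} \le 24.
```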
The non-metrizable case is addressed in Corollary 5.4. As indicated above, our formula implies that the group C*-algebra C*(G ⋊_α Z) has finite nuclear dimension whenever G is abelian with finite dimensional Pontryagin dual. Notable examples of such groups include: (1) The lamplighter group (Z/2Z) ≀ Z is the semidirect product of G = ⊕_{n∈Z} Z/2Z by Z using the shift action. Notice that by [CDE13, Corollary 3.5], the group C*-algebra C*((Z/2Z) ≀ Z) is not strongly quasidiagonal, and thus it has infinite decomposition rank ([KW04, Theorem 4.4]). This shows that there exists a group C*-algebra which has finite nuclear dimension but infinite decomposition rank.
(2) When G = Z n , we obtain polycyclic groups which may not be nilpotent.
More precisely, if we pick a matrix A ∈ GL n (Z) such that A has no eigenvalues which are roots of unity and use it to define an automorphism of Z n , then the crossed product Z n ⋊ A Z by this automorphism has trivial center, and in particular it is not nilpotent.
We note that nuclear dimension of group C * -algebras was studied recently in [EM14]. It was shown there that if G is a finitely generated nilpotent group then C * (G) has finite nuclear dimension. The following question now appears natural: Question: Let G be a (virtually) polycyclic group. Does C * (G) have finite nuclear dimension? What about elementary amenable groups with finite Hirsch lengths?
In the remainder of the introduction, we sketch the idea of our proof. We note that we cannot directly use the previously known results concerning Rokhlin dimension, since actions with finite Rokhlin dimension are necessarily free. Let us first suppose we are in the other extreme: the homeomorphism h under consideration is periodic, with h n = id. In this case, although the action does not have finite Rokhlin dimension, one can show directly that the crossed product has finite nuclear dimension (in fact, finite decomposition rank): the crossed product is subhomogeneous, so we only need to find a bound on the dimensions of the spaces of irreducible representations of different dimensions and appeal to [Win04]. (In this setting, however, one has more information about the structure of the crossed product; this allows us to provide a short and more direct proof, which will also be applicable for actions of groups other than Z). A key fact here is that the bound on dim nuc (C 0 (X) ⋊ Z) does not depend on the period n of the action, but only on dim(X).
Next, let us consider the somewhat more complicated case, in which there are both periodic and non-periodic points, but there is a bound on the length of the orbits: each point is either periodic with period at most n, or acted on freely by h.
In such a case, if we denote by X_periodic the set of all periodic points and by X_free the set of all points on which h acts freely, then X_periodic is a closed invariant set and we have an equivariant extension 0 → C_0(X_free) → C_0(X) → C_0(X_periodic) → 0.
An extension of Szabó's arguments ([Sza15b]) to the non-compact setting shows that the restriction of α to the ideal C_0(X_free) has finite Rokhlin dimension. One can now use the fact that finite nuclear dimension is preserved by extensions.
Of course, in general there may be no bound on the length of the orbits, and the set of periodic points need not be closed. However, if we fix some N, we can consider the set of points which are periodic with orbit length ≤ N, which we denote by X_{≤N}, and we let X_{>N} = X ∖ X_{≤N}. Then X_{≤N} is a closed subset, and again we have an equivariant extension 0 → C_0(X_{>N}) → C_0(X) → C_0(X_{≤N}) → 0.
As discussed above, we have a bound on dim_nuc(C_0(X_{≤N}) ⋊ Z) which does not depend on N. As for X_{>N}, although in general the restriction of α to C_0(X_{>N}) does not have finite Rokhlin dimension, we can still use a refined version of the marker property, detailed in the appendix, to show that α satisfies some fragment of the definition of finite Rokhlin dimension. Recall that to have finite Rokhlin dimension, the automorphism α should admit arbitrarily long Rokhlin towers, each consisting of positive contractions permuted by α to within any given tolerance. Here, we can find Rokhlin towers, provided they are not too long and the error is not too small (compared to N). This will be made precise in Lemma 4.2. Finally, in order to construct a decomposable approximation for a given finite subset of C_0(X) ⋊_α Z to within a specified tolerance, we can choose N large enough so as to be able to construct sufficiently long Rokhlin towers with a sufficiently small error, and then apply a localized version of the argument which shows that finite nuclear dimension passes to extensions.
The paper is organized as follows. We begin by fixing notation and listing a few general lemmas. In Section 2, we find an upper bound on the nuclear dimension of the crossed product of C 0 (X) by a periodic action, using Winter's bound for subhomogeneous algebras. In Section 3, we obtain an upper bound via a different approach. Although the second upper bound we obtain is higher than the one we find in Section 2, the method works for groups other than Z, which we hope will be useful for future work. In Section 4 we show that there exist Rokhlin towers (of certain length and tolerance) for homeomorphisms whose orbits are all sufficiently long. In the last section, we combine those results to derive our main theorem, Theorem 5.1. The appendix, by Szabó, contains the refinement of the marker property needed in Section 4.
The authors are grateful to Gábor Szabó for providing the said refinement of the marker property method and presenting it in the appendix. The second author would also like to thank Caleb Eckhardt, Stuart White and Joachim Zacharias for pointing out certain applications of our results.
Preliminaries
Throughout the paper, we use the following conventions. To simplify formulas, we use the notations dim^{+1}_nuc(A) = dim_nuc(A) + 1, dim^{+1}(X) = dim(X) + 1 and dr^{+1}(A) = dr(A) + 1. If A is a C*-algebra, we denote by A_+ the positive part, and by A_{+,≤1} the set of positive elements of norm at most 1. If G is a locally compact Hausdorff group and A is a C*-algebra, we denote by α : G ↷ A an action, that is, a continuous homomorphism α : G → Aut(A), where Aut(A) is topologized by pointwise convergence. If G = Z, we shall often denote by α both the action and the automorphism α_1 which generates it, when it causes no confusion. We are interested here in the case in which A is commutative, that is, A ≅ C_0(X) for some locally compact Hausdorff space X (namely the spectrum of A). By Gel'fand's theorem, an action α : G ↷ C_0(X) is completely determined by a continuous action of G on the spectrum X, and vice versa. They are related by the identity α_g(f) = f ∘ α_{g^{-1}} for any f ∈ C_0(X) and any g ∈ G. Thus, taking a C*-algebraic point of view, we will denote by α : G ↷ X an action on a locally compact Hausdorff space by homeomorphisms, and use the same letter α for the corresponding action on C_0(X).
If ϕ^{(k)} : A → B, for k = 0, 1, ..., d, are order zero contractions from a C*-algebra A into some C*-algebra B, we say that the map ϕ = Σ_{k=0}^{d} ϕ^{(k)} is a piecewise contractive (d+1)-decomposable completely positive map. The following fact concerning order zero maps is standard and used often in the literature. It follows immediately from the fact that cones over finite dimensional C*-algebras are projective. See [Win09, Proposition 1.2.4] and the proof of [WZ10, Proposition 2.9]. We record it here for further reference.
Lemma 1.1. Let A be a finite dimensional C * -algebra, let B be a C * -algebra and let I ⊳ B be an ideal. Then any piecewise contractive (d + 1)-decomposable completely positive map ϕ : A → B/I lifts to a piecewise contractive (d+1)-decomposable completely positive map ϕ : A → B.
The following technical lemma is straightforward, and variants of it have been used in the literature. We include a short proof for the reader's convenience.
Lemma 1.2. Let B be a separable and nuclear C*-algebra and B_0 a dense subset of the unit ball of B. Then dim_nuc(B) ≤ d if and only if for any finite subset F ⊆ B_0 and for any ε > 0 there exist a C*-algebra A_ε = A_ε^{(0)} ⊕ ··· ⊕ A_ε^{(m)} satisfying Σ_{l=0}^{m} dim^{+1}_nuc(A_ε^{(l)}) ≤ d + 1, and completely positive maps ψ : B → A_ε and ϕ : A_ε → B, with ψ contractive and ϕ contractive on each summand, such that ‖ϕ(ψ(x)) − x‖ < ε for all x ∈ F.

Proof. The forward implication is immediate from the definition of nuclear dimension. For the converse, let F ⊂ B be a finite set, and fix ε > 0. We wish to find a piecewise contractive (d+1)-decomposable completely positive approximation for F through a finite dimensional C*-algebra. Since B_0 is dense in the unit ball of B, we may assume that F ⊂ B_0, by applying a rescaling and a small perturbation if needed. Let A_ε, ψ and ϕ be as in the statement. Set ε′ = max{‖ϕ(ψ(x)) − x‖ | x ∈ F}, and note that ε′ < ε. For each l = 0, 1, ..., m, pick a piecewise contractive dim^{+1}_nuc(A_ε^{(l)})-decomposable approximation of the l-th components of ψ(F) through a finite dimensional C*-algebra, to within (ε − ε′)/(m + 1); composing the direct sum of these approximations with ψ and ϕ yields a piecewise contractive (d+1)-decomposable approximation for F to within ε, as required.
The following lemma is an invariant version of [WZ10, Proposition 2.6]. The modification is straightforward as well, but we include a proof for the reader's convenience.
Lemma 1.3. Let G be a locally compact Hausdorff and second countable group, and let A be a G-C * -algebra. Then any countable subset S ⊂ A is contained in a G-invariant separable C * -subalgebra B ⊂ A with dim nuc (B) ≤ dim nuc (A). In particular, A can be written as a direct limit of separable G-C * -algebras with nuclear dimension no more than dim nuc (A).
Proof. We define an increasing sequence of separable G-invariant C*-subalgebras of A as follows. Let B_0 be the G-C*-subalgebra of A generated by S. Now, suppose B_n has been defined. We pick a countable dense sequence x_1, x_2, ... in B_n. For any k, pick a piecewise contractive dim^{+1}_nuc(A)-decomposable approximation for {x_1, x_2, ..., x_k} to within tolerance 1/k, through a finite dimensional C*-algebra E_k with order zero maps η_k^{(j)} into A. We set B_{n+1} ⊂ A to be the G-C*-subalgebra generated by B_n and the images of the η_k^{(j)} for all applicable k and j. Since each E_k is finite dimensional, and G is second countable, the algebra B_{n+1} is separable. Furthermore, by construction, for any finite subset F ⊂ B_n and any ε > 0, we may choose k large enough so that the resulting approximation is a piecewise contractive dim^{+1}_nuc(A)-decomposable approximation for (F, ε) whose image lies in B_{n+1}. We now define B to be the closure of ⋃_{n=0}^{∞} B_n ⊂ A. This is a separable G-C*-subalgebra. We claim that dim_nuc(B) ≤ dim_nuc(A). Indeed, for any finite set F ⊂ ⋃_{n=0}^{∞} B_n (which is dense in B) and any ε > 0, there is a piecewise contractive dim^{+1}_nuc(A)-decomposable approximation for (F, ε) whose image lies in B. Restricting the domain and co-domain to B gives us the required approximation.
Lemma 1.4. Let X be a locally compact Hausdorff space, let G be a locally compact Hausdorff group, and let α : G ↷ X be a continuous action. Suppose U is a G-invariant open subset of X. Then there is a quasicentral approximate unit for C_0(U) ⋊_α G ⊂ M(C_0(X) ⋊_α G) consisting of elements of C_c(U)_{+,≤1}.

Proof. Let K be a compact subset of U, and let S be a symmetric compact neighborhood of the identity in G. Then S·K is also a compact subset of U. If e ∈ C_c(U) is a positive contraction which is identically 1 on S·K, then for any function f ∈ C_c(G, C_c(U)) ⊆ C_0(X) ⋊_α G such that f is supported in S and f(g) is supported in K for each g ∈ S, we have fe = ef = f. Therefore, for any finite subset F of C_c(G, C_c(U)) there exists a positive contraction e_F ∈ C_c(U) so that e_F acts as the identity on F. Since C_c(G, C_c(U)) is dense in C_0(U) ⋊_α G, it follows that there exists an approximate identity for C_0(U) ⋊_α G whose elements all lie in C_c(U)_{+,≤1}. By the remark after [Arv77, Theorem 1], it follows that there exists a quasicentral approximate unit for C_0(U) ⋊_α G ⊂ M(C_0(X) ⋊_α G) in the convex hull of the elements of C_c(U) described above; in particular it is contained in C_c(U)_{+,≤1}, as required.
We record the following two results from classical dimension theory, which are used later in the paper. These two results apply to the case of metrizable spaces, since any metrizable space is paracompact, Hausdorff and totally normal. For a discussion of different variants of paracompactness and normality, we refer the reader to [Pea75, Chapter 1, Section 4].

Theorem 1.5. Let X and Y be nonempty metrizable spaces. Then dim(X × Y) ≤ dim(X) + dim(Y).

Proposition 1.6. Let X and Y be metrizable spaces, and let f : X → Y be a continuous, open and finite-to-one surjection. Then dim(X) = dim(Y).
Crossed products by a periodic automorphism
We identify the crossed product of C 0 (X) by a periodic action as a subhomogeneous algebra, and use Winter's method to provide an upper bound on its decomposition rank (and thus on its nuclear dimension, too).
Proposition 2.1. Suppose Y is a locally compact metrizable space of finite covering dimension. Let α : C_0(Y) → C_0(Y) be a periodic automorphism, that is, α^n = id for some positive integer n. Then C_0(Y) ⋊_α Z is subhomogeneous and dr(C_0(Y) ⋊_α Z) ≤ dim(Y) + 1.

Proof. That C_0(Y) ⋊_α Z is subhomogeneous follows from the more general fact that if α is a periodic automorphism of a C*-algebra A with α^n = id, then A ⋊_α Z embeds in M_n(A) ⊗ C(T). We give the full details here, since we need a concrete description of the primitive ideal space of the crossed product to establish the bound on the decomposition rank of the crossed product.
Let z ∈ C(T) be the standard generator. (Think of T as the unit circle in C, and z as the inclusion map.) Write B = C_0(Y) ⋊_α Z, let u ∈ M(B) be the canonical unitary implementing α, and let β : B → M_n(C_0(Y)) ⊗ C(T) be the homomorphism which sends f ∈ C_0(Y) to the diagonal matrix with entries f, α(f), ..., α^{n−1}(f) and sends u to a cyclic permutation matrix tensored with z. Let γ_λ, for λ ∈ T, denote the dual action; we restrict γ_λ to B and keep the same notation. Then γ_λ(u) = λu (where we extend γ_λ to the multiplier algebra if need be), and γ_λ fixes C_0(Y) pointwise. Let E : B → C_0(Y) be the canonical expectation, and let E_γ : β(B) → β(C_0(Y)) be the expectation given by integrating along λ. These two expectations are intertwined by β. Since β is injective on C_0(Y) and the two expectations are faithful, the map β is injective, hence an isomorphism onto its image. It follows that C_0(Y) ⋊_α Z is subhomogeneous.
By what we have just shown, we can identify B with a subalgebra of M_n(C_0(Y)) ⊗ C(T); in particular, all irreducible representations of B have dimension at most n. Fix k ≤ n. We denote by Prim_k(B) the primitive ideal space associated to irreducible representations of B on M_k. We denote by Y_k the set of all points whose orbit consists of exactly k points. We claim that Prim_k(B) is homeomorphic to (Y_k/Z) × T. Let ϕ : B → M_k be an irreducible representation. Up to unitary equivalence, the restriction of ϕ to C_0(Y) is given by

ϕ(f) = diag(f(y_1), f(y_2), ..., f(y_k))    (2.1.1)

for some y_1, y_2, ..., y_k ∈ Y. Set v = ϕ(u). We claim that those points are distinct, and constitute a k-periodic orbit of the action. Fix j ∈ {1, 2, ..., k}. Suppose f ∈ C_0(Y) is a positive element such that f(y_j) = 1, f(y) < 1 for all y ∈ Y ∖ {y_j}, and f(y) = 0 for all other points in the orbit of y_j and for all y ∈ {y_1, y_2, ..., y_k} ∖ {y_j}. Let k′ be the period of y_j. Then ϕ(α^l(f)) are mutually orthogonal projections for l = 0, 1, ..., k′ − 1. Thus, the orbit of y_j is contained in {y_1, y_2, ..., y_k}. To see that the orbit of y_j in fact equals {y_1, y_2, ..., y_k}, note that Σ_{l=0}^{k′−1} ϕ(α^l(f)) is a projection which commutes with v, and since ϕ is irreducible, it equals 1, so evaluation at the points of the orbit of y_j coincides with evaluation at {y_1, y_2, ..., y_k}. Likewise, if y_j is repeated m times in the sequence y_1, y_2, ..., y_k, then any other element of the orbit of y_j is repeated m times, since ϕ(f) is unitarily equivalent to ϕ(α^l(f)) for all l. If m > 1, we can pick p = diag(1, 0, 0, ..., 0) using this diagonalization, and then Σ_{l=0}^{k′−1} v^l p v^{*l} is a nontrivial projection in ϕ(B)′, which cannot happen. Therefore, k = k′ and {y_1, y_2, ..., y_k} is an orbit.
Through a unitary equivalence, we may assume that α^{−1}(y_j) = y_{j+1} for j ∈ {1, ..., k − 1} and α^{−1}(y_k) = y_1. Thus v is forced to be a twisted cyclic shift,

v e_j = λ_j e_{j+1} (indices taken mod k), for some λ_1, ..., λ_k ∈ T.    (2.1.2)

Since v^k commutes with the image of ϕ, and ϕ was assumed to be irreducible, there exists a λ ∈ T such that v^k = λ1_k. The choice of orbit and this λ define a map Ψ : Prim_k(B) → (Y_k/Z) × T. We first check that Ψ is continuous. If ϕ : B → M_k is an irreducible representation, then a direct computation shows λ = Π_{j=1}^{k} λ_j = (−1)^{k+1} det(ϕ(u)), whence the second component is continuous. As for the first component, pick [y] ∈ Y_k/Z and a neighborhood U. Let {y_1, y_2, ..., y_k} be the orbit of y, and let V ⊆ Y be an open neighborhood of {y_1, y_2, ..., y_k} whose image in the orbit space is contained in U. Choosing a positive f ∈ C_0(Y) supported in V and identically 1 on the orbit, the open set {π ∈ Prim_k(B) | π(f) ≠ 0} is a neighborhood of the set of irreducible representations associated to this orbit, and Ψ maps it into U × T. Therefore, Ψ is continuous.
Given ([y], λ) ∈ (Y_k/Z) × T, we can consider the covariant representation given by evaluation along the orbit of y as in (2.1.1), together with the twisted cyclic shift of (2.1.2) with λ_1 = λ and λ_2 = ··· = λ_k = 1. Up to unitary equivalence, this does not depend on the choice of y in the orbit [y], so it is a preimage for ([y], λ). This shows that Ψ is surjective. Furthermore, any two preimages of ([y], λ) are unitarily equivalent. To see this, note that after conjugating by a suitable unitary, we can assume that the representation is of the standard form given by (2.1.1) and (2.1.2). However, the matrix in (2.1.2) is unitarily equivalent, via conjugation by a diagonal matrix, to the matrix v above. This shows that Ψ is injective as well.
Lastly, we note that Ψ^{−1} is continuous. To see that, it suffices to consider a sufficiently small neighborhood of ([y], λ), for some y ∈ Y_k and λ ∈ T, which is homeomorphic to a neighborhood of (y, λ) ∈ Y_k × T. The map sending (y, λ) to the pair (ϕ_y, w_λ), thought of as a map from Y_k × T to the set of representations of B on M_k, is clearly continuous, and therefore the composition with the quotient map to Prim(B) is continuous as well.
Since Y is metrizable, by Proposition 1.6 we have dim(Y_k) = dim(Y_k/Z). (A more elementary way to see it is to observe that Y_k → Y_k/Z is a covering map, so Y_k/Z can be written as a finite union of closed sets, each of which is homeomorphic to a closed subset of Y_k. The identity then follows from [Pea75, Chapter 3, Proposition 5.7]. This uses less advanced techniques from dimension theory.) Now, by Theorem 1.5, dim(Prim_k(B)) = dim((Y_k/Z) × T) ≤ dim(Y_k) + 1 ≤ dim(Y) + 1. By the main theorem of [Win04] (page 430 of that article), it follows that dr(C_0(Y) ⋊_α Z) = max_k dim(Prim_k(B)) ≤ dim(Y) + 1. As we have remarked in the introduction, it is crucial that the upper bound we get does not depend on the minimal period of α.
Actions with uniformly compact orbits
We provide here an alternative way to bound the nuclear dimension of a crossed product by a periodic action. This method has the advantage of working in the much more general setting of crossed products by locally compact Hausdorff second countable groups. Throughout the section, such a group will be denoted by G. Following the notations in Section 1, we let X be a locally compact Hausdorff space and we let α : G X be a continuous action by homeomorphisms.
Definition 3.1. A continuous action α : G ↷ X is said to have uniformly compact orbits if there exists a compact subset K ⊂ G such that for any x ∈ X we have G·x = K·x. It is clear that any Z-action arising from a periodic homeomorphism has uniformly compact orbits.
Lemma 3.2. Let α : G ↷ X be a continuous action with uniformly compact orbits. Then: (1) Every orbit G·x is compact.
(2) For any $x \in X$ and for any neighborhood $U$ of the orbit $G \cdot x$, there exists a $G$-invariant, relatively compact, open set $V$ with $G \cdot x \subseteq V \subseteq \overline{V} \subseteq U$. (3) The quotient space $X/G$ is Hausdorff and locally compact. (4) The quotient map $\pi \colon X \to X/G$ is proper.
Proof. (1) is obvious. For (2), the action extends trivially to the one-point compactification $X^+$, and the resulting action again has uniformly compact orbits, witnessed by the same compact subset $K \subset G$. Since $X^+$ is normal, there exists an open subset $W$ such that $G \cdot x \subseteq W \subseteq \overline{W} \subseteq U$. Now $X^+ \setminus W$ is a compact set and thus so is $K \cdot (X^+ \setminus W)$, which is equal to $G \cdot (X^+ \setminus W)$ by the definition of having uniformly compact orbits. Define $V = X^+ \setminus G \cdot (X^+ \setminus W)$. One readily checks that $V$ satisfies the required properties. (3) follows from (2) by the definition of the quotient topology. As for (4), let $C \subseteq X/G$ be a compact set. For any $x \in \pi^{-1}(C)$, choose $V_x$ as in (2); the family $\{\pi(V_x)\}$ forms an open cover of $C$, and therefore has a finite subcover, $\pi(V_{x_1}), \pi(V_{x_2}), \ldots, \pi(V_{x_n})$. Thus, $\pi^{-1}(C)$ is a closed subset of $\bigcup_{j=1}^{n} \overline{V_{x_j}}$, which is compact, and therefore $\pi^{-1}(C)$ is compact.
Using the notation of Lemma 3.2 above, we have a homomorphism $\pi^* \colon C_0(X/G) \to C_0(X)$ given by $f \mapsto f \circ \pi$. Each element in $\pi^*(C_0(X/G))$ is $G$-invariant, and therefore defines an element in the center of the multiplier algebra of $C_0(X) \rtimes G$. This gives $C_0(X) \rtimes G$ the structure of a $C_0(X/G)$-algebra. We suppress the notation for $\pi^*$ in what follows.
If $Y \subseteq X$ is a closed $G$-invariant subset, then we denote the restriction homomorphism from $C_0(X) \rtimes G$ to $C_0(Y) \rtimes G$ accordingly. The following lemma is essentially taken from [Car11, Lemma 3.1], with two minor differences. First, [Car11, Lemma 3.1] is stated for decomposition rank, whereas we need the analogous statement for nuclear dimension. However, the proof carries over essentially verbatim for nuclear dimension as well, and therefore we do not repeat it. Second, the statement there applies to $C(X)$-algebras where $X$ is assumed to be compact, and we need to use it for locally compact spaces. Again, the modification is trivial: any $C_0(X)$-algebra can be viewed as a $C(X^+)$-algebra, where the fiber at infinity is $0$. As the modifications needed are immediate, we do not repeat the proof.
Lemma 3.3. Let $Y$ be a locally compact Hausdorff second countable space, and let $A$ be a separable $C_0(Y)$-algebra. We denote by $A_y$ the fiber over $y$. Then $\dim_{\mathrm{nuc}}(A) \le (\dim(Y) + 1)(\sup_{y \in Y} \dim_{\mathrm{nuc}}(A_y) + 1) - 1$, and the analogous estimate holds for the decomposition rank. We now apply this lemma to crossed products induced from actions with uniformly compact orbits. We denote the stabilizer group of a point $x \in X$ by $G_x$. Theorem 3.4. Let $\alpha \colon G \curvearrowright X$ be a continuous action with uniformly compact orbits. Then $\dim_{\mathrm{nuc}}(C_0(X) \rtimes G) \le (\dim(X/G) + 1)(\sup_{x \in X} \dim_{\mathrm{nuc}}(C^*(G_x)) + 1) - 1$, and the analogous estimate holds for the decomposition rank. Proof. Fix $x \in X$. Recall that by [Bla06, II.10.4.14], $C(G \cdot x) \rtimes G$ is strongly Morita equivalent to $C^*(G_x)$. Therefore, $\dim_{\mathrm{nuc}}(C(G \cdot x) \rtimes G) = \dim_{\mathrm{nuc}}(C^*(G_x))$ and $\mathrm{dr}(C(G \cdot x) \rtimes G) = \mathrm{dr}(C^*(G_x))$. The statement now follows immediately from Lemma 3.3 above.
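Schematically, the proof assembles as follows (a sketch; it assumes, as the proof indicates, that the fiber of the $C_0(X/G)$-algebra $C_0(X) \rtimes G$ over $[x] \in X/G$ is $C(G \cdot x) \rtimes G$):
\[
\dim_{\mathrm{nuc}}\big(C_0(X) \rtimes G\big)
\le \big(\dim(X/G) + 1\big)\Big(\sup_{x \in X} \dim_{\mathrm{nuc}}\big(C(G \cdot x) \rtimes G\big) + 1\Big) - 1
= \big(\dim(X/G) + 1\big)\Big(\sup_{x \in X} \dim_{\mathrm{nuc}}\big(C^*(G_x)\big) + 1\Big) - 1 .
\]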
Finally, we relate the dimension of X to that of its quotient by the action of G.
Proposition 3.5. Let $\alpha \colon G \curvearrowright X$ be a continuous action with uniformly compact orbits. If $G$ is discrete, then $\dim(X/G) = \dim(X)$.
Proof. Since each orbit is finite, the quotient map from X to X/G is a finite-to-one map. The quotient map is furthermore open, since if U is an open set in X, so is G · U , and by the definition of the quotient topology, the image of G · U is open, and coincides with the image of U . The conclusion now follows directly from Proposition 1.6.
4. Rokhlin towers for homeomorphisms without short orbits
We return to topological actions by $\mathbb{Z}$, and use the results from the appendix to construct Rokhlin-type towers, provided that there is a lower bound on the lengths of the orbits which is large enough compared to the desired lengths of the towers. This is achieved through a refined version of the marker property, whose details are contained in Appendix A, by Gábor Szabó. The following lemma is a special case of Lemma A.9 for actions of $\mathbb{Z}$.
Notice that because of condition (a) in the previous lemma, the open cover $\{\alpha^i(Z)\}_{i=1,\ldots,(d+1)(4m+1)}$ splits into topological Rokhlin towers of length $(2m+1)$ in the sense of [Sza15b, Section 2], possibly with some overlaps among the towers. Next, we construct a partition of unity subordinate to this open cover of $K$, which will play the role of $C^*$-algebraic Rokhlin towers in the sense of [HWZ15] (see Remark 4.3 for further discussion). When we split $\{\alpha^i(Z)\}_{i=1,\ldots,(d+1)(4m+1)}$ into topological Rokhlin towers, it is advantageous to first do so with sufficiently large overlaps among the towers. These overlaps are controlled by a new parameter $k$ in the following lemma.
Lemma 4.2. Let $X$ be a locally compact metrizable space with covering dimension at most $d$. Fix $k, m \in \mathbb{Z}_+$ and a compact subset $K \subset X$, and suppose $\alpha \colon \mathbb{Z} \curvearrowright X$ is an action such that $|\mathbb{Z} \cdot x| > (d+1)(4m+1)$ for any $x \in X$.
In order to prove (2), we set $k' = k \lceil 1/\varepsilon \rceil$ and $K' = \bigcup_{i=-k'}^{k'} \alpha^i(K)$, and apply (1) with $k'$ and $K'$ in place of $k$ and $K$ to obtain open sets $Z^{(0)}, \ldots, Z^{(2d+2)} \subset X$ which satisfy (1a) and (1b) for $m$, $k'$ and $K'$ (and thus automatically for $k$ and $K$, too, as $k \le k'$ and $K \subset K'$). Pick a partition of unity such that $\mathrm{supp}(p^{(l)}_j) \subset \alpha^j(Z^{(l)})$ for all possible indices and $\mathrm{supp}(p^{(\infty)}) \subset X^+ \setminus K'$. We set $p^{(l)}_j = 0$ for all $l = 0, \ldots, 2d+2$ and $j \in \mathbb{Z} \setminus [-(m-k'), m-k']$. Notice that the family $\{p^{(l)}_j\}_{l=0,\ldots,2d+2;\, j \in \mathbb{Z}}$ already satisfies (2c) and (2d). In order to produce elements $\{\mu^{(l)}_j\}_{l=0,\ldots,2d+2;\, j \in \mathbb{Z}}$ which also satisfy (2e), we apply an averaging procedure to $\{p^{(l)}_j\}_{j \in \mathbb{Z}}$ over a large Følner set, for each $l \in \{0, \ldots, 2d+2\}$. One then checks that the resulting family $\{\mu^{(l)}_j\}_{l=0,\ldots,2d+2;\, j \in \mathbb{Z}} \subset C_0(X)_{+,\le 1}$ satisfies the desired conditions: (2c) for any $j \in \{-m, \ldots, m\}$, and (2e) for all $j \in \mathbb{Z}$, for all $i \in \mathbb{Z} \cap [0, k]$ and for all $l \in \{0, \ldots, 2d+2\}$. In the towers obtained this way, $\mu^{(l)}_j$ is carried to $\mu^{(l)}_{j+1}$, for all applicable $j$ and $l$, and addition is taken modulo $m$ (so the tower is cyclic). What we obtained here are elements which vanish near $j = \pm m$ and decay towards the ends of the towers. One could refer to such Rokhlin towers as decaying Rokhlin towers, and those could be used to define a variant of Rokhlin dimension. This is studied in [SWZ14] under the term amenability dimension. This kind of dimension would be comparable to the ordinary Rokhlin dimension, with a factor of 2: any decaying Rokhlin tower is cyclic, and any cyclic tower can be made into two decaying Rokhlin towers by applying decay factors. This technique is used in the proof of [HWZ15, Theorem 4.1], and is responsible for a factor of 2 in the dimension estimates. Since in our setting we already obtain decaying Rokhlin towers, we avoid the need to repeat this trick in the proof of Theorem 5.1 below. As a result, the bound we get here has a factor of 2, as opposed to a factor of 4 in [HWZ15, Theorem 4.1].
Theorem 5.1. Let $X$ be a locally compact metrizable space, and let $\alpha \colon \mathbb{Z} \curvearrowright X$ be an action. Then $\dim_{\mathrm{nuc}}(C_0(X) \rtimes_\alpha \mathbb{Z}) \le 2\dim(X)^2 + 6\dim(X) + 4$. The metrizability condition on $X$ can be removed. See Corollary 5.4.
Proof. We assume that $X$ is finite dimensional, otherwise there is nothing to prove. Set $\dim(X) = d$. We need to show that $\dim_{\mathrm{nuc}}(C_0(X) \rtimes_\alpha \mathbb{Z}) \le 2d^2 + 6d + 4$. Recall that $C_c(X) \rtimes_{\alpha,\mathrm{alg}} \mathbb{Z}$ consists of finite sums $\sum_n f_n u^n$ with $f_n \in C_c(X)$, and is dense in $C_0(X) \rtimes_\alpha \mathbb{Z}$.
Therefore, it is enough to verify that, given a finite subset $F \subset (C_c(X) \rtimes_{\alpha,\mathrm{alg}} \mathbb{Z})_{\le 1}$ and $\varepsilon > 0$, the condition of Lemma 1.2 holds for $F$ and $(3d+7)\varepsilon$.
By Lemma 1.4, there is a quasicentral approximate unit for $C_0(X \setminus Y) \rtimes \mathbb{Z} \subseteq C_0(X) \rtimes \mathbb{Z}$ which is contained in $C_c(X \setminus Y)_{+,\le 1}$. Thus, we may choose an element $e \in C_c(X \setminus Y)_{+,\le 1}$ with the required approximation properties. Therefore, we can find maps $\varphi^{(l)}_Y$ as described in the previous paragraph, and sum them up to obtain a piecewise contractive $(d+2)$-decomposable completely positive map; indeed, the required estimate can be checked directly for any $b \in F$. This proves the claim. The next step is to find an approximation for $e^{1/2} F e^{1/2}$, using elements $\{\mu^{(l)}_j\}_{l=0,\ldots,2d+2;\, j \in \mathbb{Z}} \subset C_c(X)_{+,\le 1}$ satisfying the conditions of Lemma 4.2 (with $\varepsilon'$ in place of $\varepsilon$).
Consider the regular representation of $C_0(X) \rtimes_\alpha \mathbb{Z}$ on the Hilbert module $E = \ell^2(\mathbb{Z}, C_0(X))$. That is, we embed $C_0(X)$ in $B(E)$ by $(f \cdot \xi)(n) = (f \circ \alpha^n)\,\xi(n)$, where $f \in C_0(X)$ and $\xi \in E$, and we identify the canonical unitary $u$ with the bilateral shift operator in $B(E)$.
Let $\{e_{i,j}\}$ be the canonical linear basis of the relevant matrix algebra, consisting of matrices with $1$ at the $(i,j)$-th entry and $0$ elsewhere. For any $l \in \{0, \ldots, 2d+2\}$, we define the corresponding completely positive maps. By construction, we have $\mathrm{supp}(\mu^{(l)}_j \circ \alpha^j) \subset Z^{(l)}$ for any $j \in \mathbb{Z}$, so these maps are well defined.
We will prove this claim after we complete the main body of the proof.
To finish the proof, we assemble these maps into a map $\psi$ and consider the resulting approximation diagram. Observe that: (1) $\psi$ is completely positive and contractive.
The following corollary addresses the case in which the space is not separable. The reason for the difference in the statement is to avoid the issue of defining dim(X) when X is not metrizable. The appropriate definition in our case is dim nuc (C 0 (X)).
Proof. We reduce the situation to the metrizable case as follows. Let $I$ be the net of all $\mathbb{Z}$-invariant separable $C^*$-subalgebras of $C_0(X)$ with nuclear dimension no more than $\dim_{\mathrm{nuc}}(C_0(X))$, ordered by inclusion. Since the spectrum of a commutative separable $C^*$-algebra is metrizable, Theorem 5.1 applies to each member of the net, and by Lemma 1.3 the statement follows.
Remark 5.5. One can distill from the proof of Theorem 5.1 the following more general statement, which can be seen as a refinement of [WZ10, Proposition 2.9]: Let $A$ be a $C^*$-algebra and $d_1, d_2 \in \mathbb{N}$. Suppose that for any finite subset $F \subset A$ and any $\varepsilon > 0$, there exists an ideal $I \trianglelefteq A$, a quasicentral approximate unit $\{e_\lambda\}_{\lambda \in \Lambda} \subset I$ and $\lambda_0 \in \Lambda$ such that $\dim_{\mathrm{nuc}}(A/I) \le d_1$ and, for all $\lambda \ge \lambda_0$, there exists a $d_2$-decomposable approximation for $e_\lambda^{1/2} F e_\lambda^{1/2}$ with tolerance $\varepsilon$. Then we have $\dim_{\mathrm{nuc}}(A) \le d_1 + d_2 + 1$.
Appendix A.
By Gábor Szabó. In this note, we establish a technical result of topological nature, Lemma A.9, that is needed to prove Theorem 5.1. (The lemma is stated for actions of arbitrary groups; however, only the special case of $\mathbb{Z}$-actions is needed for Theorem 5.1.) To be more specific, we need to generalize the topological results from [Sza15b, Section 3] that have led to the marker property [Sza15b, 4.3, 4.4] for free actions on finite-dimensional spaces. For aperiodic homeomorphisms, this has been proved previously by Gutman in [Gut15b], building on a technical result by Lindenstrauss from [Lin95].
The way that we generalize the topological results from [Sza15b] is twofold. First, we do not restrict our attention to topological dynamical systems on compact spaces, but consider the locally compact case. Although it was remarked in [Sza15b, 5.4] how one could modify the proofs to cover the locally compact case, this was never carried out explicitly or in detail. Secondly, we do not focus only on free actions, or actions having other weaker global freeness properties such as [Sza15b, 3.4]. Instead, we consider sufficiently good local freeness properties of group actions (with respect to certain finite subsets of the acting group), and deduce a weaker, localized marker-type property, see Lemma A.9. Otherwise, the approach is almost identical with that in [Sza15b]. The reader is warned that what we refer to as a 'localized marker-type property' in this note is not related to Gutman's notion of the local marker property as considered in [Gut15a].
This general perspective, which takes into account local information about a group action rather than global, is crucial for the proof of Theorem 5.1.
The results of this note were obtained during the author's doctoral studies and are part of his dissertation [Sza15a].
Definition A.1 (cf. [Lin95, 3.1] and [Sza15b, 3.1]). Let $X$ be a locally compact metric space, $G$ a discrete group and $\alpha \colon G \curvearrowright X$ an action. Let $M \subset G$ be a subset and $k \in \mathbb{N}$ be some natural number. We say that a set $E \subset X$ is $(M,k)$-disjoint if for all distinct elements $\gamma(0), \ldots, \gamma(k) \in M$ we have $\bigcap_{l=0}^{k} \alpha_{\gamma(l)}(E) = \emptyset$. Lemma A.2 (cf. [Sza15b, 3.7]). Let $X$ be a locally compact metric space with a group action $\alpha \colon G \curvearrowright X$. Let $F \subset\subset G$ be a finite subset and $n \in \mathbb{N}$ a natural number. If a compact subset $E \subset X$ is $(F,n)$-disjoint, then there exists an open, relatively compact neighbourhood $V$ of $E$ such that $V$ is $(F,n)$-disjoint.
Proof. Note that for all $S \subset F$ with $n = |S|$, the relevant intersections are empty, and by compactness of $E$ this persists on a suitable relatively compact neighbourhood. In order to make general statements for actions on finite-dimensional spaces, we naturally need to apply dimension theory for topological spaces. More specifically, we shall now record some well-known facts about properties of covering dimension, which we will refer to throughout this section. These statements come up in [Lin95, Section 3] and [Sza15b, Section 3], but a detailed treatment can be found in [Eng78], see in particular [Eng78, 4.1.5, 4.1.7, 4.1.9, 4.1.14, 4.1.16]. All spaces in question are assumed to be separable metric spaces.
Proof. Clearly $\partial K$ is compact. For $x \in \partial K$, apply (D3) and find relatively compact open neighbourhoods with the required boundary properties. The following is an ad-hoc notational convention for this Appendix that makes it easier to keep track of local freeness properties of group actions.
Definition A.5. Let $X$ be a locally compact metric space, $G$ a group and $\alpha \colon G \curvearrowright X$ an action. Let $M \subset\subset G$ be a finite subset. We define $X(M) = \{x \in X \mid \alpha_g(x) \neq \alpha_h(x) \text{ for all distinct } g, h \in M\}$. Proof. First observe the following. If $E \subset X$ is any subset such that $B = \{\alpha_\gamma(\partial E)\}_{\gamma \in M}$ is in general position, then the set $E$ is automatically $(M,d)$-disjoint. This is because by definition, the intersection of $d+1$ distinct sets in $B$ has dimension at most $-1$, and is thus empty. So it suffices to show the first part of the above statement.
We prove this by induction on $k = |M|$. The assertion trivially holds for $k = 1$. Now assume that the assertion holds for some natural number $k$. We show that it also holds for $k + 1$.
Since $A_0 \subset V \subset X(M)$, we can find for every point $x \in \partial A_0$ a number $\eta(x) > 0$ such that $B_{\eta(x)}(x) \subset V$ and such that the sets $\alpha_{\gamma(j)}(B_{\eta(x)}(x))$ are pairwise disjoint for $j = 0, \ldots, k$. Denote $B_x = B_{\eta(x)}(x)$ and $B'_x = B_{\eta(x)/2}(x)$. Note that since $A_0$ was relatively compact, its boundary $\partial A_0$ is compact. So find some finite subcover $\partial A_0 \subset \bigcup_{i=1}^{N} B'_{x_i}$. We will now construct relatively compact, open sets $A_i$ for $i = 0, \ldots, N$ ($A_0$ is already defined) with the following properties: (3) the collection of boundaries $\{\alpha_\gamma(\partial A_i)\}$ is in general position.
Once we have done this construction, combining (1) with (3) implies that the set $U = A_N$ has the desired property. It remains to show how to construct the sets $A_i$.
Having established localized small boundary conditions out of local freeness properties of a group action, we can now also prove localized marker-type properties: Lemma A.8 (cf. [Gut15b, 6.2] and [Sza15b, 4.3]). Let $G$ be a group and $d \in \mathbb{N}$ a natural number. Let $F \subset\subset G$ be a finite subset and let $g_1, \ldots, g_d \in G$ be group elements with the property that the sets $F^{-1}F, g_1 F^{-1}F, \ldots, g_d F^{-1}F$ are pairwise disjoint. Using the notation $g_0 = 1_G$, set $M = \bigcup_{l=0}^{d} g_l F^{-1} F$. Let $X$ be a locally compact metric space and $\alpha \colon G \curvearrowright X$ an action. Then the following holds: Let $U, V \subset X$ be relatively compact, open sets such that $U$ is $(F,1)$-disjoint and $\overline{V} \subset X(M)$. Then there exists a relatively compact, open set $W \subset X$ such that $U \subset W$, $V \subset \bigcup_{g \in M} \alpha_g(W)$ and $W$ is $(F,1)$-disjoint.
We first claim (A.8.1) that there exists some $\delta > 0$ such that for every $x \in R$, at most $d$ elements $g \in M$ satisfy $\alpha_g(U) \cap B_\delta(x) \neq \emptyset$. Assume that this is not true. Let $x_n \in R$ be elements with $\delta_n > 0$ such that $\delta_n \to 0$ and $|\{g \in M \mid \alpha_g(U) \cap B_{\delta_n}(x_n) \neq \emptyset\}| \ge d+1$ for all $n$.
By compactness, we can assume that $x_n$ converges to some $x \in R$ by passing to a subsequence. Moreover, since $M$ has only finitely many subsets, we can also assume (again by passing to a subsequence if necessary) that there are distinct $\gamma(0), \ldots, \gamma(d) \in M$ such that $\alpha_{\gamma(l)}(U) \cap B_{\delta_n}(x_n) \neq \emptyset$ for all $n$ and all $l = 0, \ldots, d$. But then $\delta_n \to 0$ implies $x \in R \cap \bigcap_{l=0}^{d} \overline{\alpha_{\gamma(l)}(U)}$, a contradiction. Note that $B_\rho(R)$ is relatively compact and $(M^{-1},1)$-disjoint by our choice of $\rho$. Now choose finitely many points $z_1, \ldots, z_s \in R$ with $R \subset \bigcup_{i=1}^{s} B_\delta(z_i)$. Since the sets $g_l F^{-1}F$, $l = 0, \ldots, d$, are pairwise disjoint, observe that (A.8.1) enables us to define a map $c \colon \{1, \ldots, s\} \to \{0, \ldots, d\}$ such that (A.8.2) $\alpha_g(U) \cap B_\delta(z_i) = \emptyset$ for all $g \in g_{c(i)} F^{-1} F$.
Finally, set $W = U \cup \bigcup_{i=1}^{s} \alpha_{g_{c(i)}^{-1}}(B_\delta(z_i))$.
Obviously, $W$ is a relatively compact, open set with $U \subset W$. Moreover, we have $V \subset \bigcup_{g \in M} \alpha_g(W)$ by construction. At last we have to show that $W$ is $(F,1)$-disjoint. Suppose that $\alpha_a(W) \cap \alpha_b(W) \neq \emptyset$ for some $a \neq b$ in $F$. That is, there exist $x, y \in W$ such that $\alpha_a(x) = \alpha_b(y)$. Let us go through all the possible cases: • $x, y \in U$ is obviously impossible. • $x \in \alpha_{g_{c(i_1)}^{-1}}(B_\delta(z_{i_1}))$ and $y \in \alpha_{g_{c(i_2)}^{-1}}(B_\delta(z_{i_2}))$ for some $1 \le i_1, i_2 \le s$. It follows that $\alpha_{b^{-1} a g_{c(i_1)}^{-1}}(B_\delta(z_{i_1})) \cap \alpha_{g_{c(i_2)}^{-1}}(B_\delta(z_{i_2})) \neq \emptyset$. Observe that by $a \neq b$, we have $b^{-1} a g_{c(i_1)}^{-1} \neq g_{c(i_2)}^{-1}$ in $M^{-1}$. Since $B_\rho(R)$ is $(M^{-1},1)$-disjoint, the intersection above is empty. So this is impossible. • $x \in U$ and $y \in \alpha_{g_{c(i)}^{-1}}(B_\delta(z_i))$ for some $1 \le i \le s$. Then it follows that $\alpha_a(x) = \alpha_b(y) \in \alpha_a(U) \cap \alpha_{b g_{c(i)}^{-1}}(B_\delta(z_i)) \neq \emptyset$.
Or equivalently, $\alpha_{g_{c(i)} b^{-1} a}(U) \cap B_\delta(z_i) \neq \emptyset$, a contradiction to the definition of $c(i)$, see (A.8.2). So we see that $W$ is indeed $(F,1)$-disjoint.
The following result constitutes the main technical result of this Appendix: Lemma A.9 (cf. [Gut15b, 6.1] and [Sza15b, 4.4]). Let $G$ be a group and $d \in \mathbb{N}$ a natural number. Let $F \subset\subset G$ be a finite subset and let $g_0, g_1, \ldots, g_d \in G$ be group elements with the property that the sets $g_0 F^{-1}F, g_1 F^{-1}F, \ldots, g_d F^{-1}F$ are pairwise disjoint. Set $M = \bigcup_{l=0}^{d} g_l F^{-1} F$. | 2016-08-19T17:39:19.000Z | 2015-09-04T00:00:00.000 | {
"year": 2015,
"sha1": "386f44a5e1171ecd60ce684d22ca94e119fb24a2",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.aim.2016.08.022",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "e010621d7d29a6a9d8e40da5d909f9786b1773ee",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
109547921 | pes2o/s2orc | v3-fos-license | Plasma protein adsorption on Fe3O4-PEG nanoparticles activates the complement system and induces an inflammatory response
Background Understanding the interaction of iron oxide nanoparticles (IONP) with the body milieu is crucial to guarantee their efficiency and biocompatibility in nanomedicine. Polymer coating of IONP with polyethylene glycol (PEG) or polyvinylpyrrolidone (PVP) is an accepted strategy to prevent toxicity and excessive protein binding. Aim The aim of this study was to investigate the adsorption of complement proteins by IONP, the resulting complement activation, and the consequent inflammatory response, as a strategy to further elucidate their biocompatibility. Methods Three types of IONP with different surface characteristics were used: bare (IONP-bare), PVP-coated (IONP-PVP) and PEG-coated (IONP-PEG). IONP were incubated with human plasma and the adsorbed proteins were identified. BALB/c mice were intravenously exposed to IONP to evaluate complement activation and the proinflammatory response. Results Protein corona fingerprinting showed that the PEG surface around IONP promoted a selective adsorption of complement recognition molecules, which would be responsible for the complement system activation. Furthermore, IONP-PEG activated the complement system in vitro and induced a substantial increment of the C3a and C5a anaphylatoxins, while IONP-bare and IONP-PVP did not. In vivo, IONP-PEG induced an increment in complement activation markers (C5a and C5b-9) and proinflammatory cytokines (IL-1β, IL-6, TNF-α). Conclusion The engineering of nanoparticles must incorporate the association between complement proteins and nanomedicines, which will regulate the immunostimulatory effects through a selective adsorption of plasma proteins and will enable a safer application of IONP in human therapy.
Introduction
Iron oxide nanoparticles (IONP) have been the subject of intensive research for many years due to their intrinsic properties, for which they are used in nanomedicine as contrast agents, 1 hyperthermia inductors, 2 and drug-delivery carriers. 3 However, despite the initial enthusiasm to popularize and spread their use in biomedical areas, promising candidates previously approved by the Food and Drug Administration have been withdrawn due to hypersensitivity and toxicity concerns. 4,5 One of the major pitfalls of IONP-based nanomedicines is the lack of biocompatibility with blood components and the immune system. When IONP and other nanomedicines are injected into the bloodstream, proteins and other biomolecules are rapidly adsorbed on their surface, creating a novel biological entity whose composition depends on the nanoparticle (NP) physicochemical properties. 6,7 This newly formed complex, known as "protein corona" (PC), shapes the biological interaction of the IONP with other biomolecules, cells, and physical barriers. 8 Moreover, the interactions of IONP with components of the immune system, such as phagocytic cells and the complement system, are key modulators of their efficacy, distribution, and toxicity. 9,10 The complement system is a group of ~30 proteins (distributed as soluble elements in plasma and as extracellular receptors on immune cells) that provides critical immunoprotective and immunoregulatory functions; it opsonizes and induces a series of inflammatory processes against pathogens and nanostructured materials, which are perceived as foreign agents. 11 The complement system can be triggered through three different pathways: 1) the classical pathway, activated by immune complexes (antigen-antibody) and by other molecules such as C-reactive protein; 2) the lectin pathway, activated by the binding of mannan-binding lectin (MBL) or ficolins to mannose-containing carbohydrates or N-acetylglucosamine; and 3) the alternative pathway, which can be spontaneously initiated when the complement component C3 binds to a reactive surface. 12 Although the complement pathways depend on different molecules for their initiation, they converge to generate the same set of effector molecules. Anaphylatoxins (C3a, C4a, and C5a) are soluble byproducts of complement activation that are potent inflammatory inductors. When these peptides are produced in an uncontrolled condition, they can induce a wide spectrum of side effects such as cardiac and respiratory complications. 13 Some nanomedicines, such as Doxil, 14 nanomaterials such as carbon nanotubes (CNT), 15 and IONP 16 have shown the potential to interact with the complement system, resulting in significant activation. Consequently, evaluation of the complement system has recently gained attention in nanomedicine development, since its activation has been linked to numerous adverse effects in animal models and patients. 12,13,17 In this study, we aimed to test whether the use of polymeric coatings on IONP is capable of adsorbing specific proteins that modulate complement activation, and whether this interaction would be translated into an inflammatory response. Through characterization of the human PC, we demonstrate that PEG coating on IONP promoted a selective adsorption of complement recognition molecules, which would be responsible for the complement system activation.
These observations will help to gain insight into how nanomedicines interact with the proteins of the immune system and to develop potential strategic interventions to modulate (inhibit or stimulate) complement activation in vivo, increasing the therapeutic applications of IONP.
Fe3O4 NP and physicochemical characterization
Three Fe3O4 NP were used in this study: bare Fe3O4 NP (IONP-bare) were a kind gift from Dr Jaime Santoyo (Physics Department, Cinvestav-IPN) and were synthesized as previously described. 18 Polyvinylpyrrolidone (PVP)- and PEG-coated IONP were purchased from Sigma Aldrich (St Louis, MO, USA). IONP were examined by transmission electron microscopy (TEM) in a JEM2010 (JEOL Ltd.) for particle morphology and size distribution in TEM mode, and for elemental mapping by energy-dispersive X-ray spectroscopy. Hydrodynamic particle size distributions were measured by centrifugal liquid sedimentation in a DC24000 system (CPS Instruments Inc.). A certified polyvinyl chloride particle calibration standard, provided by the instrument supplier, was used to calibrate all measurements. The zeta potential of IONP was analyzed by laser Doppler microelectrophoresis using the Zetasizer Nano ZS90 size analyzer (Malvern Instruments Ltd.). The endotoxin content was assessed by the endpoint chromogenic Limulus Amebocyte Lysate assay. The three IONP were negative for endotoxin contamination (<0.1 EU/mL).
PC formation and protein identification
IONP were incubated with pooled human plasma (2.5 mg/mL) at 37°C for 30 minutes. This temperature was chosen to emulate the physiological conditions in the bloodstream and to functionally evaluate complement in vitro. 19 Unbound proteins were removed after centrifugation at 22,000× g for 30 minutes, followed by three washing steps with PBS-EDTA. The resulting IONP-protein complex is considered the "hard corona," which consists of those proteins adsorbed on the NP surface for enough time to influence the NP's interactions with living systems. 20 Proteins were desorbed from IONP by incubation with LDS Sample Buffer (Novex) at 70°C for 10 minutes. Then, the total recovered protein of each IONP was separated by one-dimensional SDS-PAGE on 4%-12% Bis-Tris polyacrylamide gels. 21,22 The lanes were cut into fractions 23 (Figure S1) and prepared for further analysis by reduction, alkylation, and tryptic digestion (trypsin-LysC, 37°C, overnight). 24 The peptide extracts were analyzed on a nanoliquid chromatography system (Acquity UPLC, Waters), coupled by an electrospray ionization interface to a linear ion trap mass analyzer (LTQ Velos, Thermo). The raw files were converted to mzML HUPO standard archives using the ProteoWizard converter. 25 The protein search was performed with Comet 26 against a Uniprot fasta database for Homo sapiens, employing a target-decoy strategy. 27 The resulting peptide and protein hits were validated using PeptideProphet/ProteinProphet, 28,29 and converted to the final result files. The complete bioinformatic workflow was implemented in Taverna (https://taverna.incubator.apache.org/), 30 running on massypup64. 30 The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium (http://www.proteomexchange.org/) 31 via the PRoteomics IDEntifications (PRIDE) partner repository (http://www.ebi.ac.uk/pride/archive/) 32 with the dataset identifier PXD004441. The complete list of identified proteins is listed in Table S1.
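The target-decoy strategy mentioned above estimates the false discovery rate (FDR) by counting decoy hits among the accepted matches. The following is a minimal Python sketch of the idea (the scores and the acceptance threshold are illustrative; the study's actual validation was carried out with PeptideProphet/ProteinProphet):

def fdr_at_threshold(psms, threshold):
    # psms: list of (score, is_decoy) tuples, one per peptide-spectrum match
    targets = sum(1 for score, is_decoy in psms if score >= threshold and not is_decoy)
    decoys = sum(1 for score, is_decoy in psms if score >= threshold and is_decoy)
    # decoy hits estimate the number of false target hits above the threshold
    return decoys / targets if targets else 0.0

# Example: pick the lowest score threshold whose estimated FDR stays below 1%
psms = [(3.2, False), (2.9, False), (2.7, True), (2.5, False), (1.8, True)]
threshold = min(s for s, _ in psms if fdr_at_threshold(psms, s) <= 0.01)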
PC biological analysis using ClueGO
ClueGO, a Cytoscape plug-in (version 2.2.5), was employed to analyze, classify, and visualize the network of biological processes related to each PC. 33 Data were analyzed by the hypergeometric test with P-value correction by the Bonferroni step-down method. Global Homo sapiens was used as the background annotation database, and Gene Ontology terms were visualized as nodes linked based on their kappa score level (0.5).
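The per-term hypergeometric test that ClueGO applies can be sketched as follows (a minimal illustration with scipy; all counts are hypothetical, and ClueGO's actual correction is the Bonferroni step-down rather than the plain Bonferroni used here):

from scipy.stats import hypergeom

def enrichment_p(n_background, n_term, n_corona, n_overlap):
    # P(X >= n_overlap) when n_corona proteins are drawn from a background of
    # n_background proteins, of which n_term are annotated to the GO term
    return hypergeom.sf(n_overlap - 1, n_background, n_term, n_corona)

n_terms_tested = 500  # hypothetical number of GO terms examined
p = enrichment_p(n_background=20000, n_term=60, n_corona=263, n_overlap=12)
p_corrected = min(1.0, p * n_terms_tested)  # simple Bonferroni upper bound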
Complement activation in human plasma
Blood plasma samples were obtained from 30 healthy volunteer donors by venipuncture of whole blood. Samples were pooled, aliquoted, and stored at -80°C until use. IONP were dispersed at 1 mg/mL in plasma and incubated for 30 minutes at 37°C. After that, cold EDTA was added (at 10 mM final concentration) to stop all the complement activation pathways. Protein-IONP complexes were separated by centrifugation at 15,000× g for 30 minutes at 4°C. After that, C3a, C4a, and C5a concentrations were determined in plasma samples with the Cytometric Bead Array Human Anaphylatoxin Kit (Cat No 561418, Becton Dickinson), which measures C3a, C4a, and C5a together with their desArg forms (C3a-desArg, C4a-desArg, and C5a-desArg). Informed consent was obtained in writing from each participant prior to inclusion in the study, which was performed in accordance with the Declaration of Helsinki and according to the institutional bioethics code. The procedure followed for extracting plasma proteins from human blood is classified as research with minimal risk according to the current Regulation of the General Health Law in the Field of Health Research, Art 17, and does not require approval from an institutional research ethics committee.
In vivo complement activation
Male BALB/c mice (7-8 weeks old) were randomly assigned into four experimentation groups (five mice per group): 1) control (sterile 0.9% NaCl solution); 2) exposed to IONP-bare; 3) exposed to IONP-PVP; and 4) exposed to IONP-PEG. The mice were anesthetized under a 3% isoflurane/oxygen mixture and intravenously injected via the lateral tail vein at a dose of 5 mg NP/kg body weight. The selected dose of 5 mg/kg of body weight is in agreement with a high dose achieved in Phase II clinical studies of IONP used for MR angiography. 34,35 A pilot study with Zymosan A, a known activator of the complement system, was performed to determine a time point at which significant complement activation occurs. Based on our results, the order and timeline of the hypothesized biological events (complement activation, then increment of cytokines), and previous literature, 17,36 we chose 90 minutes as the exposure time in order to evidence an increment of both the complement markers and the cytokines. After exposure, mice were deeply anesthetized and euthanized by terminal exsanguination (intracardiac puncture). Liver, spleen, kidney, heart, and brain were extensively washed in saline solution and collected for Fe biodistribution analysis. Plasma concentrations of sC5a and sC5b-9 were determined by ELISA with commercial kits (Mouse Complement Component C5a assay kit, Cat No DY2150, R&D Systems Inc., and Mouse Terminal Complement Complex C5b-9 Kit, Cat No CSB-E08710m, Cusabio Biotech Co. Ltd., respectively). Levels of TNF-α, IL-1β, and IL-6 were measured with a multiplexed cytokine assay (Mouse Magnetic Panel, catalog no. LMC0001M, Novex; Life Technologies), following the manufacturer's instructions. Quantification of Fe in the tissues was performed by particle-induced X-ray emission (PIXE) following a methodology previously described. 37 PIXE measurements were validated with two standards from the International Atomic Energy Agency (IAEA 153 and IAEA 155). All animal experimental procedures were approved by the Institutional Committee for the Care and Use of Laboratory Animals at CINVESTAV-IPN, which follows the regulations established by the Mexican Official Norm for the Use and Welfare of Laboratory Animals (NOM-062-ZOO-1999), in accordance with the Guide for the Care and Use of Laboratory Animals, USA.
Statistical analysis
Data analysis was performed using GraphPad Prism 6 (GraphPad Software Inc.). Normal distribution of the data was tested with the Shapiro-Wilk test. Comparisons between experimentation groups were analyzed by one-way ANOVA followed by Dunnett's post hoc test. Differences with P<0.05 were considered statistically significant. Data are presented as the mean ± standard error of the mean (SEM).

Results

Elemental mapping by energy-dispersive X-ray spectroscopy showed the absence of contamination with elements other than iron and oxygen. The IONP were characterized suspended in physiological sterile saline solution, since this was the vehicle used for the corona formation and complement activation assays. The hydrodynamic diameter measurements showed that the three IONP tend to spread within two size populations, the smaller (a few nanometers) corresponding to primary particles and a larger one corresponding to agglomerates of primary particles (Table 1). The percentages of primary particles that remained after IONP suspension were: 71.2% for IONP-bare, 83.3% for IONP-PVP, and 65.1% for IONP-PEG. Compared to IONP-bare and IONP-PEG, the IONP-PVP particles suspended in saline media were less susceptible to forming low-sized agglomerates from primary particles.
Protein adsorption to IONP surface
In order to identify the adsorbed proteins on the three IONP and to determine how surface coatings influence the activation of the immune response, we obtained a proteomic profile of the PC that surrounds the three IONP (PC fingerprinting) using LC-MS/MS. A total of 263 shared proteins (present on all three IONP) were identified, representing 60.59% of the IONP-PEG associated proteins, 64.30% for IONP-bare, and 52.9% for IONP-PVP. The sets of non-shared proteins in the respective coronas contributed 8.5% of the IONP-bare PC, 18.3% of the IONP-PVP PC, and 15.5% of the PC on IONP-PEG (58, 125, and 106 proteins, respectively) (Figure 2A). In regard to physicochemical properties of the proteins, such as the isoelectric point (pI) and molecular weight (MW), we observed slight differences between the three IONP (Figure 2B and C). For example, IONP-PEG adsorbed more proteins with a pI from 5 to 6, and fewer proteins with a pI from 8 to 9, compared to IONP-bare and IONP-PVP; IONP-PEG did not have a preference for proteins of a particular MW (Figure 2C). IONP-bare adsorbed more proteins with a pI from 6 to 7. Regarding the identity and function of the adsorbed proteins, the biological process network of proteins associated with IONP-bare includes roles in blood coagulation, fibrin clot formation, angiogenesis, regulation of cell migration, substrate adhesion, and proteins related to the modulation by host of viral process (Figure 3A). On the other hand, IONP-PVP adsorbed proteins related to regulation of proteolysis, small GTPase signaling, receptor-mediated endocytosis, the Fc receptor-mediated stimulatory signaling pathway, and negative regulation of wound healing (Figure 3B).
The corona of IONP-PEG, in turn, showed proteins belonging to the activation of the immune response, opsonization, the lectin pathway of complement activation, regulation of protein processing, platelet activation, actin filament organization, and the Fc signaling pathway involved in phagocytosis (Figure 3C). Although the three IONP coronas share a set of proteins that participate in particular biological processes, there are proteins that were exclusive to each coating (Table S1). For example, IONP-PEG adsorbed proteins involved in plasma lipoprotein particle remodeling (Figure 4).
Complement activation in vitro
In order to test whether the adsorption of complement proteins could translate into an activation of this system, human plasma was exposed to the three different IONP, and the levels of anaphylatoxins were measured. After 30 minutes, samples exposed to IONP-PEG showed a significant 2-fold increment in C3a and a 4.8-fold increment in C5a concentration (Figure 5A and B). However, no evidence of change was observed for C4a (Figure 5C). In contrast, plasma exposed to IONP-PVP and IONP-bare did not show any significant change in anaphylatoxin concentrations. Figure 5D shows a representative immunoblot of the C3a complement component adsorbed in the PC of IONP, where the increment for IONP-PEG can be observed. As previously mentioned, the complement activation pathways possess common downstream points; the excision of C4 into C4b (which will form the C3 convertase with C2a) and C4a is a common endpoint of the classical and lectin pathways. Consequently, the production of C3a and C5a with the absence of C4a in the human plasma exposed to IONP-PEG hints toward a complement activation that is predominantly through the alternative pathway. Male BALB/c mice were administered IONP to gain insight into the capability of IONP to activate the complement system and induce a proinflammatory response in vivo. After exposure, the plasma concentration of the anaphylatoxin C5a in IONP-bare and IONP-PVP treated animals remained without changes compared to the control group (Figure 6A). In contrast, IONP-PEG induced a 1.3-fold increment of C5a in the plasma of exposed mice. Quantification of sC5b-9 showed that IONP-bare and IONP-PVP did not induce a significant increment (P>0.05) in the exposed animals. However, a 2.3-fold increment was observed in the group exposed to IONP-PEG (Figure 6B). The proinflammatory cytokine profile showed no significant changes in mice exposed to IONP-bare and IONP-PVP (Figure 6C), while the group exposed to IONP-PEG showed a 1.60-fold increment of IL-1β, 5.7-fold in TNF-α and 2.6-fold of IL-6 compared to the control group. PIXE measurements revealed that animals exposed to IONP-PEG accumulated higher concentrations of NP in the liver, spleen, and kidney (2.3-fold, 1.57-fold, and 1.43-fold, respectively) compared to control (Figure 6D). IONP-bare induced an increment in the iron content of the liver (1.77-fold) compared to the control group. IONP-PVP did not induce a significant change (compared to control) in the iron concentration of the selected organs (P>0.05). Given that at such short exposure times IONP dissolution is negligible and the PIXE technique allows the quantification of iron in whole organs, 37,38 the retention of the initial dose was calculated taking into account the weight of the organs, the concentration of iron in the control, and the IONP dose administered to each animal. These estimations showed that IONP-PEG was retained in larger amounts in the selected organs, with a retention of 86% of the initial dose, followed by IONP-bare with 55.9% and IONP-PVP with 33% of the administered dose.
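The retention estimate described above amounts to a short mass-balance calculation. A minimal Python sketch follows (all numerical values are illustrative placeholders, not the measured data; a 20 g mouse dosed at 5 mg/kg receives 100 µg of NP):

def percent_injected_dose(c_exposed, c_control, organ_mass_g, dose_ug):
    # c_exposed / c_control: iron concentration in ug per g of tissue;
    # the excess over control is attributed to the administered IONP
    excess_ug = (c_exposed - c_control) * organ_mass_g
    return 100.0 * excess_ug / dose_ug

# liver, spleen and kidney of one hypothetical mouse: (exposed, control, mass in g)
organs = [(80.0, 30.0, 1.1), (120.0, 60.0, 0.1), (45.0, 35.0, 0.3)]
retained = sum(percent_injected_dose(ce, cc, m, 100.0) for ce, cc, m in organs)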
Discussion
In this study, we investigated whether the interaction of plasma proteins with different coatings on IONP is capable of triggering an adverse effect, namely a proinflammatory response, in in vivo and in vitro models. Our results indicate that the PEG coating promoted complement activation, a process linked to complement-mediated phagocytosis in human and mouse macrophages. Activation of the complement system by PEG coating on IONP could be achieved by direct or indirect mechanisms, such as the specific recognition by anti-PEG IgG2 and IgM specific antibodies that could initiate the classical pathway. 41 Also, PEGylated nanostructures can interact directly with complement proteins; particularly, if the C3 component is trapped inside the hydrated structure of PEG, the conformational changes of C3 and its spontaneous hydrolysis could be accelerated, leading to the assembly of the fluid-phase C3 convertase C3bB. 42 Moreover, Szebeni et al 43 have proposed that the exposed hydroxyl groups at the end of PEG structures could act as molecular anchors for C3b and thus initiate the alternative pathway. Recent studies focused on magnetic NP have reported that size, charge, surface, polymer conformation, and molecular structure influence protein adsorption. For example, Hu et al 44 reported an effect of the primary size on the composition and abundance of proteins on pristine IONP; at smaller sizes (<30 nm), NP adsorbed fewer proteins from FBS and displayed a different composition compared to IONP of 200 and 400 nm. In contrast, in this study, we did not observe a relation between size and the amount or identity of proteins adsorbed. In regard to surface charge, Sakulkhu et al 45 observed that positively charged IONP coated with polyvinyl alcohol or dextran adsorbed more proteins from FBS than ones with a negative charge. In contrast, the IONP used in this study had a negative charge, and the most abundant proteins in physiological conditions at pH 7.4 are negative; in particular, IONP-PEG adsorbed more proteins with a pI from 5 to 6, and their zeta potential was the most negative (-46.3 mV) compared to the other two IONP. Moreover, Hofmann 46 proposed that protein positive-charge domains could mediate these protein-particle and protein-protein interactions, as well as protein conformational change and denaturation. These results demonstrate that the PC depends on the entire set of physicochemical characteristics of IONP and suggest that electrostatic force is not the only factor that can modulate protein adsorption on IONP. The PC profiles were associated with several biological processes; this protein fingerprinting gives an insight into responses that could potentially be triggered by IONP, such as blood coagulation, complement activation, regulation of protein processing, lipid metabolism, and cytoskeleton organization. For example, IONP-bare adsorbed ex situ proteins related to blood coagulation and angiogenesis. It has been reported that IONP adsorb coagulation factor VII and fibrinogen, 45 which can activate the kallikrein system. In this regard, Simberg et al 47 demonstrated that amine-modified IONP adsorb plasma kallikrein and high molecular weight kininogen and induce thrombosis and activation of the kallikrein-kinin system in vivo. On the other hand, IONP-PVP adsorbed proteins related to regulation of proteolysis (associated with blood coagulation) and the Fc receptor-mediated stimulatory signaling pathway, which could potentially modify the risk of intravascular coagulation or incidents of vascular thrombosis.
48 Moreover, nanomedicines could exacerbate some pathologies by impairment or depletion of plasma proteins. For example, the adsorption of lipoproteins is related to hypercholesterolemia and a higher risk of atherosclerosis. Muller et al 49 observed that lipoproteins adsorbed to polystyrene NP disintegrate. Also, proteins adsorbed onto redox-active NP can undergo denaturation by oxidation, 50 which can induce systemic effects such as thrombosis. 51,52 It is noteworthy that some of the unique proteins associated with the IONP-PEG corona have a critical role in complement activation, the immune response, and the coagulation system. For example, we identified MBL2, L-ficolin, collectin-liver 1, galectin 3, SKAP2, protein G6b, PDGF, annexin V, and collagen alpha chain (Table S1). Particularly, the first three proteins are implicated in complement activation. MBL belongs to the collectin family and is an element of the innate immune system capable of activating the lectin complement pathway. 53 Also, MBL binds to late apoptotic cells and necrotic cells, facilitating their uptake by macrophages. 54 L-ficolin is a pattern recognition molecule that specifically binds to mannan, LPS, 1,3-β-glucans, and lipoteichoic acids. 55 Meanwhile, collectin-liver 1 has recently been recognized as a pattern recognition molecule that could interact with carbohydrates through its specific recognition domain, and it is found in circulation associated with other lectins. 56 Both MBL2 and L-ficolin are among the few described activators of the lectin pathway, associated with MASP. The resulting complex activates MASP, which cleaves C2 and C4 to form the C3 convertase. 57 Altogether, MBL2, L-ficolin, and collectin-liver 1 evidence that IONP-PEG induced a selective adsorption of complement recognition molecules, which would be responsible for the complement system activation. This association of recognition molecules is in agreement with previous observations with polyethyleneoxide polymeric NP, 58 soluble PEG, 39 PEGylated CNT, 59 and iron oxide nanoworms. 60 Although the lectin pathway is a strong candidate to explain the route of complement activation, it has been reported that PEG and PEGylated CNT can also increase C3 turnover, leading to an amplification loop in the alternative pathway. 39,59 In addition to complement activation proteins and immunoglobulins, we identified proteins implicated in the activation of immune system cells that could explain the proinflammatory responses at other levels. Such is the case of galectin, a protein that activates the inflammatory response in immune cells, promotes the adhesion of neutrophils, and promotes phagocytosis in macrophages. 61 It is clear that after proteomic analysis for fingerprinting the PC on nanomedicines, a next logical step is to identify whether the components of the PC may have a causal involvement in physiological disorders, a tool that could be used for both personalized diagnostics and therapeutic treatments. 62 Although differences exist between the mouse and human immune systems, the complement system is highly conserved between these species. 63 The main differences between these two species are present in the proteins that modulate the amplification cascade rather than in the proteins that participate in the triggering of the proteolytic pathways. 64 In agreement with our results, Banda et al 60 demonstrated that iron oxide nanoworms coated with dextran induced complement activation both in human and mouse.
However, they found that the alternative complement pathway is the predominant pathway in humans, whereas in mice the main activation occurs through the lectin pathway. It has been demonstrated that interspecies differences in protein-binding profiles could exist for some NP; however, adsorption of complement protein appears to be a unifying factor among PCs and the particular differences in the magnitude of the complement activation may be due to the amount of recruited complement triggering proteins. 22 The agreement between the observations in mice and humans opens the possibility to include systematic evaluations of complement activation by nanomedicines as an integral part of preclinical studies to use this parameter as a predictor of biocompatibility and bioavailability.
Our biodistribution findings are in agreement with previous reports, wherein IONP have been observed to accumulate in the liver and spleen of exposed animals. For example, Xue et al 65 found that IONP grafted with PEG of different molecular weights, intravenously administered in mice, are rapidly cleared from the circulation (t½ = 15-27 minutes) and accumulate predominantly in the liver and spleen. The authors confirmed that, in these organs, IONP did not induce visible necrosis or inflammatory infiltration. Moreover, Huang et al 66 observed that, regardless of the size of the PVP coating, IONP are significantly accumulated in the liver and spleen after 1 hour. It is possible that, after internalization in the organs, IONP could exert an inflammatory effect through the induction of necrosis, increase of reactive oxygen species, and lysosomal or mitochondrial damage, as previously described in vitro. Due to our experimental approach, it is not possible to exclude some contribution of cytokines originated in the organs. However, we postulate that, given our exposure time (90 minutes), it is probable that the proinflammatory effect was generated as a response to the anaphylatoxins (the soluble mediators of complement activation) generated upon the first contact between IONP-PEG and plasma, which is in agreement with previous results. 67 The anaphylatoxin C5a is the most potent proinflammatory mediator released upon complement activation; it has a greater potency to induce histamine release compared to C3a, and it acts as a strong chemoattractant that activates and guides neutrophils, monocytes, and macrophages to the site of complement activation. 68 Additionally, C3a and C5a trigger the proinflammatory response through their corresponding G-protein-coupled receptors (C3aR and C5aR, respectively), causing the release of proinflammatory cytokines such as IL-1β, TNF-α, and IL-6 in monocytes, macrophages, basophils, and neutrophils. 11 In addition, interaction with their receptors stimulates oxidative metabolism in neutrophils, the production of ROS, and the release of lysosomal enzymes from various phagocytic cells.
From our results, it can be suggested that, in order to predict the systemic effects associated with nanomedicines, it is not enough to know only the identity of the proteins forming the PC; it is also becoming necessary to develop experimental approaches capable of considering the complex interactions between biomolecules, immune cells, and nanomedicines.
Conclusion
Our results suggest that the biological effects elicited by the interaction with IONP are associated not only with their pristine properties but also with the identity of the protein-IONP complexes. It is highly probable that specific polymeric determinants, projected in the PEGylated structure of IONP-PEG, are recognized as a pathogen-associated pattern and promote their recognition by complement proteins. Additionally, we have shown that complement activation measurements in humans and mice are in agreement and could be used as an integral part of preclinical studies, employing this parameter as a predictor of biocompatibility and bioavailability. The engineering of nanoparticles that takes into account the association between complement proteins and nanomedicines will reduce the immunostimulatory effects through a selective adsorption of plasma proteins and will enable a safer application of IONP in human therapy.
"year": 2019,
"sha1": "77867c45a404789d5e34e91a4f306c91b976f589",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=48706",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "77867c45a404789d5e34e91a4f306c91b976f589",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
4656845 | pes2o/s2orc | v3-fos-license | Effect of actions promoting healthy eating on students ’ lipid profile : A controlled trial
1 Universidade Federal da Bahia, Escola de Nutrição, Departamento Ciência da Nutrição. Av. Araújo Pinho, 32, Canela, 40110-150, Salvador, BA, Brasil. Correspondência para/Correspondence to: RC RIBEIRO-SILVA. E-mail: <rcrsilva@ufba.br>. 2 Universidade Federal da Bahia, Faculdade de Odontologia, Departamento de Odontologia Social e Pediátrica. Salvador, BA, Brasil. Support: Fundação de Amparo à Pesquisa do Estado da Bahia (Project no 5250/2009) and Programa de Educação para o Trabalho Projeto Parceria: Educação e Trabalho (PROPET), Ministério da Saúde (no 25000.1118531/2012-49).
Effect of actions promoting healthy eating on students' lipid profile: A controlled trial / Efeito de ações de promoção da alimentação saudável sobre o perfil lipídico de estudantes: um estudo controlado
Conclusion
Actions of this nature have a positive impact on lipid profile. This study adds to those that use effective and viable public health strategies implementable at the primary care level.
INTRODUCTION
The risk factors associated with the relative and absolute increase in the prevalence of chronic Non-Communicable Diseases (NCD) are also reaching children and already constitute an important cause of morbidity in this life stage. 1 Concern with the morbidity profile of people everywhere led the World Health Organization (WHO) to create a proposal called the "Global Strategy on Diet, Physical Activity, and Health". 2 The general aim of this proposal, also adopted by the Brazilian Ministry of Health, is to promote and protect health by implementing sustainable actions that support healthy lifestyles, relying on the participation of health professionals and pertinent sectors. Because of the importance of schools in the formation of healthy eating habits, the Interministerial Ordinance nº 1.010, published on May 8, 2006, instituted the guidelines for promoting healthy eating in schools. 3 Thus, the construction and assessment of intervention models made for schools meet the Brazilian Ministry of Health's expectation and are based on the evidence promulgated by the WHO that positive behavioral changes - especially those that aim to control and reduce the risks associated with poor food choices, physical inactivity, and use of alcohol and tobacco - result in strategies capable of reducing the rates of NCD-related morbidity and mortality. 4 The urgent need to stop the growing prevalence of NCD in Brazil justifies studies that attempt to develop effective and sustainable strategies for preventing and controlling these diseases by focusing on their main risk factors. 1,2 Very few studies in Brazil were created with this purpose, especially youth-oriented studies. Hence, the present study aims to assess the effect of actions promoting healthy eating on the lipid profile of children and adolescents attending municipal elementary schools of a poor neighborhood located in the outskirts of Salvador, Bahia, Brazil.
METHODS
This is a nine-month controlled intervention study with male and female students aged 7 to 14 years, attending the first to eighth grades of two medium-sized schools of a neighborhood of the Distrito Sanitário do Subúrbio Ferroviário (DSSF, District of the Railroad Outskirts) of Salvador (BA). This district is one of the most populous in the city and represents a typical example of the complexity of the social and sanitary problems that characterize some city areas. Today this area is mostly occupied by people from low socioeconomic classes who suffer from the lack of appropriate infrastructure and government services. 5
The sample size was calculated as follows: an intervention subject-to-control subject ratio of one (1), a statistical power of 0.80, a 95% Confidence Interval (95%CI), and a mean Total Cholesterol (TC) difference of 0.2 mmol/L, 6 which resulted in a sample size of 336 students. An extra 10% was added to account for losses due to students' refusal to participate in the study, relocation to other cities, and transfer to other schools. Therefore, the initial sample should consist of 372 students, 186 from the intervention school and 186 from the control school. The DSSF has a total of 71 public schools; of these, two were randomly selected for the study, one to be the control and the other, the intervention. All regularly enrolled students of both schools who agreed to participate in the study were included.
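For reference, the calculation can be reproduced as follows (a minimal Python sketch; the standard deviation of TC, taken here as 0.654 mmol/L, is an assumption, since it is not reported in the text):

from math import ceil
from scipy.stats import norm

alpha, power = 0.05, 0.80
delta = 0.2    # detectable mean TC difference (mmol/L)
sigma = 0.654  # assumed SD of TC (mmol/L); not stated in the paper

z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96 for a two-sided 95% CI
z_beta = norm.ppf(power)           # ~0.84 for 80% power
n_per_group = ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)  # ~168

total = 2 * n_per_group                    # ~336 students
with_losses = 2 * ceil(n_per_group * 1.1)  # +10% for losses, ~370 (reported: 372)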
The study protocol was approved by the Research Ethics Committee of the Universidade Federal da Bahia under Protocol number 18/09. In compliance with ethical precepts, all underweight, overweight, and dyslipidemic students were referred to primary health care units for treatment and follow-up. All guardians signed an Informed Consent Form before the students were included in the study.
The data collected at baseline included biochemical, maturation, and anthropometric measurements and a survey about Fruit and Vegetable (FV) intake. Over the nine-month study period, lectures and workshops discussing the benefits of a healthy lifestyle for health promotion were provided to the intervention students. To assess the effects of these actions, all students (intervention and control) were submitted to the same measurements and survey at the end of the study, nine months after the baseline data were collected. Anthropometric assessment consisted of weight and height measurements. Blood was collected to determine the lipid profile: TC, Low Density Lipoprotein-cholesterol (LDL-c), High Density Lipoprotein-cholesterol (HDL-c), and Triglycerides (TG). When possible, the participant's guardian was invited to answer a questionnaire about the family's socioeconomic status.
Blood collection for lipid profiling
Five milliliters of blood were collected after a 12-hour fast at school, in an appropriate environment. The blood samples were properly conditioned and transported to the Central Laboratory of the Complexo Hospitalar Universitário Professor Edgard Santos, where they were analyzed. Serum TC, HDL-c, and triglycerides were determined by enzymatic methods, and LDL-c was calculated by the Friedewald equation, LDL-c = TC - (HDL-c + TG/5), which is valid only when TG does not exceed 400 mg/dL. TC < 150 mg/dL, LDL-c < 100 mg/dL, HDL-c ≥ 45 mg/dL, and TG < 100 mg/dL were considered appropriate. 7
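The computation can be expressed directly in code (a minimal Python sketch of the equation and the cut-offs used in the study, with all values in mg/dL):

def friedewald_ldl(tc, hdl, tg):
    # Friedewald estimate of LDL-c; valid only below 400 mg/dL of triglycerides
    if tg >= 400:
        raise ValueError("Friedewald equation is not valid for TG >= 400 mg/dL")
    return tc - (hdl + tg / 5)

def lipid_profile_appropriate(tc, hdl, tg):
    # cut-offs adopted in the study
    ldl = friedewald_ldl(tc, hdl, tg)
    return tc < 150 and ldl < 100 and hdl >= 45 and tg < 100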
Anthropometric status
Weight was determined with a microelectronic scale of the brand Marte, Model PP 200-50, with a capacity of 199.95 kg and an accuracy of 50 g. Height was determined with the Leicester Height Measure stadiometer, with an accuracy of one millimeter. The anthropometric status was given by the WHO 8 reference tables of Body Mass Index (BMI)-for-age-and-gender percentiles. BMI was given by dividing the weight in kilograms by the square of the height in meters. The classification followed the WHO 9 proposal. Overweight and obesity were grouped together for the analyses. Therefore, the BMI of individuals with excess weight was equal to or above the 85th percentile.
Maturation stage
Pubertal development was self-assessed based on male and female sexual characteristics. The age of menarche was also collected. For girls, Tanner Stage II marked the beginning of puberty, and menarche, postpuberty. For boys, Tanner Stage III of genital development marked the beginning of the pubertal growth spurt and Stage V, the end of puberty. 10,11 Hence, the students were grouped into three categories: prepubertal (reference category), pubertal, and postpubertal.
Food intake: fruits and vegetables
The participants' FV intake frequency was determined by a Food Frequency Questionnaire (FFQ). Item intake frequency was divided into five categories as follows: never consumes = 0; 1 to 3 times monthly = 1; 1 to 2 times weekly = 2; 2 to 4 times weekly = 3; and ≥4 times weekly = 4. The intake frequencies of the food groups were summarized into a single value for each student, given by the formula: (sum of the intake frequencies of all foods in the food group)/(number of foods in the group × the maximum frequency provided by the study FFQ). 12 The resulting scores were stratified into two categories, having the 75th percentile as the cut-off point (percentile < P75 versus percentile ≥ P75, reference category).
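The score computation reads as follows in code (a minimal Python sketch; the frequency codes 0-4 are those defined above, and the example values are hypothetical):

import numpy as np

def food_group_score(frequencies, max_freq=4):
    # normalized intake score for one food group:
    # sum of per-food frequency codes / (number of foods x maximum frequency)
    return sum(frequencies) / (len(frequencies) * max_freq)

# one score per student for the FV group (hypothetical data)
scores = np.array([food_group_score(f) for f in ([4, 3, 2], [1, 0, 2], [4, 4, 3])])
p75 = np.percentile(scores, 75)
high_intake = scores >= p75  # dichotomization at the 75th percentile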
Collection of socioeconomic and demographic data
A structured questionnaire collected these data. The socioeconomic level of the family was given by the mother's education level, which was classified as follows: I ≤ fourth grade; II ≥ fifth grade (category of reference). The demographic variables were: gender (male (category of reference); female) and age group (<10 years (category of reference); and ≥10 years).
The intervention protocol covered three major axes
1. The student: This axis regarded actions to keep students within a healthy weight range. Six pertinent classes were provided to promote healthy eating and physical activity, each lasting fifty minutes, over the nine-month intervention. The subjects covered during each class were: a) the importance of a healthy diet and physical activity for health promotion; b) the food pyramid, introduction of the food groups, their nutrients, and their functions in the body; c) the importance of drinking water; d) promotion of physical activity at school; e) sugar: the villain in caries; and f) the ten steps to a healthy diet 13 . Next, short videos taken from the Internet were shown to the students, commented, and discussed.
2. The teachers and cooks:
The intervention for the school staff involved training the teachers and cooks. Three one-hour workshops on healthy eating were provided. They focused on the food preparation techniques and good food handling practices needed for preparing healthy and safe school meals and encouraged the use of locally produced and minimally processed fresh foods; 3. The family: Two one-hour workshops about healthy eating were provided to the families. Their objective was to inform and motivate students and their families to adopt healthy eating habits.
The subjects covered in the workshops for all categories were: a) food groups, their nutrients, and their respective functions in the body, in addition to their representations in the food pyramid; and b) the ten steps for promoting a healthy diet at school 13 . Problematization followed, with a discussion about healthy eating and food preparation techniques as health-protecting factors.
All activities (lectures and workshops) planned for the intervention protocol were performed by a dietician who collaborated with the study.
Data analysis
The questionnaires were reviewed, checking the answer and the code of each question and correcting errors possibly caused by coding.The database was constructed in the software Epi-Info version 6.0, which was then checked for discrepancies, conflicting simple variable frequencies, and answer coherence.
The population was characterized by descriptive analysis using proportion for categorized data and mean ± standard deviation for the continuous variables.
Analysis of Covariance (Ancova) assessed the influence of the intervention program on the lipid profile and anthropometric changes. The dependent variables were the changes that occurred in TC, HDL-c, LDL-c, and triglycerides during the study period. The main independent variable was the intervention itself (yes/no). The biochemical parameters were assessed at baseline and at the end of the intervention. The estimates were adjusted as recommended by the literature [14][15][16][17][18][19][20] and the study dataset. Baseline age, gender, maturation stage, and BMI were the adjustment variables in Ancova. All tests were two-tailed and the significance level was set at 5%. The data were treated by the Statistical Package for the Social Sciences (SPSS) version 13.0.
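A hedged sketch of such an Ancova in Python with statsmodels (the study itself used SPSS 13.0; the file name and column names below are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per student with the lipid change and
# the baseline adjustment variables described above.
df = pd.read_csv("lipid_changes.csv")

# ANCOVA as a linear model: the 'intervention' coefficient estimates the
# adjusted effect of the program on the change in total cholesterol.
model = smf.ols(
    "tc_change ~ intervention + age + C(gender) + C(maturation) + bmi",
    data=df,
).fit()
print(model.summary())
```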
Characteristics of the study population at baseline
When the study began, 718 students were enrolled in both schools and 531 were attending classes. Of the 272 intervention students, 227 underwent blood collection and anthropometric assessment (Figure 1). Of the 259 control students, 259 underwent anthropometric assessment, and 155 underwent blood collection. At the end of the follow-up, 142 and 192 intervention students underwent anthropometric assessment and blood collection, respectively; and 137 and 60 control students underwent anthropometric assessment and blood collection, respectively. Most losses were due to dropping out of school and refusal to participate in the second stage of the study. Hence, 202 students were effectively studied to assess the effect of actions promoting healthy eating and physical activity on the lipid profile. Of these, 53.1% were females, most aged less than 10 years (69.9%); 11.9% were malnourished; 78.7% were normal weight; and 11.1% had excess weight, of whom 6.9% were overweight and 4.2% obese. Most (66.6%) students had high TC, 45.9% had high LDL-c, 47.1% had low HDL-c, and 26.8% had high triglycerides. Many (75.1%) had inadequate FV intake (data not shown).
The sociodemographic, anthropometric, biochemical, and FV intake characteristics of the two groups at baseline were similar (Table 1). After the nine-month intervention, the intervention group was consuming significantly more FV than the control group (p<0.001). Furthermore, at the end of the study, the mean TC, LDL-c, and triglycerides of the intervention students decreased by 13.18 mg/dL (p=0.001), 7.41 mg/dL (p=0.038), and 12.37 mg/dL (p=0.029), respectively. These results were adjusted for age, gender, maturation stage, and BMI. Models including the variable mother's education level as an indicator of socioeconomic status were tested. However, since the regression parameters did not change and the sample size was small, it was decided to keep the parsimonious model (Table 2).
D I S C U S S I O N
This study was planned to assess how the actions of a program designed to encourage healthy eating impacted the lipid profile of children and adolescents enrolled in municipal schools of Salvador (BA). This is a study developed in an epidemiological landscape characterized by a high prevalence of dyslipidemia, excess weight, and low FV intake. This scenario, similar to that of the Brazilian and other youth in the last decades 4,21,22 , indicates the need of creating and implementing effective and integrated strategies for reducing cardiovascular risk factors 2 . The results found herein are in agreement with those that demonstrate that actions promoting healthy eating and/or regular physical activity have a positive impact on the lipid profile of children and adolescents 6,23 . Unquestionably, a strong ally of the study results is the biological plausibility of the existing associations. Dyslipidemia may be credited to physical inactivity and inappropriate food patterns, that is, patterns with a prevalence of energy-dense foods, such as foods high in fats and simple carbohydrates, to the detriment of high-fiber foods, such as fruits and vegetables, which contain fewer calories and better nutritional quality 24 . At the end of the nine-month follow-up, the intervention students were consuming more FV than the control students (p<0.001), a finding corroborated by similar studies 13,25 . Dietary fiber content is positively correlated with the intake of whole grains, fruits, and vegetables 26,27 . Fibers increase satiety and reduce appetite, TC synthesis, and LDL-c synthesis 20,28 . Another important benefit is their action on the gastrointestinal tract, given that they reduce gastrointestinal transit time, helping to eliminate cholesterol 29 . Intervention studies have consistently reported that fiber intake benefits the lipid profile 20,24,28,30 and other health indicators 13-20 . The study findings are particularly important because of the challenge associated with convincing youth to adopt a healthy diet. The type of intervention performed by the present study mainly encourages higher FV intake and the restriction of foods and meals high in sugars, saturated fats, and trans fats. Such intervention may be an alternative for preventing childhood dyslipidemia and associated factors. This life stage is critical; although instruction and planning must be incorporated into food and nutrition education actions, one cannot ignore that the construction is local, that is, it is based on a specific reality, as stated by the "Reference Landmark for Food and Nutrition Education" 31 .
Study limitations
Even though the number of study dropouts is considered high for a nine-month intervention when compared with similar studies 13,15 and may have reduced the study power and accuracy, these losses were not unbalanced. That is, the characteristics of the dropouts and completers were similar, indicating the random character of the losses (data not shown).
The non-inclusion of the students' levels of physical activity should also be mentioned. There is evidence suggesting an association between physical inactivity and dyslipidemia [32][33][34]. The lack of consistency of the collected physical activity data encouraged excluding this variable from the analysis. Due to logistics, the intake of foods high in saturated fats and/or sugars was not assessed. A higher intake of carbohydrates has been associated with low HDL-c and high LDL-c and triglycerides 35 . Nevertheless, FV intake is one of the food intake indicators used globally to monitor NCD risk factors 35,36 .
The possible confounding effect of socioeconomic status was minimized by the socioeconomic homogeneity of the sample, that is, all participants were from poor families. A wide range of program components was used in the study intervention protocol. Therefore, it is not possible to distinguish which components contributed most to the benefits promoted by the study intervention.
Despite these limitations, one cannot ignore the methodological rigor and the analytical techniques used for controlling potential confounders, reinforcing the findings and the knowledge about the positive influence of nutritional counseling on the lipid profile of children and adolescents.
C O N C L U S I O N
The results show that actions promoting healthy eating improve lipid levels. Once again, this finding evidences that the study intervention model may prevent and/or treat cardiovascular risk factors in adolescents. Moreover, these results suggest that the combination of school and home actions can positively impact the health status of this population. The multi-professional approach required by such strategies is also recognized, given that the behavior and lifestyle of individuals and social groups are largely determined by their physical, socioeconomic, and cultural environments.
C O N T R I B U T O R S
RCR SILVA and LA SILVA helped to conceive the study, collect data, analyze and interpret the results, and review the manuscript. MCT CANGUSSU helped to interpret the results and review the manuscript.
Table 2 .
Analysis of covariance for the influence of the intervention program on the lipid profile of the intervention students. Salvador (BA), Brazil, 2011.
* Regression coefficients adjusted for age, gender, maturation stage, and body mass index at baseline. HDL-c: High Density Lipoprotein-cholesterol; LDL-c: Low Density Lipoprotein-cholesterol. | 2018-04-06T17:59:27.820Z | 2014-04-01T00:00:00.000 | {
"year": 2014,
"sha1": "964e29ceba60ed3f4b893daece8a5505a1199133",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/rn/a/LRzx4phPQy5KyVGJHpxMhbR/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "964e29ceba60ed3f4b893daece8a5505a1199133",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18972871 | pes2o/s2orc | v3-fos-license | Increase of Long-Term ‘Diabesity’ Risk, Hyperphagia, and Altered Hypothalamic Neuropeptide Expression in Neonatally Overnourished ‘Small-For-Gestational-Age’ (SGA) Rats
Background Epidemiological data have shown long-term health adversity in low birth weight subjects, especially concerning the metabolic syndrome and ‘diabesity’ risk. Alterations in adult food intake have been suggested to be causally involved. Responsible mechanisms remain unclear. Methods and Findings By rearing in normal (NL) vs. small litters (SL), small-for-gestational-age (SGA) rats were neonatally exposed to either normal (SGA-in-NL) or over-feeding (SGA-in-SL), and followed up into late adult age as compared to normally reared appropriate-for-gestational-age control rats (AGA-in-NL). SGA-in-SL rats displayed rapid neonatal weight gain within one week after birth, while SGA-in-NL growth caught up only at juvenile age (day 60), as compared to AGA-in-NL controls. In adulthood, an increase in lipids, leptin, insulin, insulin/glucose-ratio (all p<0.05), and hyperphagia under normal chow as well as high-energy/high-fat diet, modelling modern ‘westernized’ lifestyle, were observed only in SGA-in-SL as compared to both SGA-in-NL and AGA-in-NL rats (p<0.05). Lasercapture microdissection (LMD)-based neuropeptide expression analyses in single neuron pools of the arcuate hypothalamic nucleus (ARC) revealed a significant shift towards down-regulation of the anorexigenic melanocortinergic system (proopiomelanocortin, Pomc) in SGA-in-SL rats (p<0.05). Neuropeptide expression within the orexigenic system (neuropeptide Y (Npy), agouti-related-peptide (Agrp) and galanin (Gal)) was not significantly altered. In essence, the ‘orexigenic index’, proposed here as a neuroendocrine ‘net-indicator’, was increased in SGA-in-SL regarding Npy/Pomc expression (p<0.01), correlated to food intake (p<0.05). Conclusion Adult SGA rats developed increased ‘diabesity’ risk only if exposed to neonatal overfeeding. Hypothalamic malprogramming towards decreased anorexigenic activity was involved into the pathophysiology of this neonatally acquired adverse phenotype. Neonatal overfeeding appears to be a critical long-term risk factor in ‘small-for-gestational-age babies’.
Introduction
Prevalence of obesity, diabetes and accompanying disturbances has increased globally reaching epidemic levels in adults, adolescents and even children [1][2][3][4]. To prevent the further spread of this epidemic, identifying early risk factors is urgently needed to develop appropriate prevention strategies.
Since the early 1990s, great attention has been given to the association between a low birth weight (LBW) and long-term risk of developing cardiovascular diseases, type 2 diabetes and the metabolic syndrome. The respective 'small-baby-syndrome' hypothesis proposes that poor materno-fetal nutrition leads to growth restriction and, consequently, long-lasting programming towards 'diabesogenic' alterations [5,6]. A 'thrifty phenotype' acquired in utero through poor fetal nutrition should enable affected individuals to better adaptation towards reduced food availability in later life [5,7]. However, when those individuals are exposed to affluent conditions later on, according to the hypothesis this acquired disposition leads to the development of type 2 diabetes, cardiovascular diseases, and the metabolic syndrome.
Many epidemiological studies have confirmed the phenomenological association between low birth weight and later development of symptoms of the metabolic syndrome [8][9][10]. Causal mechanisms, however, of the 'small-baby-syndrome' are still unclear. A recent epidemiological study showed that in formerly 'small babies', altered dietary habits are linked to the increased risk in later life [11], in line with the 'thrifty phenotype' hypothesis. Being small at birth was associated with higher intake of fat at later adult age. However, a number of studies have reported that individuals born with high birth weight, induced by prenatal overnutrition, are also at increased risk of 'diabesity' later on. Meta-analyses demonstrated that both low and high birth weight are associated with increased risk of developing type 2 diabetes and hypertension [12][13][14]. Moreover, long-term risk for overweight, i.e., the most important cardio-metabolic risk factor, has even been shown to be linearly positively related to birth weight [15]. The explanation of this developmental paradox remains unclear and the role of prenatal undernutrition and/or low birth weight as independent risk factors for the development of 'diabesity' and metabolic syndrome later on has to be challenged [16].
A number of animal studies, especially in rodents, were performed to investigate mechanisms of the association between reduced materno-fetal food supply, low birth weight (LBW) and later diseases. Rats with LBW, however, showed rather reduced body weight in the long-term, reduced food intake and normal glucose tolerance [17][18][19]. The fact that findings from animal models do not completely coincide with the observations from epidemiological studies leads to the suggestion that there must be additional factors predisposing to increased 'diabesity' risk in later life of 'small babies' [16].
Over several years, our group has proposed that neonatal overnutrition after low birth weight might play a decisive role in this scenario [20]. Currently, catch-up growth through 'rapid' neonatal weight gain has become a potential mechanism within the 'small-baby-syndrome' hypothesis [21][22][23]. To investigate consequences of neonatal overfeeding, i.e., the probably most important cause of rapid neonatal weight gain, the small litter model is a long-established experimental paradigm [24][25][26][27][28]. Reduction of litter size in rodents causes increased weight gain during the early postnatal period due to qualitative as well as quantitative overnutrition [29]. Rats raised in small litters display early overweight, increased food intake, impaired glucose tolerance, hyperinsulinemia, hyperleptinemia, and hypertriglyceridemia in later life [25][26][27]30,31]. This neonatally acquired phenotype has been linked with permanent dysregulation of neuropeptides critically involved in the central nervous regulation of food intake and body weight [26,27,[31][32][33][34][35][36][37][38].
Interestingly, altered food intake in the long-term has recently also been considered causal for adverse health outcomes in low birth weight humans [11]. Consequently, the question arises whether the increased risk of 'diabesogenic' alterations after low birth weight might rather be a consequence of neonatal overnutrition than fetal underfeeding and low birth weight per se. Up to now, this has rarely been considered in clinical and experimental studies.
Thus, we established a new, 'genuine' rat model of low birth weight to investigate the long-term outcome of 'small-for-gestational-age' rats additionally exposed to neonatal overnutrition, as compared to normal neonatal feeding [39]. We examined later food intake both under normal conditions by feeding standard laboratory chow as well as under dietary provocation by exposing the animals to a high-energy/high-fat diet at higher adult age. Long-term metabolic profile was characterized and hypothalamic expression patterns of orexigenic (Agrp, Gal, Npy) and anorexigenic (Pomc) neuropeptides in single neurons from the arcuate hypothalamic nucleus (ARC) were measured, using lasercapture microdissection (LMD) combined with quantitative real-time PCR to ensure highest possible specificity and sensitivity [40].
Ethics Statement
All animal procedures were carried out in accordance with the European Communities Council Directive (86/609/EEC) and were approved by the local animal welfare committee (G 0093/02; Lageso Berlin, Germany).
Animal Model and Study Design
Virgin female Wistar rats (Charles River Laboratories, Sulzfeld, Germany), weighing 200-250 g, were time-mated with normal males and delivered spontaneously. Pups were defined as small-for-gestational-age (SGA) if their birth weight was below the lower limit of the 95% confidence interval of the mean birth weight of all pups of the same litter and sex. Pups which had a birth weight within the limits of the 95% confidence interval for litter and sex were assigned as appropriate-for-gestational-age (AGA). The study groups (neonatal overnutrition vs. control) were generated by adjusting the litter sizes per mother on day 3 of life into litters of only three pups (small litters, SL) or 12 pups (normal litters, NL) through random distribution [24,32]. SGA rats were raised then in normal (SGA-in-NL) or small litters (SGA-in-SL) until weaning. AGA rats were raised in normal litters, i.e., under normal neonatal feeding conditions, and served as controls (AGA-in-NL).
After weaning (day 21 of life), female rats were housed under standard conditions with 12/12 h inverse light-dark rhythm, controlled temperature (22 ± 2 °C) and free access to tap water and standard laboratory chow (commercial control diet for rats; ssniff R/M-H, Soest, Germany, Code V1536-000). Feeding studies were performed from day 470 to 560 (see below). At day 560, animals were sacrificed and tissues and blood were collected.
Body Weight and Body Composition
Body weight and body length (nose to anus length) and mortality were monitored and recorded throughout life. Relative body weight/body length was evaluated in g/cm. On day 560, body composition was determined by weighing first the carcass mass after stomach and intestine removal. Next, dry mass and fat-free dry mass (FFDM) were determined by drying carcasses to constant weight followed by whole body chloroform extraction in a Soxhlet apparatus. FFDM and body fat were calculated as percentage of carcass mass [41].
Basal Metabolic Parameters
Blood samples were taken after an overnight fast (16 h) by puncture of the retroorbital plexus under light ether anaesthesia [31] at days 360 and 560 of life to determine basal metabolic parameters. Blood glucose was measured photometrically using the glucose-oxidase-peroxidase (GOD-PAP) method (Dr Lange GmbH, Berlin, Germany). Total plasma cholesterol and plasma triglyceride concentrations were quantified using the cholesterol-oxidase-peroxidase (CHOD-PAP) method and the glycerol-3-phosphate-oxidase-peroxidase (GPO-PAP) method, respectively (Dr Lange GmbH, Berlin, Germany).
Leptin concentration was quantified using a commercial radioimmunoassay (rat leptin RIA kit, Linco, St. Charles, MO, USA). Recombinant rat leptin (Linco) served as the standard preparation. The intra-assay variation ranged between 2.4-4.6% in a concentration range of 1.6-11.6 µg/l.
For determination of insulin, within one assay a modified commercial radioimmunoassay was performed (Adaltis, Freiburg, Germany). Rat insulin (Novo Nordisk Biolabs, Copenhagen, Denmark) with a biological potency of 21.3 IU/mg was used as standard preparation. The intra-assay coefficient of variation was 4.5-7.4% in a concentration range of 9.2-94.2 mIU/l. The insulin/glucose-ratio was calculated as a measure of peripheral insulin resistance [42].
Glucose Tolerance Test
Glucose tolerance tests were performed at days 130 and 530 of life. After an overnight period of fasting (16 h), a 20% glucose solution (1.5 g/kg body weight) was injected intraperitoneally. Blood samples were taken at 0, 15, 30, and 90 minutes after glucose loading for determination of blood glucose levels. Using these values, the area under the curve of glucose (AUCG) against time was calculated for each animal [31].
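Reference [31] is cited for the AUCG computation; one common implementation is trapezoidal integration over the sampling times, sketched below with made-up glucose values:

```python
import numpy as np

def auc_glucose(glucose, times=(0, 15, 30, 90)):
    """Area under the glucose curve (AUCG) over the sampling times of
    the test, by the trapezoidal rule (units: mg/dL x min)."""
    return np.trapz(glucose, x=times)

print(auc_glucose([90, 160, 140, 110]))  # -> 11625.0 for these illustrative values
```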
Food Intake Study
Food intake was studied at older adult age (between days 470 and 560 of life), with individual housing. First, food intake of standard laboratory chow was measured for 30 days (days 470-500 of life). Chow comprised 9% fat, 33% protein, and 58% carbohydrates with a metabolizable energy content of 3.1 kcal/g (ssniff R/M-H, Soest, Germany, Code V1536-000). For the following 60 days (days 500-560 of life), rats were exposed to a palatable high-energy/high-fat (HE/HF) diet containing 34% fat, 23% protein, 43% carbohydrates with a metabolizable energy content of 4.1 kcal/g (specific diet, Code 132006; Altromin, Lage, Germany). This was a modified version of the diet described by Levin et al. and has previously been shown to be highly palatable [34,43]. As both diets have different energy contents, caloric intake per day was calculated (kcal/d). Rats were fed ad libitum throughout the study period and had free access to tap water. Food intake was recorded daily and body weight measured weekly to the nearest 0.1 g (Sartorius MC 1, Laboratory LC 6200, Sartorius AG, Göttingen, Germany).
Neuropeptide Expression in the Hypothalamic Arcuate Nucleus (ARC)
Lasercapture Microdissection (LMD). Following rapid sacrifice on day 560 of life, brains were immediately isolated, frozen in isopentane and stored at −80 °C. For LMD, 10 µm-thick coronal serial sections were cut through the deep-frozen hypothalami, mounted on glass slides (Leica frame slides with 1.4 µm Polyethylene terephthalate (PET) membrane), dried on air, and finally Nissl-stained with cresyl violet under RNase-free conditions. After staining, slides were kept at −80 °C until LMD. Anatomical location of the ARC was verified according to a rat brain atlas [44]. Using the Leica Microsystems AS/LMD instrument (Leica Microsystems CMS GmbH, Wetzlar, Germany), in total 100 neurons were randomly picked individually from each brain and animal, respectively, pooled from serial sections across the full rostral-caudal extent of the ARC, corresponding to planes 26 to 32 as defined by Paxinos and Watson [44] (Figure 1). In order to ensure neuronal specificity, only neurons with a distinct nucleolus and soma appearance were LMD-prepared and used for subsequent measurements [40]. LMD-captured neuronal cells were additionally verified by microscopical inspection of the tube cap.
RNA preparation and quantitative real-time PCR. Total RNA was isolated from LMD-prepared samples and DNase treated using the PureLink RNA Micro Kit (Invitrogen, Carlsbad, USA), according to the manufacturer's protocol as described previously [40]. RNA was reverse-transcribed into complementary DNA (cDNA) using the Superscript-First-Strand-Synthesis System (Invitrogen), and cDNA was amplified in subsequent real-time PCR.
Duplex real-time PCR was performed in triplicate in an Applied Biosystems 7500 instrument [35,40]. Npy, Gal, Agrp, and Pomc mRNA expression were analyzed using commercial intron-spanning TaqMan gene expression assays from Applied Biosystems (Npy: Rn00561681_m1; Gal: Rn01501525_m1; Pomc: Rn00595020_m1; Agrp: Rn01431703_g1; all FAM-labeled), together with an endogenous control assay for the housekeeping gene Beta actin (4352340E, VIC-labeled), validated previously [40]. For all amplifications, a standard protocol was used: 1 cycle of 95 °C for 10 min, followed by 40 two-step cycles at 95 °C for 15 s and 60 °C for 1 min [TaqMan Gene Expression Assays Protocol, part number 4333458, Applied Biosystems]. Relative expression of target genes (vs. Beta actin) was determined by respective Ct values according to the 2^−ΔCt method as described elsewhere [40,45,46].
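A minimal sketch of the 2^−ΔCt computation (the triplicate Ct values are invented for illustration):

```python
import numpy as np

def relative_expression(ct_target, ct_reference):
    """Relative expression vs. a housekeeping gene by the 2^-DeltaCt
    method, with DeltaCt = Ct(target) - Ct(reference)."""
    dct = np.asarray(ct_target, dtype=float) - np.asarray(ct_reference, dtype=float)
    return 2.0 ** -dct

# Triplicate Ct values for one LMD neuron pool (illustrative numbers)
pomc = relative_expression([27.1, 27.3, 27.0], [19.8, 19.9, 19.7])
print(pomc.mean())  # mean relative Pomc expression, arbitrary units
```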
Statistics
Data are expressed as means ± SD unless otherwise indicated. Real-time PCR data are given as arbitrary units. One-way analysis of variance (one-way ANOVA over all groups) followed by Tukey's HSD post hoc analysis was used to analyze group differences (SPSS Software 19.0, Munich, Germany). For analysis of relations between two variables, Spearman's rank correlation test was performed with GraphPad Prism Version 4.03 (GraphPad Software, Inc., San Diego, California, USA). Statistical significance was set at p<0.05.
Mortality
Mortality over the entire study period (day 1-560 of life) was increased in neonatally overfed SGA-in-SL rats (12.5%; 2 of 16) as compared to the normal-fed AGA-in-NL rats of the control group (8.5%; 6 of 71), while mortality in neonatally normal-fed SGA-in-NL rats was rather decreased (0%; 0 of 14; differences not statistically significant).
Body Weight and Body Composition
Mean birth weight of rats born small-for-gestational-age (SGA) was significantly decreased as compared to control rats that had a birth weight appropriate-for-gestational-age (AGA) (p<0.001; Table 1). SGA rats exposed to neonatal overnutrition by rearing in small litters (SGA-in-SL) showed rapid neonatal weight gain. From day 7 of life onwards, they did not show any further difference in body weight, as compared to AGA pups raised in normal litters. In contrast, SGA pups raised in normal litters (SGA-in-NL) did not catch up in body weight until day 60 of life. After day 60, no further differences in body weight were observed among the three groups. Additionally, no significant differences in body length and relative body weight were observed in SGA rats raised in normal vs. small litters as compared to the rats of the control group until the end of the study (Table 1).
Analysis of body composition revealed no significant group differences in fat-free dry-mass at day 560, i.e., at the end of the study. However, a trend towards increased body fat content was observed in neonatally overfed SGA-in-SL rats (Table 1).
Basal Metabolic Parameters
While basal, i.e., fasting blood glucose levels did not differ significantly between groups over the entire observational period, plasma insulin concentrations were significantly increased in neonatally overnourished SGA-in-SL rats on day 360 of life, i.e., before the feeding study, and day 560 of life, i.e., at the end of the HE/HF feeding study, as compared to AGA-in-NL control rats (day 360: AGA-in-NL: 29.3 ± 10.6 mIU/l vs. SGA-in-NL: … ; Figure 2A and 2B). This was accompanied by hyperleptinemia (p<0.01), significantly correlated with body fat (Figure 3A), significantly increased cholesterol levels (p<0.05) and slightly increased levels of triglycerides at day 560 of life in SGA-in-SL rats. In contrast, at no time point were significant differences in metabolic profile found in SGA rats raised in normal litters (SGA-in-NL) as compared to controls (AGA-in-NL).
Neuropeptide Expression in the Hypothalamic Arcuate Nucleus (ARC)
Analyses of mRNA expression of neuropeptides in single neuron pools (n = 100 neurons per animal) from the ARC at the end of the experiment on day 560 of life revealed no significant differences between groups (Figure 5A). Increased overall food intake in neonatally overnourished SGA-in-SL rats, however, was accompanied by a non-significant tendency towards decreased levels of anorexigenic Pomc (Figure 5A). Expression of Agrp and Gal was unchanged. Neonatally normal-fed SGA-in-NL rats exhibited decreased expression (not statistically significant) of the orexigenic neuropeptides (Figure 5A), corresponding to their overall tendency towards reduced food intake under chow diet (Figure 4).
Because of the well-known dependency of neuropeptide expression (Pomc, Agrp, Npy, Gal) on their regulating hormones leptin and insulin [47][48][49][50], we additionally calculated the quotient of neuropeptide expression per unit of leptin and insulin, respectively, as described elsewhere [32]. In SGA-in-SL rats, Pomc expression per corresponding insulin was clearly decreased as compared to AGA-in-NL rats (AGA-in-NL: 13.6 ± 1.9 vs. SGA-in-NL: 12.2 ± 2.6 vs. SGA-in-SL: 6.0 ± 0.4; p<0.05), whereas the above mentioned non-significant increase in Npy expression was no longer present (AGA-in-NL: 13.8 ± 1.8 vs. SGA-in-NL: 9.2 ± 1.9 vs. SGA-in-SL: 13.5 ± 1.5; p = 0.158; Figure 5B). In contrast, SGA-in-NL rats showed a marked decrease in Agrp expression per corresponding leptin as compared to controls (AGA-in-NL: 10.9 ± 2.4 vs. SGA-in-NL: 4.8 ± 1.0 vs. SGA-in-SL: 6.1 ± 0.8; p<0.05; Figure 5C). In general, the trend towards decreased expression of orexigenic Npy, Agrp, and Gal in SGA-in-NL rats remained even when referring to insulin and leptin, respectively (Figure 5B and 5C). Expression of anorexigenic Pomc was unchanged in SGA-in-NL rats, even when referred to insulin and leptin, respectively (Figure 5B and 5C).
Finally, quotients of orexigenic (Agrp, Npy, Gal) per anorexigenic (Pomc) mRNA expression were calculated to get a proxy of the 'net-balance' here (Figure 5D). Gal/Pomc was increased significantly in SGA-in-SL as compared to SGA-in-NL rats. Moreover, Npy/Pomc was found to be nearly doubled in SGA-in-SL as compared to both AGA controls as well as SGA-in-NL rats, and found to be positively correlated to food intake over all groups (Figure 3C).
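The 'orexigenic index' is a simple quotient; a sketch with values loosely patterned on the per-insulin quotients reported above (not the exact study data):

```python
def orexigenic_index(orexigenic, anorexigenic):
    """'Net-indicator' of appetite drive: orexigenic (e.g., Npy or Gal)
    over anorexigenic (Pomc) expression, both in arbitrary units."""
    return orexigenic / anorexigenic

print(orexigenic_index(13.5, 6.0))    # ~2.3, pattern of SGA-in-SL rats
print(orexigenic_index(13.8, 13.6))   # ~1.0, pattern of AGA-in-NL controls
```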
Discussion
Low birth weight (IUGR, SGA) has been shown to be related to increased long-term 'diabesity' risk though reasons remain unclear. Acquired alterations of food intake have been suggested as a possible mechanism. In most of epidemiological-clinical studies, in utero causes of low birth weight and, especially, the potential impact of neonatal nutrition and resulting early growth pattern for the long-term outcome have not been adequately considered. We investigated the impact of neonatal overfeeding for the long-term 'diabesity' risk, food intake, and related neuropeptidergic regulatory parameters of body weight control in 'small-for-gestational-age' rats, introducing a novel 'genuine' rodent LBW-model set according to clinical definitions of SGA (birth weight below the lower limit of the 95% confidence interval of the mean birth weight of same litter and sex). In our study, all animals with a reduced birth weight (SGA) showed catch-up growth, irrespective of whether they were neonatally normal-fed or overfed. Adult age SGA rats did not differ in total body weight and body length when compared to AGA control rats. This is consistent with epidemiological observations which described that 80-90% of SGA newborns show full catch-up growth within the first two years of life [51,52]. However, while neonatally normal-fed SGA pups only caught up in body weight at day 60 of life, neonatal overnutrition of SGA pups resulted in 'rapid' neonatal weight gain within some days. From the first week of life onwards, these SGA-in-SL rats did not further differ from AGA controls.
Epidemiological-clinical studies have indicated that 'rapid' neonatal weight gain appears to be a risk factor for the development of obesity, increased body fat, insulin resistance, impaired glucose tolerance and cardiovascular diseases in the long-term [8,20,[53][54][55][56][57][58], even independent of birth weight [23,59]. In a large population-based study, Stettler and colleagues examined the extent to which the development of overweight depends on birth weight and/or weight gain during the first 4 months of life. They observed that increased weight gain from birth until the age of 4 months is associated with later increased overweight risk [59]. This association has been completely confirmed in prenatally 'overfed' (hyperglycemia-exposed) offspring of diabetic mothers [23]. By these studies, rapid neonatal weight gain has been shown to be an independent risk factor in general as well as in particular at-risk populations [23,59].
According to the 'small-baby-syndrome' hypothesis [6], children with a low birth weight are per se at increased risk to develop metabolic disturbances in later life, e.g., impaired glucose tolerance and dyslipidemia, resulting from prenatal undernutrition. Neonatally normal-fed SGA rats in the present study, however, did not show increased risk throughout later life. In contrast, during glucose tolerance test at day 130 of life, neonatally overnourished SGA-in-SL rats showed significantly increased blood glucose levels at 90 minutes. This tendency towards altered metabolism became further accentuated in older animals, when neonatally overfed SGA-in-SL rats developed hyperinsulinemia and an increased insulin/glucose ratio under basal conditions (days 360 and 560 of life), indicating insulin resistance [42]. Consequently, metabolic alterations observed in SGA rats cannot be attributed to decreased birth weight per se, but suggest rather early postnatal overnutrition as the critical risk factor here. This appears to be in line with findings of a clinical study in which the influence of prematurity on later occurrence of insulin resistance has been studied [9]. Hofman et al. observed that children born with low birth weight had reduced insulin sensitivity later on, irrespective of their gestational age (preterm AGA or term SGA). They found that the risk among preterm AGA-children was similar to the risk of term SGA-children. This observation gives rise to reasonable doubts concerning the role of diminished prenatal food supply as an independent risk factor in the pathogenesis of 'diabesity' in the 'small-baby-syndrome' [60]. It can be assumed that rather postnatal influences, especially neonatal overnutrition leading to rapid neonatal weight gain, are of critical importance for the long-term 'diabesogenic' outcome of 'SGA babies'. Accordingly, the association between rapid neonatal weight gain and later metabolic disorders was examined in a prospective cohort study. Fabricius-Bjerre et al. observed that accelerated growth during the first three months of life leads to disturbances of glucose metabolism later in life [8]. High infant weight gain was positively related with high insulin levels as well as high HOMA-IR later on (homeostasis model assessment of insulin resistance).

Figure 2. (B) Data are shown as percentages of AGA-in-NL levels (means ± SEM). Plasma glucose levels after intraperitoneal glucose loading on day 130 of life, i.e., before the feeding study (C), and day 530 of life, i.e., during the high-energy/high-fat (HE/HF) feeding study (D), in rats born small-for-gestational-age, raised in normal litters (SGA-in-NL) or small litters (SGA-in-SL), as compared to rats with normal birth weight raised in normal litters (AGA-in-NL). Data are means ± SD. *p<0.05, **p<0.01 (one-way ANOVA followed by Tukey's HSD post hoc analysis). doi:10.1371/journal.pone.0078799.g002
In addition to the above mentioned diabetic alterations at older adult age in neonatally overfed SGA rats, these animals also showed an altered fat metabolism. SGA-in-SL rats displayed increased levels of cholesterol and triglycerides and clearly increased leptin levels, while SGA-in-NL rats did not show any adipogenic alterations. Hyperleptinemia in neonatally overnourished rats was accompanied by a tendency towards increased body fat content which was, however, not associated with significantly increased total body weight, finally indicating increased 'adiposity' in these animals. Thus, elevated leptin levels in SGA-in-SL appear to reflect increased body fat, not necessarily increased total body weight, which is underlined by a positive correlation of body fat with plasma leptin levels (Figure 3A). Findings from human studies support this relationship. For instance, Ibáñez et al. have demonstrated that children who were born SGA at term have abdominal fat mass at 4 years closely related to the rate of catch-up weight gain within the first 2 years of life [57]. It is well known from a number of clinical studies that increased fat mass decisively contributes to the development of insulin resistance [61][62][63], accompanied with increased plasma insulin and triglyceride levels [63], as observed here experimentally. In the context of the pathogenesis of the 'small-baby-syndrome', a recently published cohort study considered altered dietary habits in later life to be causal for adverse health outcomes. Being small at birth was associated with higher intake of fat at later adult age [11]. This confirms findings of another longitudinal study in which an inverse relation between birth weight and fat intake in 43-month-old children was described [64]. Interestingly, these epidemiological findings seem to be confirmed by data from our animal experiment. However, the clinical studies did not consider neonatal nutrition and growth pattern. In our experimental study, only SGA rats which were neonatally overfed showed increased food intake later on, especially under high-energy/high-fat diet. Worth noting, milk of SL dams has been characterized to be altered in terms of a high-energy/high-fat quality as compared to milk of dams nourishing litters of normal size [29]. Therefore, our observations might even indicate an early food preference conditioning of SGA-in-SL rats through their dams' milk composition towards a HE/HF diet preference, persisting throughout later life. The trend towards elevated body fat content observed in later life of SGA-in-SL rats was possibly caused by hyperphagia and high-energy/high-fat food preference, confirmed by positive correlation between body fat and food intake (Figure 3B). In contrast, neonatally normal-fed SGA-in-NL rats did not show hyperphagia, neither under chow nor under high-energy/high-fat diet. However, a trend towards 'relative preference' for HE/HF diet vs. chow was observed within the SGA-in-NL group, although neither indicating hyperphagia nor HE/HF preference as compared to AGA or SGA-in-SL rats (Figure 4).
In the presented study, neonatally normal- and overfed SGA rats did not differ significantly with respect to mortality. However, in SGA-in-NL rats mortality was trending to be even lower as compared to AGA controls while, in contrast, SGA-in-SL rats showed a tendency towards increased mortality. This appears to be in line with findings by Ozanne and Hales [65]. While in their studies pre- and neonatal underfeeding gave rise to increased longevity, decreased life span of male mice that underwent fetal growth restriction and thereafter experienced rapid catch-up growth has been observed [65]. Note, in the mentioned experiment [65] reduced birth weight was induced by maternal protein-restriction during pregnancy. Rodent models of maternal malnutrition during pregnancy and lactation are among the most frequently used animal models to investigate mechanisms of perinatal programming according to the 'small-baby-syndrome' hypothesis. However, offspring whose dams were fed a low protein diet during pregnancy did not always exhibit the full spectrum of metabolic and cardiovascular alterations as observed in epidemiological and clinical studies. Moreover, experiments mostly were carried out exclusively in males and/or offspring were not examined into older adult age [18,19,65]. Therefore, in our study we focused on females and later adult aged animals to properly examine the long-term outcomes.
Food intake and body weight are decisively regulated by orexigenic and anorexigenic neuropeptides expressed in the hypothalamic arcuate nucleus (ARC) [66]. The hypothalamic expression of these neuropeptides is mainly regulated by the circulating satiety signals leptin and insulin [50]. Lasercapture microdissection of single neurons combined with quantitative PCR has been proven to be the most powerful technique allowing a highly precise and complex analysis of gene expression patterns in discrete neuronal cell populations [40]. Therefore, we applied this method here to perform gene expression analyses. Measurements at older adult age revealed a trend towards down-regulation of orexigenic Agrp and Npy in neonatally normal-fed SGA-in-NL rats whereas expression of the anorexigenic Pomc was slightly increased. Expression of orexigenic Gal was rather decreased in SGA-in-NL. This is of particular interest since Gal is known to particularly stimulate fat ingestion [48,67]. Altogether, results show a long-term decreased activity of the orexigenic system in neonatally normal-fed SGA-in-NL rats, which is strengthened by consideration of circulating leptin and insulin levels (Figures 5B and 5C) and might be causal for the significantly decreased food intake as compared to controls, especially under chow diet.
In contrast, hypothalamic expression of Agrp and Gal was unchanged and Npy even slightly increased in neonatally overfed SGA-in-SL rats as compared to AGA controls, despite their marked basal hyperleptinemia and hyperinsulinemia. Corresponding expression of the anorexigenic Pomc was even decreased, also when referred to the increased levels of the satiety signals insulin and leptin. This gene expression pattern strongly indicates a neonatally acquired neuropeptidergic malprogramming, especially of the anorexigenic Pomc-system, due to neonatal overfeeding in SGA rats (SGA-in-SL).
Finally, since food intake is regulated by both orexigenic and anorexigenic neuropeptides, we additionally introduced here an integrative neuropeptidergic 'net-indicator'. Dividing the expression levels of Npy, the most potent orexigenic neuropeptide [66], by the expression levels of Pomc, the most important anorexigenic neuropeptide [66], may provide an orientating proxy ('orexigenic index') for better estimation of neuropeptidergic appetite vs. satiety activity and regulation in a given situation. In neonatally overfed SGA-in-SL rats, reduced expression of anorexigenic Pomc and unchanged expression of orexigenic Npy resulted in an increased 'orexigenic index' (Npy/Pomc), corresponding with hyperphagia and supported by a positive correlation with overall food intake. Similar was observed for the Gal/Pomc index ( Figure 5D). In contrast, in neonatally normal-fed SGA-in-NL no respective alterations were observed as compared to AGA controls ( Figure 5D).
In summary, neonatally normal-fed SGA rats (SGA-in-NL) caught up in growth only at late juvenile age and did not develop 'diabesity' and hyperphagia later on, neither under normal chow diet nor under high-energy/high-fat dietary provocation representing a 'westernized' lifestyle. Their long-term hypothalamically driven orexigenic activity was rather decreased than increased. In contrast, neonatally overfed SGA-in-SL rats displayed rapid neonatal weight gain and catch-up growth within the first week of postnatal life. In the long-term, these SGA rats displayed significantly increased 'diabesity' risk as compared to normal rats. Hyperphagia, particularly pronounced under high-energy/high-fat dietary provocation, was accompanied with hyperleptinemia, hyperinsulinemia, increased insulin-glucose-ratio, and correlated with body fat. This was accompanied with and correlated to reduced expression of the anorexigenic hypothalamic ARC-Pomc, and respective increase of the 'orexigenic index' (Npy/Pomc, Gal/Pomc), even under consideration of the circulating regulators insulin and leptin. Altogether, this indicates a neonatally acquired hypothalamic resistance of the anorexigenic system towards peripheral satiety signals (insulin, leptin) in neonatally overfed SGA-in-SL rats.
In conclusion, the early neonatal period appears to be at least as critical as prenatal life for long-term programming of 'diabesity' risk and altered food intake in SGA rats, as we previously suggested and proposed [16,20,31,33,39]. Neonatal overfeeding may predispose via hypothalamic malprogramming to hyperphagia and accompanying/subsequent disorders in terms of the metabolic syndrome in 'small-for-gestational-age' subjects. This should be considered in future experimental as well as clinical approaches to unravel mechanisms underlying the 'small-baby-syndrome'. | 2016-05-04T20:20:58.661Z | 2013-11-12T00:00:00.000 | {
"year": 2013,
"sha1": "83c98d3fd62b69dbcfa226562ceec7374b8dabd6",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0078799&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83c98d3fd62b69dbcfa226562ceec7374b8dabd6",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
259049928 | pes2o/s2orc | v3-fos-license | Generalized Functional Observer for Descriptor Nonlinear Systems—A Takagi-Sugeno Approach
This paper concerns the design of a generalized functional observer for Takagi–Sugeno descriptor systems. A generalized structure is herein introduced for purposes of estimating linear functions of the states of descriptor nonlinear systems represented in a Takagi–Sugeno descriptor form. The originality of the functional generalized observer structure is that it provides additional degrees of freedom in the observer design, which allows for improvements in the estimation against parametric uncertainties. The effectiveness of the developed design is illustrated by a nonlinear model of a single link robotic arm with a flexible link. A comparison between the functional generalized observer and the functional proportional observer is given to demonstrate the observer performances.
Introduction
A functional observer is a dynamical system that is designed to estimate one or more functions of the system states. They can be viewed as a generalization of classical state observers. For instance, suppose that there exist n functions z 1 (x), . . . , z n (x) to be estimated for an n-order system by using a functional observer, and each function is a different state x i , i = 1, . . . , n, of an n-order system, i.e., z 1 (x) = x 1 , . . . , z n (x) = x n , then a functional observer coincides with a full-order state observer. However, the main advantage of a functional observer is that when a specific function needs to be estimated, it may be easier than the standard approach. It may even be the case that the system does not need to be observable but only be functional observable [1,2]. A complete study about the existence and design of functional observers can be further explored in [3][4][5].
Numerous works in the literature can be found about applications of functional observers. For instance, in [6] a functional observer for fault-detection of linear timeinvariant systems which is designed to converge in a finite-time is proposed. In [7], a real-time implementation of a functional observer-based feedback controller is performed to control the position of a ball on a balancing table. The authors demonstrate that this task can be accomplished with only a minimum order functional observer. Their work clearly reflects the advantages to implementing functional observers compared to classical full-order state observers. Another interesting work where functional observers are used as a method to cope with unknown inputs (which can represent process uncertainties, sensor faults, communication problems or cyber-attacks) is presented in [8], where a functional observer is used to perform load frequency control for a complex power system.
The case of functional observers for descriptor systems has been discussed by several authors. For instance, in [9], the authors propose a functional observer for switched descriptor systems. In the present paper, the performances of the generalized functional observer (GFO) are compared with a simple proportional observer (PO).
Preliminaries
In this section, the notations used in this paper are introduced. A^+ denotes the generalized inverse of A and verifies AA^+A = A. The notation E^⊥ denotes a maximal row rank matrix such that E^⊥E = 0. When E is a full row rank matrix, E^⊥ = 0 by convention.
The number of local models A_i and B_i depends on the scheduling variables: κ = 2^s, where s is the number of scheduling variables. For systems with a high number of scheduling variables, the value of κ will rapidly increase, which can be a disadvantage for the computation of the observers. Thus, this approach is suitable for nonlinear systems with a convenient-for-design number of nonlinearities. Different T-S models can be obtained from a nonlinear system; the way to select the appropriate T-S model is to take into account the state variables of the final transformed system, and those that need to be estimated by the observer.
Consider a Takagi-Sugeno descriptor system of the form:

E ẋ(t) = ∑_{i=1}^{κ} w_i(ρ(t)) (A_i x(t) + B_i u(t))   (1)
y(t) = C x(t)   (2)
z(t) = L x(t)   (3)

where x(t) ∈ R^n is the semi-state of the system, u(t) ∈ R^m is the input vector and y(t) ∈ R^p is the measured output of the system. z(t) ∈ R^q is a linear function of the state to be estimated. w_i(t) are membership functions formed with ρ ∈ R^s scheduled variables, which can depend on states, inputs, measured variables or other exogenous variables of the system. The membership functions have the following properties:

0 ≤ w_i(ρ(t)) ≤ 1,  ∑_{i=1}^{κ} w_i(ρ(t)) = 1,  for i = 1, …, κ = 2^s   (4)

Matrices E ∈ R^{n×n}, A_i ∈ R^{n×n}, B_i ∈ R^{n×m}, C ∈ R^{p×n} and L ∈ R^{q×n} are assumed known.
Assumption 1 ([4]).
The triplet (C, E, A_i) is partially impulse observable with respect to matrix L if the equivalent rank conditions of [4] hold. Partial impulse observability allows us to estimate the function z(t) by using only the available output. It is important to note that the observer must correctly estimate the function of the state even in the presence of impulsive behavior of the descriptor system, hence the importance of this assumption.
The following Lemma will be used later in this paper.
Lemma 1 ([33]). Let matrices B and Q be given. The following statements are equivalent:
1. There exists a matrix X satisfying BX + (BX)^T + Q < 0.
2. The following condition holds: B^⊥ Q B^⊥T < 0.
Suppose the above statements hold and assume that B^⊥B > 0. Then matrix X in statement 1 is given by X = … , where L is any matrix such that ||L|| < 1 and γ > 0 is any scalar such that …
Problem Statement
Consider the following generalized functional observer (GFO) of the form

ζ̇(t) = ∑_{i=1}^{κ} w_i(ρ(t)) (N_i ζ(t) + J_i v(t) + F_i y(t) + H_i u(t))   (5)
v̇(t) = ∑_{i=1}^{κ} w_i(ρ(t)) (S_i ζ(t) + G_i v(t) + M_i y(t))   (6)
ẑ(t) = ∑_{i=1}^{κ} w_i(ρ(t)) (P_i ζ(t) + Q_i y(t))   (7)

where ζ(t) ∈ R^{q_0} is the state of the observer, v(t) ∈ R^{q_1} is an auxiliary vector and ẑ(t) ∈ R^q is the estimate of z(t). N_i ∈ R^{q_0×q_0}, J_i ∈ R^{q_0×q_1}, F_i ∈ R^{q_0×p}, H_i ∈ R^{q_0×m}, S_i ∈ R^{q_1×q_0}, G_i ∈ R^{q_1×q_1}, M_i ∈ R^{q_1×p}, P_i ∈ R^{q×q_0} and Q_i ∈ R^{q×p} are constant matrices of appropriate dimensions to be determined such that lim_{t→∞}(ẑ(t) − z(t)) = 0. Equation (5) is the generalized form of the observer, Equation (6) is an auxiliary vector that is used to give more degrees of freedom, and Equation (7) makes it possible to design an observer with q_0 = q. Consider a parameter matrix T ∈ R^{q_0×n} and define the transformed error vector

ε(t) = ζ(t) − TEx(t)   (8)

whose derivative is given by

ε̇(t) = ζ̇(t) − TEẋ(t)   (9)

replacing ζ(t) from Equation (8) in Equation (9).
By using the definition of ζ(t), Equations (6) and (7) can be rewritten as Equations (10) and (11). By considering that the conditions (13)-(16) are verified for all i = 1, …, κ, Equations (10) and (11) become (17), and from Equation (12) we obtain (18). By defining an augmented state vector σ(t), Equations (17) and (18) can be rewritten in compact form. It can be seen that, if matrix A_i is Hurwitz, then lim_{t→∞} ε(t) = 0 and lim_{t→∞} e_z(t) = 0.
Observer Parameterization
In this section, a specific structure for the observer matrices is determined.
Define the matrix Γ = [E; C] ∈ R^{(n+p)×n} and a full row rank matrix R ∈ R^{q_0×n} such that rank[R; Γ] = rank(Γ). In this case there always exist two matrices T ∈ R^{q_0×n} and K satisfying Equation (22). The general solution for Equation (22) can be decomposed as in (24), where Z_1 ∈ R^{q_0×(n+p)} is a matrix with arbitrary elements and T_1 = RΓ^+[I_n; 0]. The necessary and sufficient condition for the existence of a solution to Equation (27) is (28), and the general solution to Equation (27) follows; if we replace Equation (24) in Equation (28), we obtain (29). From the definition of K̄_i we can deduce matrix F_i as in (31). From Equation (22), Conditions (15) and (16) can be written as (33); replacing Equation (32) into (33), we obtain (34), whose solution is parameterized by a matrix Y_i with arbitrary elements. The solutions for S_i, M_i, P_i and Q_i are given by (36)-(39). The estimation error (19) shows that e_z(t) → 0 while ε(t) → 0, so the function estimation error e_z(t) does not depend on the choice of matrix P. Then, for simplicity, we can assume that Y_i3 = 0, so P_i = P_1 and Q_i = Q_1, which are now constant matrices P and Q. Now, by using Equations (29) and (36), the error dynamics (20) can be rewritten as (40). The design problem is to find matrices Y_i and Z_1 such that the matrix A_i in (40) is Hurwitz. This can be achieved by using the linear matrix inequality (LMI) approach.
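A small numerical sketch of this first parameterization step with NumPy; the matrices E, C and R below are hypothetical, chosen only so that the rank condition holds, and numpy.linalg.pinv returns the Moore-Penrose inverse, which satisfies the generalized-inverse property AA^+A = A used above:

```python
import numpy as np

# Hypothetical descriptor/output matrices (n = 3, p = 1)
E = np.diag([1.0, 1.0, 0.0])              # singular, as in a descriptor system
C = np.array([[1.0, 0.0, 0.0]])
Gamma = np.vstack([E, C])                  # Gamma = [E; C], shape (n+p) x n

R = np.array([[0.0, 1.0, 0.0]])            # candidate R with q0 = 1
assert np.linalg.matrix_rank(np.vstack([R, Gamma])) == np.linalg.matrix_rank(Gamma)

Gamma_pinv = np.linalg.pinv(Gamma)         # a generalized inverse Gamma^+
I_n0 = np.vstack([np.eye(3), np.zeros((1, 3))])   # [I_n; 0], shape (n+p) x n
T1 = R @ Gamma_pinv @ I_n0                 # particular solution T1 = R Gamma^+ [I_n; 0]
print(T1)
```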
Functional Observer Design
Theorem 1. There exist parameter matrices Y_i and Z_1 such that the error dynamic system (40) is asymptotically stable if there exists a matrix X such that the LMI (41) is satisfied, where Z_1 = X_1^{-1}W and matrix Y_i can be determined from (42), in which matrix L is any matrix such that ||L|| < 1 and γ > 0 is any scalar such that Ω_i > 0.
Proof. Consider a quadratic Lyapunov function candidate V(σ(t)), whose derivative is V̇(σ(t)). The asymptotic stability of Equation (40) is guaranteed only if V̇(σ(t)) < 0; this leads to an LMI which can be rewritten as (47). According to Lemma 1, there exists a matrix X_i satisfying Equation (47) if and only if the following condition holds,
with B^⊥ = [N_3^{T⊥} 0]. By using the definitions of X, Q_i and W = X_1 Z_1, we obtain Equation (41). Matrix Y_i is obtained from Equation (42).
The following Algorithm 1 summarizes the observer design to obtain the corresponding matrices.

Algorithm 1: Methodology of the observer design
1. Choose a matrix R ∈ R^{q_0×n} such that rank[R; Γ] = rank(Γ).
2. …
3. Solve the LMI (41) to find X and Z_1 (a simplified feasibility sketch follows the algorithm).
4. …
5. Compute all the gain matrices of the observer (5): use (29) to determine N_i, (42) to determine J_i and G_i, and (36)-(39) to find S_i, M_i, P_i and Q_i, taking matrix Y_i3 = 0. F_i is given by (31) and matrix H_i can be determined with Equation (14).
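To illustrate step 3, a simplified LMI feasibility sketch in Python with CVXPY; this solves a plain vertex-wise Lyapunov inequality rather than the full LMI (41), whose blocks (and the decision matrix W = X_1 Z_1) depend on the parameterization above, and the vertex matrices below are invented for illustration:

```python
import cvxpy as cp
import numpy as np

# Hypothetical local (vertex) matrices of the polytopic error dynamics
A_vertices = [np.array([[-2.0, 1.0], [0.0, -3.0]]),
              np.array([[-1.5, 0.5], [0.2, -2.5]])]
n = 2

X = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [X >> eps * np.eye(n)]               # X > 0
for Ai in A_vertices:                              # Ai^T X + X Ai < 0 at each vertex
    constraints.append(Ai.T @ X + X @ Ai << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # 'optimal' means a common Lyapunov matrix was found
print(X.value)
```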
Proportional Functional Observer Case
A Proportional Functional Observer (PFO) is obtained from the GFO by setting the parameter matrices S_i = 0, J_i = 0, M_i = 0 and G_i = 0, which generates the following observer [34]:

ζ̇(t) = ∑_{i=1}^{κ} w_i(ρ(t)) (N_i ζ(t) + F_i y(t) + H_i u(t))
ẑ(t) = P ζ(t) + Q y(t)

and the error dynamics (20) simplify accordingly. The observer matrices can be obtained following Algorithm 1.
Mathematical Model
The mathematical model chosen to show the performance of the generalized observer is a linear-rotational vibration system with an uncertainty in one of the spring rigidity values (see Figure 1). The system has the nonlinear model (51); the measurable states are defined by the output equation, and the parameters considered are given in Table 1. Taking as the states the position of m_1, x_1(t) = p_1(t); the linear speed of m_1, x_2(t) = ṗ_1(t); the angle of the lever, x_3(t) = θ(t); the angular speed of the lever, x_4(t) = θ̇(t); and the position of the lever, x_5(t) = p_2(t), we can represent the nonlinear model (51) in descriptor state-space form, where u(t) is the force input and A(x(t)), B(x(t)) are matrices containing nonlinear terms that depend on the state variables. E is a singular constant matrix. The variable terms in matrices A(x(t)) and B(x(t)) from the nonlinear model (51) are highlighted in boxes in Equations (57) and (58). The s = 2 scheduling variables are chosen as these nonlinear terms; each variable has behavior limits that depend on the variation of the input and the states.
Since there are two scheduling variables, four weighting functions are obtained, where ρ̄_j and ρ_j are the upper and lower limits of variation of ρ_j(t), respectively, for all j = 1, 2.
In this case s = 2, and therefore there are κ = 2² = 4 membership functions. Once the fuzzy sets are defined, the Takagi-Sugeno model has four rules, and for each rule there is a linear local model. For example, the first rule (65) corresponds to ρ_1 and ρ_2 taken at their lower limits; the lower limits are directly substituted for the nonlinear terms in matrices A(x(t)) and B(x(t)), resulting in the corresponding local model. The Takagi-Sugeno model that reproduces the dynamics of the nonlinear model is then the membership-weighted combination of the four local models; this yields the Takagi-Sugeno model of the single-link robot arm. Considering a constant input u(t) = 1 N, we can obtain the variation level of x_3(t), which allows us to determine the maximum and minimum variation of ρ_1(t) and ρ_2(t) and, finally, the local matrices for the T-S model; matrix C is given in (56). A sketch of the membership construction follows.
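The bookkeeping of the standard sector-nonlinearity construction used here can be made concrete: each scheduling variable contributes two normalized weights, and the κ = 2^s rule memberships are the products over all sign combinations. The following is a minimal sketch; variable names and numerical limits are illustrative, not taken from the paper.

```python
import numpy as np
from itertools import product

def ts_memberships(rho, rho_min, rho_max):
    """Takagi-Sugeno memberships from s scheduling variables.
    Returns kappa = 2**s nonnegative weights that sum to 1."""
    rho = np.asarray(rho, float)
    lo = np.asarray(rho_min, float)
    hi = np.asarray(rho_max, float)
    w_up = (rho - lo) / (hi - lo)   # weight attached to the upper limit
    w_lo = 1.0 - w_up               # weight attached to the lower limit
    return np.array([np.prod(combo)
                     for combo in product(*zip(w_lo, w_up))])

# s = 2 scheduling variables -> kappa = 4 rules, illustrative limits.
h = ts_memberships(rho=[0.3, -1.0], rho_min=[0.0, -2.0], rho_max=[1.0, 0.0])
print(h, h.sum())   # four weights summing to 1 (convex-sum property)
```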
Results
By following Algorithm 1, a Generalized Functional Observer can be designed. The matrix L was chosen in order to estimate the non-measurable state of the system.
The chosen gain matrix is sparse, with nonzero entries equal to 9.91 along its main diagonal. A constant unit-step input u(t) = 1 N is considered and, only in simulation, an additive parameter variation of the rigidity coefficient of spring 2 is applied, given as k_2 + δ(t), where δ(t) is presented in Figure 2. It is important to note that the generalized approach to the functional observer gives it several matrices as degrees of freedom, as well as some robustness to parametric uncertainties. These uncertainties may appear as variations in the parameter values due to different physical processes, and may or may not be time-variant. In this example, we choose a time-varying uncertainty with a sinusoidal form. The generalized observer is capable of estimating the function z(t) with minor impact on performance. This parameter variation allows us to observe the robustness characteristics of the GFO compared with a simpler observer structure such as the PFO.
The simulation is carried out using the input and output of the nonlinear system (53) and (54) subject to the parametric uncertainty of Figure 2; the T-S system (71) and (72) is used only for the observer design, with suitable initial conditions for the nonlinear system. As can be seen in Figures 3-5, the generalized observer is capable of estimating the state of the system even in the presence of parametric uncertainties. This is an advantage compared to the classical proportional observers. The performance of the generalized observer and of the proportional observer is compared through error indexes, which provide important information about the estimation error. For the case of a parametric uncertainty, the generalized approach outperforms the classical proportional observer. The error indexes, the Integral of the Absolute Error (IAE) and the Integral of the Time-weighted Absolute Error (ITAE), are shown in Table 2.
From Table 2, it can be seen that the proposed GFO exhibits better performance in comparison to the proportional observer. The generalized approach allows a better estimation of the function when parametric uncertainties are present in the system.
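The IAE and ITAE indexes used in Table 2 are simple integrals of the estimation error; a sketch of their numerical evaluation from a sampled error signal is given below. The time grid and the error signal are placeholders, not the simulation data.

```python
import numpy as np

def error_indexes(t, e):
    """IAE = integral of |e(t)| dt; ITAE = integral of t*|e(t)| dt,
    both computed by trapezoidal integration over the sample times t."""
    t = np.asarray(t, float)
    ae = np.abs(np.asarray(e, float))
    return np.trapz(ae, t), np.trapz(t * ae, t)

t = np.linspace(0.0, 10.0, 1001)      # placeholder time grid
e = np.exp(-t) * np.sin(5.0 * t)      # placeholder error signal
iae, itae = error_indexes(t, e)
print(f"IAE = {iae:.4f}, ITAE = {itae:.4f}")
```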
Conclusions
In this paper, a method for function estimation based on a generalized observer for descriptor Takagi-Sugeno systems is presented. Function estimation can serve two main objectives. The first, used in the simulation example presented herein, is the estimation of the non-measurable states, in which case the observer can be considered a reduced-order observer; an even smaller number of states could be estimated, provided that the impulse functional observability of Assumption 1 is satisfied. The second objective is to estimate a control law based on the states without estimating the states independently, estimating the function directly instead. The design and existence conditions of the proposed observer were provided through a Lyapunov-based stability analysis. An example of a physical system was used to compare the GFO with a PFO, both in the presence of parametric uncertainties. This work can be extended in the future to include fault estimation, time delays, or the estimation of a fault-tolerant control law. The descriptor approach of this paper can also be used in future work, since it is a powerful tool for representing a large class of mathematical systems. | 2023-06-04T15:10:47.335Z | 2023-06-02T00:00:00.000 | {
"year": 2023,
"sha1": "dc091fac93d29a80c5c443dc7149eb924f571fdc",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/pr11061707",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a242cf28645c86fb146a8d5101c5f2a58167e0cd",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
119246482 | pes2o/s2orc | v3-fos-license | Charmed ${\bf (70,1^-)}$ baryon multiplet
The masses of negative parity $(70,1^-)$ charmed nonstrange baryons are calculated in the relativistic quark model. The relativistic three-quark equations of the $(70,1^-)$ charmed baryon multiplet are found in the framework of the dispersion relation technique. Approximate solutions of these equations are obtained using a method based on the extraction of the leading singularities of the amplitude. The calculated mass values of the $(70,1^-)$ charmed baryons are in good agreement with the experimental data.
Introduction.
For many years CLEO was the main source of data on orbitally excited charmed baryons [1]. An excited Σ_c candidate has been seen decaying to Λ_c π^+, with mass about 510 MeV above M(Λ_c) [2]. The first excitations of the Λ_c and Ξ_c scale well from the first Λ excitations Λ(1405, 1/2^-) and Λ(1520, 3/2^-). The highest Λ_c was seen by BaBar in the decay mode D^0 p [3]. The highest Ξ_c were reported by the Belle Collaboration in Ref. [4] and confirmed by BaBar [5].
In the recent reviews [6,7] the spectroscopy of hadrons containing heavy quarks and some of its theoretical interpretations are given; these reviews discuss progress on orbitally excited charmed baryons.
In the series of papers [8-12] a practical treatment of relativistic three-hadron systems has been developed. The physics of a three-hadron system is usefully described in terms of pairwise interactions among the three particles. The theory is based on the two principles of unitarity and analyticity, as applied to the two-body subenergy channels. Linear integral equations in a single variable are obtained for the isobar amplitudes.
Instead of quadrature methods of obtaining the solution, a set of suitable functions is identified and used as the basis set for the expansion of the desired solutions. By this means the coupled integral equations are solved in terms of simple algebra.
In our papers [13,14] a relativistic generalization of the three-body Faddeev equations was obtained in the form of dispersion relations in the pair energy of two interacting particles. The mass spectrum of S-wave baryons including u, d, s quarks was calculated by a method based on isolating the leading singularities in the amplitude. We searched for the approximate solution of the integral three-quark equations by taking into account two-particle and triangle singularities, all the weaker ones being neglected. In this approximation, with all the smooth functions of the subenergy variables (as compared with the singular part of the amplitude) defined at the middle point of the physical region of the Dalitz plot, the problem reduces to solving a system of simple algebraic equations.
In the recent paper [15] the relativistic three-quark equations of the excited (70, 1^-) baryons were found in the framework of the dispersion relation technique. In that paper the orbital-spin-flavor wave functions were constructed and used for the construction of the integral equations, taking into account the u, d, s quarks. We represented the 30 nonstrange and strange resonances belonging to the (70, 1^-) multiplet; 15 resonances are in good agreement with the experimental data, and we predicted 15 baryon masses. In our model four parameters are used: the gluon coupling constants g_+ and g_- for the two parities, and the cutoff energy parameters λ, λ_s for the nonstrange and strange diquarks.
The present paper is organized as follows. Section 2 is devoted to the construction of the orbital-spin-flavor wave functions for the charmed (70, 1^-) baryon multiplet. In Section 3 the relativistic three-quark equations are constructed in the form of dispersion relations over the two-body subenergy. In Section 4 the systems of equations for the reduced amplitudes are derived. Section 5 is devoted to the calculated results for the P-wave charmed baryon mass spectrum (Tables I-IV). In the Conclusion, the status of the considered model is discussed.
Here we deal with a three-quark system having one unit of orbital excitation, taking into account the u, d, c quarks. The orbital part of the wave function must have mixed symmetry. The spin-flavor part of the wave function possesses the same symmetry in order to obtain a totally symmetric state in the orbital-spin-flavor space.
For the sake of simplicity we first derive the wave functions for the decuplet (10, 2). The fully symmetric wave function for the decuplet state can be constructed as in [16].
Then we obtain the decuplet wave function, where MA and MS define the mixed antisymmetric and mixed symmetric parts of the wave function. The functions ϕ_MS are given accordingly; ↑ and ↓ determine the spin directions, and 1 and 0 correspond to excited or nonexcited quarks. The three projections of the orbital angular momentum are l_z = 1, 0, -1. The (10, 2) multiplet with J^P = 3/2^- can be obtained using spin S = 1/2 and l_z = 1, while the (10, 2) multiplet with J^P = 1/2^- is determined by spin S = 1/2 and l_z = 0.
We construct the SU(3) function for each particle of the multiplet. For instance, the SU(3) function for the Σ_c^+ hyperon of the decuplet has the corresponding form, from which we obtain the SU(6) × O(3) function for the Σ_c^+ of the (10, 2) multiplet; here the parentheses denote the symmetrized function. The wave functions of the Σ_c^0 and Σ_c^- hyperons can be constructed in a similar way. For the Ξ_cc^{0,-} states of the (10, 2) multiplet the wave function is similar to that of the Σ_c^{+,-} states with the replacement u ↔ c or d ↔ c. The wave function for the Ω_ccc of the (10, 2) decuplet is determined in the same manner. The wave functions and the method of construction for the multiplets (8, 2), (8, 4) and (1, 2) are similar.
For the construction of the (70, 1^-) charmed baryon multiplet integral equations we need to use projectors onto the different diquark states. The projectors onto the symmetric and antisymmetric states can be obtained accordingly, and four types of totally symmetric projectors result. We use these projectors for the consideration of the various diquarks, such as u↑c↓ and u↑c↑. Here the lower index determines the value of the spin projection, and the upper index corresponds to the value of the orbital angular momentum.
We also consider the projectors (21)-(24), which are similar to (17)-(20) with the replacement c → u, and use the corresponding amplitudes, where A is the three-quark amplitude.
For the sake of simplicity we derive the relativistic Faddeev equations using the Σ_c hyperon with J^P = 3/2^- of the (10, 2) multiplet. We use the graphic equations for the amplitudes A_J(s, s_ik). In order to represent the amplitude A_J(s, s_ik) in the form of dispersion relations, it is necessary to define the amplitudes of the quark-quark interaction a_J(s_ik). The pair quark amplitudes qq → qq are calculated in the framework of the dispersion N/D method with an input four-fermion interaction with the quantum numbers of the gluon [17]. We use the results of our relativistic quark model [18] and write down the pair quark amplitudes, where G_J is the diquark vertex function, and B_J(s_ik) and ρ_J(s_ik) are the Chew-Mandelstam function [19] and the phase space, respectively. Here s_ik is the two-particle subenergy squared (i, k = 1, 2, 3) and s is the squared total energy of the system. For the state J^P = 3/2^- of the (10, 2) multiplet we use three diquarks. The coefficients α_J, β_J and δ_J of the Chew-Mandelstam function are given in Table V. In the case in question the interacting quarks do not produce a bound state, so the integration in the dispersion integrals is carried out from (m_i + m_k)² to ∞.
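The explicit form of the pair amplitude is not reproduced above. In the dispersion N/D method with a separable vertex it conventionally takes the form below; this is a hedged reconstruction assembled from the definitions just quoted (vertex function G_J, Chew-Mandelstam function B_J, phase space ρ_J, and the stated integration limits), not a verbatim equation from the paper.

```latex
a_J(s_{ik}) = \frac{G_J^2(s_{ik})}{1 - B_J(s_{ik})}\,,
\qquad
B_J(s_{ik}) = \int_{(m_i+m_k)^2}^{\infty}
\frac{ds'_{ik}}{\pi}\,
\frac{\rho_J(s'_{ik})\,G_J^2(s'_{ik})}{s'_{ik}-s_{ik}}\,.
```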
All diagrams are classified over the last quark pair (Fig. 1). We use the diquark projectors and consider again the Σ_c 3/2^- particle of the (10, 2) multiplet. Its wave function contains the contribution u_1↓u↑c↑, which includes three diquarks: u_1↓u↑, u_1↓c↑ and u↑c↑. The diquark projectors allow us to obtain Equations (28)-(30) (with the definition ρ_J(s_ij) ≡ k_ij).
All members of the wave function can then be considered; after grouping these members we obtain the amplitude equations. The left side of the diagram (Fig. 2) corresponds to the quark interactions. The right side of Fig. 2 shows the zeroth approximation (first diagram) and the subsequent pair interactions (second diagram). The contribution to u_1↓u↑c↑ is shown in Fig. 3.
If we group the same members, we obtain the system of integral equations for the Σ_c state with J^P = 3/2^- of the (10, 2) multiplet. Here the function L_J(s_ik) and the integral operator K_J(s_ik) enter the equations, and b_J(s_ik) is the truncated Chew-Mandelstam function. z is the cosine of the angle between the relative momentum of particles i and k in the intermediate state and the momentum of particle j in the final state, taken in the c.m. frame of particles i and k. Let some current produce the three quarks (first diagram of Fig. 1) with the vertex constant λ. This constant does not affect the mass spectra of the excited baryons.
By analogy with the Σ_c 3/2^- (10, 2) state we obtain the rescattering amplitudes of the three various quarks for all P-wave states of the (70, 1^-) multiplet, which satisfy the system of integral equations.
Let us extract the two-particle singularities in A_J(s, s_ik), where α_J(s, s_ik) is the reduced amplitude. Accordingly, all integral equations can be rewritten using the reduced amplitudes. For instance, consider the first equation of the system for the Σ_c J^P = 3/2^- of the (10, 2) multiplet. The connection between s'_12 and s'_13 is given in [20]; the formula for s'_23 is similar to (38) with z replaced by -z.
A cutoff at large values of s_ik determines the contribution from small distances.
The construction of the approximate solution of (37) is based on the extraction of the leading singularities, which are close to the region s_ik = (m_i + m_k)² [20]. Amplitudes with different numbers of rescatterings have the following structure of singularities. The main singularities in s_ik arise from pair rescattering of particles i and k. First of all, there are threshold square-root singularities. Pole singularities, which correspond to bound states, are also possible. The diagrams in Fig. 2, apart from two-particle singularities, have their own specific triangle singularities. Such a classification allows us to search for the approximate solution of (37) by taking into account some definite number of leading singularities and neglecting all the weaker ones.
We consider the approximation which corresponds to a single interaction of all three particles (two-particle and triangle singularities), neglecting all the weaker singularities.
The functions α_J(s, s_ik) are smooth functions of s_ik as compared with the singular part of the amplitude; hence they can be expanded in a series about the singular point, and only the first term of this series is employed further. As s_0 it is convenient to take the middle point of the physical region of the Dalitz plot, in which z = 0; in this case we set s_ik = s_0. Here the reduced amplitudes for the diquarks 1^+, 1^+_c, 1^-_c are given. The function I_{J_1 J_2}(s, s_0) takes into account the singularity which corresponds to the simultaneous vanishing of all propagators in the triangle diagrams.
The functions G_J(s_ik) have a smooth dependence on the energy s_ik [18]; therefore we treat them as constants. The parameters of the model, the cutoff parameter λ_J and the vertex constants g_J, are chosen dimensionless. Here m_i and m_k are the quark masses in the intermediate state of the quark loop. We solve the system of equations and determine the mass values of the Σ_c J^P = 3/2^- (10, 2) state; we calculate a pole in s which corresponds to the bound state of the three quarks.
By analogy with the Σ_c hyperon we obtain the systems of equations for the reduced amplitudes for all particles of the (70, 1^-) multiplet.
Calculation results.
The quark masses (m_u = m_d = m and m_c) are not fixed in advance; we assume m = 570 MeV and m_c = 1900 MeV. The value of the nonstrange mass m is similar to that in our earlier paper [15]. In our model four parameters are used: the gluon coupling constants g_c = 0.85 and g_u = 0.58, and the cutoff energy parameters λ_c = 9.2 and λ_u = 10. In Tables I-IV we present the masses of the charmed resonances belonging to the (70, 1^-) multiplet, obtained using a fit to the experimental values [21].
The (70, 1^-) charmed baryon multiplet has 23 baryons with different masses. Six resonances are in good agreement with the experimental data [21], and we have predicted 17 masses of charmed excited baryons.
Within the proposed approximate method of solving the relativistic three-particle problem, we have obtained a satisfactory spectrum of P-wave charmed baryons.
Conclusion.
In strongly bound systems of light and heavy quarks, such as the charmed baryons considered here, where p/m ∼ 1 for the light quarks, the approximation by nonrelativistic kinematics and dynamics is not justified. A relativized quark model was applied to baryon spectroscopy by Capstick and Isgur [22].
In the papers [13,14] the relativistic generalization of the Faddeev equations in the framework of dispersion relations was constructed, and the S-wave baryon masses were calculated using the method based on the extraction of the leading singularities of the amplitude. The behavior of the electromagnetic form factors of the nucleon and hyperons in the region of low and intermediate momentum transfers was determined in [23]. In the framework of the dispersion relation approach, the charge radii of the S-wave baryon multiplets with J^P = 1/2^+ were calculated.
In our paper [24] the relativistic Faddeev equations for the S-wave charmed baryons were constructed, and the mass spectra of single, double and triple charm baryons were calculated using the input four-fermion interaction with the quantum numbers of the gluon.
In the framework of a relativistically covariant constituent quark model, the mass spectrum of P-wave charmed baryons was calculated on the basis of the Bethe-Salpeter equation in its instantaneous approximation [25].
In our paper [15] the relativistic description of the three-particle amplitudes of P-wave baryons was considered, taking into account the u, d, s quarks. The mass spectrum of the nonstrange and strange states of the (70, 1^-) multiplet was calculated using only four parameters for 30 baryon masses. We took into account the mass shift of the u, d, s quarks, which allows us to obtain the P-wave baryon bound states [15]. Recently, the mass spectrum of the (70, 1^-) multiplet baryons was calculated using the 1/N_c expansion [26]; the authors solved the problem by removing the splitting of the generators and using orbital-flavor-spin wave functions.
We also use the orbital-flavor-spin wave functions for the construction of the integral equations, which allows us to calculate the mass spectra for all charmed baryons of the (70, 1^-) multiplet. An important open problem is the mixing of the P-wave baryons with five-quark systems (cryptoexotic baryons) [27] and hybrid baryons [28].

Appendix A. The wave functions of the (10, 2) decuplet.
We considered this decuplet in Section 2. The totally symmetric SU(6) × O(3) wave function for each decuplet particle has the corresponding form, with the functions ϕ defined as above. For the Σ_c^+ hyperon the SU(3) function is given, from which one obtains the SU(6) × O(3) function of the Σ_c of the (10, 2) multiplet. The replacement u ↔ c or d ↔ c allows us to obtain the Ξ_cc wave function from the Σ_c wave function. In the case of the Ω_ccc the SU(6) × O(3) wave functions are given analogously.

The wave functions of the (8, 2) octet.
The wave functions of the 3/2^- and 1/2^- (8, 2) octet multiplet are constructed analogously; here the ϕ_SU(6) functions for the Σ states have the corresponding form, from which we obtain the symmetric wave function for the Σ_c^+. The Ξ_cc^0 wave functions are obtained with the replacement u ↔ c in the Σ_c^+.

The wave function of the (8, 4) octet.
We can use the totally symmetric SU(6) × O(3) wave function in the corresponding form; as a result we obtain the (8, 4) octet wave functions.

Appendix B. The integral equations for the (70, 1^-) multiplet. | 2007-09-04T11:35:31.000Z | 2007-09-04T00:00:00.000 | {
"year": 2007,
"sha1": "c20db93be6722915d91743d1037dc2205d9ef9e2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0709.0397",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c20db93be6722915d91743d1037dc2205d9ef9e2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
235398898 | pes2o/s2orc | v3-fos-license | Correlation of gastrointestinal perforation location and amount of free air and ascites on CT imaging
Purpose To analyze the amount of free abdominal gas and ascites on computed tomography (CT) images relative to the location of a perforation. Methods We retrospectively included 172 consecutive patients (93:79 = m:f) with GIT perforation who underwent abdominal surgery (ground truth for perforation location). The volumes of free air and ascites were quantified on CT images by 4 radiologists and semiautomated software. The relation of the perforation location (upper/lower GIT) to the amounts of free air and ascites was analyzed by the Mann-Whitney test. Furthermore, the best volume cutoffs for upper and lower GIT perforation, areas under the curve (AUC), and interreader volume agreement were assessed. Results There was significantly more abdominal ascites with upper GIT perforation (333 ml, range 5 to 2000 ml) than with lower GIT perforation (100 ml, range 5 to 2000 ml, p = 0.022). The highest volume of free air was found with perforations of the stomach, descending colon and sigmoid colon. Significantly less free air was found with perforations of the small bowel and ascending colon compared to the aforementioned. An ascites volume > 333 ml was associated with an upper GIT perforation, demonstrating an AUC of 0.63 ± 0.04. Conclusion Using a two-step process based on the volumes of free air and ascites can help localize the site of perforation to the upper, middle or lower GI tract.
Introduction
Breaching of the gastrointestinal (GI) tract wall can be due to ulcer disease, inflammatory disease, blunt or penetrating trauma, iatrogenic factors, a foreign body or a neoplasm [1][2][3][4][5][6]. The most important questions to be answered regard the identification of the presence, location, and cause of the perforation in order to perform the appropriate therapeutic procedure. Gastrointestinal tract (GIT) perforation is a major life-threatening condition with high morbidity and mortality that requires emergency surgery; despite improvements in surgical and medical treatments, the overall mortality rate is 30%, and the mortality rate of cases that also involve diffuse peritonitis is up to 70% [7][8][9][10]. Clinical diagnosis of the site of GIT perforation is difficult, as the symptoms may be nonspecific. The clinical presentation varies: esophageal perforations can present with acute chest pain, odynophagia and vomiting; gastroduodenal perforations cause acute, severe abdominal pain; while colonic perforations tend to follow a slower course of progression with secondary bacterial peritonitis or localized abscesses.
The presence of free intraperitoneal gas on a routine radiograph usually indicates bowel perforation. Some studies have shown that as little as 1 ml of gas can be detected below the right hemidiaphragm on properly exposed erect chest radiographs [11]. Plain film radiography (erect chest and abdominal radiographs) is sensitive in only 50-70% of cases, and the site of perforation is almost never elucidated [12,13]. A left lateral decubitus film can also be used for the detection of small amounts of free air that may be interposed between the free edge of the liver and the lateral wall of the peritoneal cavity. When interpreting a right lateral decubitus image, gas within the stomach or colon may obscure small amounts of free air. Other modalities include ultrasound, which may be particularly useful in patient groups where the radiation burden should be limited, notably children and pregnant women. However, ultrasound should not be considered a first choice in excluding pneumoperitoneum [14]. Computed tomography (CT) is useful in detecting minute amounts of extraluminal gas [15,16]; the sensitivity of CT for free gas lies between 92 and 100% [17][18][19][20]. A study of multidetector CT showed 86% accuracy in identifying the site of perforation [21].
Time is of the essence in these patients. Knowing the exact location of GIT perforation is crucial for surgeons, since the operation time, as well as morbidity and mortality, can thereby be decreased.
In this study, we aimed to predict the location of perforations by analyzing the amount of free abdominal gas and ascites on CT images. Our hypothesis was that more free gas than ascites in the abdomen indicates an upper GIT perforation; and more ascites than gas indicates a lower GIT perforation due to peritonitis.
Materials and methods
The Cantonal Ethics Committee approved this retrospective study (Ethics Approval Nr. 2020-01279).
Recruitment
A full-text search for "perforation" in the radiological information system (RIS, GE Healthcare, Chicago, Illinois, USA) was performed between 01.01.2003 and 01.01.2020 by a PhD student. Patients who had been examined by abdominal CT in our emergency room with or without the use of contrast media and whose records included the word "perforation" in the radiological report were included.
Exclusion criteria: (1) Patients without an available operation report were excluded (not operated on, or not operated on in our hospital). (2) Patients with no perforation in the radiological report were excluded ("suspicion of perforation" or "no signs of perforation", both appearing in the radiological full-text search). (3) Perforations with obvious perforation locations, like covered perforations or extraperitoneal perforations, were excluded. (4) Patients with reasons for free air other than GIT perforation were excluded, such as postinterventional (drainage) or postsurgical free gas, as well as posttraumatic patients with perforating injuries (abdominal wall laceration/defect). (5) Patients with reasons for ascites other than GIT perforation were excluded, such as liver cirrhosis, peritoneal carcinomatosis, pancreatitis and trauma with hemorrhagic ascites (trauma patients with hyperdense ascites were excluded; nontrauma patients with hyperdense ascites (blood or contrast) were included).
A total of 223 patients were found and matched to the surgical operation reports from the CGM CLINICAL clinical information system (CompuGroup Medical Schweiz AG, Bern, Switzerland, Version 7.16). The location of the perforation was extracted from the operation reports and represented the ground truth. After applying the exclusion criteria, 172 patients remained for our study (flowchart in Fig. 1). The matching CT images were selected and anonymized by the PhD student and transferred to a read-out folder in our picture archiving and communication system (PACS, IDS7, Sectra, Linköping, Sweden). Consecutive case numbering was assigned to each patient.
Image acquisition
Two different CT scanners (Siemens SOMATOM Sensation 16 and SOMATOM Definition Edge, both from Siemens Healthcare, Erlangen, Germany) were used. Before 2012, the older SOMATOM Sensation 16 applied 120-140 kVp, 160 reference mAs tube current, 16 × 0.75-mm collimation, 1.15 pitch and standard filtered back projection, with slice thicknesses of 5 and 2 mm. A total of 120 ml of standard iodinated contrast medium (CM) with 300 mg/ml iodine (iobitridol, Xenetix 300; Guerbet, Aulnay-sous-Bois, France) was administered intravenously (i.v.) with an image acquisition delay of 60 s and a flow rate of 3 ml/s. Telebrix Gastro with an iodine concentration of 300 mg/ml (meglumine ioxitalamate, Guerbet, Aulnay-sous-Bois, France) was used as oral and rectal CM: 24 ml of CM was dissolved in 800 ml of tap water. This CM was administered orally 1 h prior to the CT exam and instilled rectally directly before the scan.
Since 2012, the newer SOMATOM Definition Edge has used 100-140 kVp (CARE kV), 120-160 reference mAs as the tube current, 128 × 0.6-mm collimation, 0.6 pitch and iterative reconstruction (SAFIRE, level 3). Transverse images were reconstructed at intervals of 5 and 1 mm. One hundred milliliters of Iomeron 400 mg/ml was injected intravenously at a flow rate of 3 ml/s, and data acquisition was started after 70 s. No oral or rectal CM was used anymore. Before 2012, the gold standard in emergency abdominal CT imaging was IV, oral and rectal CM application [22,23]. To save time for critically ill patients, our department changed the emergency CT protocols in 2012 to IV CM without oral or rectal CM application [24][25][26].
Image analysis
Images were reviewed by 4 board-certified radiologists (designated 1, 2, 3, and 4) with 26, 33, 9, and 22 years of experience in abdominal imaging, respectively. Radiologists 1 and 2 analyzed 100 cases separately, and radiologists 3 and 4 read 92 cases separately (different from the first 100 cases). All four readers were blinded to each other's readings and to the perforation location. During the read-out, a total of 20 patients had to be excluded (e.g., posttraumatic, postoperative and covered perforations; details in the flowchart of Fig. 1), leaving the remaining 172 study patients. The average volume of the paired readers (the mean of the two readers' estimates) was calculated for free air and ascites.

Fig. 1 Flowchart for inclusion and exclusion of patients, Preferred Reporting Items for Systematic Analyses (PRISMA). Covered perforations were excluded: the perforation is sealed off by adjacent organs, but the perforation location is obviously revealed by small air bubbles in the close neighborhood of the perforation.

The amount of free air and ascites was rated by visual comparison to the volume of cooking or drinking units: teaspoon (5 ml), tablespoon (15 ml), shot glass (40 ml), small drinking glass (1 dl = 100 ml), soda can (333 ml), and PET bottle (500 ml/1000 ml/1500 ml/2000 ml; PET: polyethylene terephthalate plastic bottle). Two additional medical technicians from our imaging laboratory performed a semiautomated, computer-aided volume measurement of free air and ascites in 35 of the first 100 cases and 30 of the following 92 patients using syngo.via (Siemens Healthcare GmbH, Erlangen, Germany). Semiautomated means that the software uses a region-growing process that suggests the area of free air/ascites on the CT scan. The 3D annotation requires time-consuming manual adjustment by the medical technician to differentiate air from feces and fat (or ascites from soft-tissue structures) (Fig. 2). On average, the 3D analysis of one patient took 30 min. Because of that time constraint, we planned the 3D analysis in only one third of the patients, selected randomly. The semiautomated measurement was used to assess the accuracy and agreement of the radiologists in estimating the volumes of ascites and free air.
The human raters determined whether there was more gas than ascites in order to test our hypothesis as an easily applicable tool that requires no measurements by the radiologists. Furthermore, they estimated the most likely location of the perforation using radiological criteria (most extensive free gas or air bubble accumulation, bowel wall thickening or discontinuity, and most extensive mesenteric fat imbibition). They classified the perforation location as follows: (1) stomach, (2) duodenum, (3) jejunum, (4) ileum, (5) ascending colon, coecum or appendix, (6) transverse colon, (7) descending colon and (8) sigmoid colon or proximal rectum. In addition, the radiologists had to indicate their confidence level in rating the perforation location: 1 = no confidence in localizing the perforation site; 2 = some confidence; 3 = 50%:50% confidence; 4 = reasonably sure; 5 = 100% sure of the location.
For each reader, a master read-out Excel file was compiled and saved daily with a traceable calendar date on a server drive with restricted access for radiologists only. For each reader, a personal encoded read-out Excel file was compiled, containing only the patient code and the read-out variables (volume and location). The Excel sheets were stored in SharePoint of our hospital domain.
Statistics
The label "air scenario" was attributed to cases with more free air than ascites, and the label "ascites scenario" to patients with more ascites than free air. Perforation locations in the (1) stomach, (2) duodenum, (3) jejunum, (4) ileum, (5) ascending colon, coecum or appendix, (6) transverse colon, (7) descending colon and (8) sigmoid colon or proximal rectum were pooled into upper GIT (1-4) and lower GIT (5-8) perforations. The prevalence of the perforation location was analyzed per GIT segment (1-8) by chi-square testing. The median absolute volumes of air and ascites for perforations of each GIT segment were calculated using the average volume estimates of both radiologists. Comparisons among the different segments were performed using the rank-sum test (Mann-Whitney independent testing). The volumes of free air and ascites alone, as well as their sum, difference and ratio, were tested for classifying the perforation location as upper or lower GIT using the rank-sum test (Mann-Whitney independent testing). The "air scenario" and "ascites scenario" were analyzed by chi-square testing as signs of upper or lower GIT perforation. Receiver operating characteristic (ROC) statistics for the best volume cutoff for upper and lower GIT perforation were calculated, delivering individual sensitivities, specificities and areas under the curve (AUCs). Furthermore, interreader agreement for volume rating was assessed among the four radiologists, and the correlation coefficient and limits of agreement between the semiautomated and radiologist measurements were calculated. Because the radiologists rated volumes by drinking units, the number of possible entries was limited and the weighted kappa could be calculated. For the volume comparison with the machine (continuous volume spectrum), the correlation coefficient and the Bland-Altman agreement approach were used. For the kappa classification of interrater agreement, κ < 0 = poor agreement, 0.0-0.20 = slight, 0.21-0.40 = fair, 0.41-0.60 = moderate, 0.61-0.80 = substantial, and 0.81-1.00 = (almost) perfect agreement was used. MedCalc (Version 7.6.0.0, Ostend, Belgium) was used for the statistical computation. A significance level of p < 0.05 was applied. The radiologists' correct perforation location was expressed as the sensitivity for segmental classification and for upper/lower GIT classification. Furthermore, the radiologists' experience was compared for correct perforation location prediction (chi-square testing).
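For readers who want to reproduce the core comparisons, a sketch of the analysis pipeline on synthetic stand-in data (not the study data) follows: the Mann-Whitney rank-sum test for the group comparison and an ROC curve for the upper-versus-lower GIT classification.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
# Synthetic stand-ins for ascites volumes (ml); group sizes match the study.
ascites_upper = rng.gamma(2.0, 180.0, size=93)   # upper-GIT perforations
ascites_lower = rng.gamma(2.0, 90.0, size=79)    # lower-GIT perforations

# Rank-sum comparison of the two groups (Mann-Whitney U).
u_stat, p_value = mannwhitneyu(ascites_upper, ascites_lower,
                               alternative="two-sided")

# ROC analysis: can ascites volume separate upper from lower GIT?
labels = np.r_[np.ones(93), np.zeros(79)]
volumes = np.r_[ascites_upper, ascites_lower]
fpr, tpr, thresholds = roc_curve(labels, volumes)
print(f"p = {p_value:.4f}, AUC = {auc(fpr, tpr):.2f}")
```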
Results
Between 01.01.2003 and 01.01.2020, a total of 172 abdominal cases with GIT perforations were finally analyzed. The median age was 66.1 years (range 1.2 to 94.4 years), and the sex ratio was 93:79 = m:f. In 5% of our entire study population, CT scans were performed without IV contrast media. Thirty percent, 45% and 25% of the study patients received oral, both oral and rectal and no GIT contrast media, respectively.
Location of the GIT perforation (consecutive study)
A total of 54.1% (n = 93) of all perforations were found in the upper GIT, and 45.9% (n = 79) were found in the lower GIT. The top three locations were the sigmoid colon, stomach and ascending colon, with prevalences of 32.6% (n = 56), 20.3% (n = 35) and 14.5% (n = 25), respectively. The transverse colon demonstrated significantly fewer perforations than the sigmoid colon (2.9%, p < 0.0001, entire prevalence statistics shown in Fig. 3 and Table 1).
Dependency of the perforation location and the amount of free air
All of the patients demonstrated free air, since this was an inclusion criterion. The median volume of free air was 174 ml (25th-75th percentiles = 40-417 ml) in perforations of the upper GIT, compared to 100 ml (25th-75th percentiles = 28-500 ml) in perforations of the lower GIT (p = 0.47). The highest volumes of free air were found in perforations of the stomach, descending colon and sigmoid colon (333, 333 and 143 ml; entire statistics in Table 1, Figs. 4 and 5). Significantly less free air was present in perforations of the small bowel and ascending colon.
Dependency of perforation location and amount of ascites
All of the study patients exhibited a certain amount of ascites (100% prevalence). There was significantly more ascites in the abdomen when the perforation was located in the upper GIT (median: 333 ml, percentiles 25-75% = 70-750 ml) than when it was located in the lower GIT (median 100 ml, percentiles 25-75% = 31-333 ml, p = 0.022). Most ascites was found with perforations of the stomach, duodenum and ileum (median: 417, 333, 250 ml, entire statistics in Table 1 and Fig. 5, typical examples in Fig. 6). The presence of significantly less ascites indicated perforations of the large bowel (Fig. 7), e.g., a perforation in the ascending colon yielded an ascites volume of only 70 ml (p = 0.004, compared to a perforation of the stomach).
Relationship of free air and ascites dependent on the perforation location
The difference of the air and ascites volumes (V_air - V_asc) and the sum of both volumes also demonstrated significant differences between the upper and lower GIT (p = 0.005 and p = 0.037). However, the p value of the ascites difference between upper and lower GIT perforations alone was lower (p = 0.0023). The scatterplot diagrams show the relation between free air and ascites depending on the perforation location (Fig. 8). In addition, neither the volume ratio of air to ascites between the upper and lower GIT perforations nor the fact that there was more ascites than air (ascites scenario) led to significant differences (p = 0.18, p = 0.31).
ROC analysis of ascites and air
The AUC was 0.63 ± 0.04 when the amount of ascites (ml) was used for differentiating the perforation location (upper vs lower GIT, Fig. 9). A cutoff level of 333 ml of ascites was the best criterion for location detection: when a perforation (free air) presented with 333 ml of ascites or less, it was more likely a lower GIT perforation (large bowel). The odds ratio for a perforation in the lower GIT when demonstrating less than 333 ml of ascites was 3.52 (95% CI 2.7-4.0). This threshold led to a sensitivity and specificity for large bowel perforation of 80.7% and 45.6%, respectively. Using the volume of free air, a cutoff value of 70 ml or less indicated a lower GIT perforation. However, the odds ratio for a perforation in the lower GIT with less than 70 ml of free air was only 1.76 (95% CI 1.3-2.0), and the AUC was lower (0.58 ± 0.05), with a sensitivity and specificity of 53.8% and 67.9%, respectively.
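The reported odds ratios follow from the 2×2 table induced by each cutoff; a sketch of the standard computation with a Woolf-type 95% confidence interval is shown below. The cell counts are placeholders, not the study's table.

```python
import numpy as np

def odds_ratio(a, b, c, d):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
        a = lower GIT, ascites <= cutoff    b = lower GIT, ascites > cutoff
        c = upper GIT, ascites <= cutoff    d = upper GIT, ascites > cutoff
    OR = (a/c) / (b/d) = (a*d) / (b*c)."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log)
    return or_, (lo, hi)

# Placeholder counts for illustration only.
print(odds_ratio(a=60, b=19, c=42, d=51))
```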
Sensitivity and confidence of radiologists for perforation location
Radiologists scored a higher sensitivity for estimating the perforation location when the upper GIT (stomach, duodenum, jejunum and ileum) and the lower GIT (ascending, transverse, descending and sigmoid colon with rectum) were pooled together (sensitivity = 0.91 ± 0.04). When they had to guess the individual parts of the GIT, the sensitivity dropped to 0.68 (± 0.09) at a relatively high confidence level of 3.6 (± 0.2), meaning that the radiologists were "reasonably sure" of their location prediction (Table 2).

Figure caption: Leakage of the descending and sigmoid colon demonstrated more free air than ascites (median in orange); only volumes smaller than 1000 ml are shown.
The interobserver agreement between rater 1 and rater 2 for the volume of free air was 0.69 ± 0.05, and their interobserver agreement for the volume of ascites was 0.55 ± 0.05. The agreement on the perforation location was 0.81 ± 0.04. For raters 3 and 4, the agreements for air volume, ascites volume and perforation location were 0.64 ± 0.05, 0.68 ± 0.04, and 0.81 ± 0.06, respectively. Overall, substantial agreement between air and ascites estimates by the naked eye was reached (0.64 ± 0.05), and for the perforation location, almost perfect agreement could be scored (0.81 ± 0.05). There was no difference in correct classification between the more experienced radiologists (1/2) and moderately experienced radiologists (3/4) (p = 0.4).
Fig. 10 Bland-Altman plots: volume differences between machine and radiologist for air (left side) and ascites (right side).
Discussion
Our results demonstrate that both ascites and free air volumes are larger in upper GIT perforations. Radiologists could very accurately name the location of the perforation using their experienced skills for detection of the smallest air bubbles around the perforation or the detection of wall defects. Experienced radiologists are advantageous, but for residents and fellows on night shifts in an emergency room, determining the amount of ascites may help to find the perforation location, which may be very useful for visceral surgeons.
Determining the volume of ascites might boost the confidence level of radiologists in finding the perforation location. With the application of a simple rule, perforation locations can be classified as upper or lower GIT: one estimates whether the ascites volume is more or less than 333 ml, which is the exact volume of a soda can. This approach helps radiologists visualize and compare liquid volumes. In comparison with the semiautomated volume measurement, the radiologists reached a high agreement with the machine, comparable to the agreement of one radiologist with another; however, the radiologists estimated the volumes to be slightly higher (by 55-65 ml). Since the largest amount of free air is seen in perforations at the beginning and at the end of the GIT (stomach, descending and sigmoid colon) and ascites is found more with upper GIT perforations, the ratio, sum or difference of air and ascites is obviously not as helpful as the amount of ascites alone for localizing the perforation.
The two most frequent sites of perforation were the sigmoid colon (32.6%) and the stomach (20.3%); combined, they represented more than 50% of all our study cases. With this information alone, radiologists should know where to start looking for a leak in the GIT. The fact that perforations in the lower GIT demonstrated a smaller amount of ascites helps separate the two locations. It should be noted that there was an overlap in air and fluid volumes between the stomach and the sigmoid colon, as shown in Figs. 4 and 5. Therefore, the sensitivity and AUC of the proposed volume cutoffs were never 100% for classifying the perforations into upper and lower GIT perforations.
In the future, the following two-step algorithm needs to be investigated (Fig. 11; a sketch follows below):
1. The amount of free air determines whether the perforation is located in the middle of the GIT or not.
2. The volume of ascites then determines whether the perforation location is in the upper or lower GIT.
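A sketch of the proposed two-step rule is given below. The cutoffs are taken from the ROC analysis above (70 ml of free air, 333 ml of ascites) purely for illustration; the paper does not fix a validated threshold for step 1, so treat both values as assumptions rather than clinical decision limits.

```python
def locate_perforation(air_ml, ascites_ml,
                       air_cutoff=70.0, ascites_cutoff=333.0):
    """Two-step localization (Fig. 11 logic, illustrative thresholds).
    Step 1: very little free air points to the middle GIT
            (small bowel / ascending colon).
    Step 2: ascites volume separates upper from lower GIT."""
    if air_ml <= air_cutoff:
        return "middle GIT (small bowel / ascending colon)"
    return "upper GIT" if ascites_ml > ascites_cutoff else "lower GIT"

print(locate_perforation(air_ml=333, ascites_ml=417))   # -> "upper GIT"
```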
When the classification of the radiologists was pooled into the 4 segments suggested by the two-step algorithm (Fig. 11), the four readers together misclassified the perforation location in 69 cases. In these cases the readers demonstrated low confidence, and the proposed algorithm detected 28 correct locations of perforation (40.6%; p = 0.0458 compared with chance (25%)). Previous reports have emphasized CT manifestations of bowel perforation secondary to various causes. However, no previous reports have tried to quantify the most useful findings, such as free air and fluid. For example, Ongolo-Zogo et al. [27] reported on a series of 10 perforated gastroduodenal peptic ulcers in which two important CT findings were indicative of the site of perforation: discontinuity in the bowel wall in six patients and tiny extraluminal air bubbles in close proximity to the bowel wall in two patients. Miki et al. [28] also reported direct visualization of a ruptured colonic wall in four of six patients with colonic perforation. In the study of Hainaux et al. [21], the authors concentrated on free air bubbles in close proximity to the bowel wall and segmental bowel wall thickening as strong predictors of the perforation site.
Our approach differs from that in the study of Seishima et al. [29], in which the author retrospectively concentrated on the CT attenuation values of ascites, demonstrating a higher density of ascites in patients with colorectal perforation than in those with perforations at other sites.
The study of Shanmuganathan et al. [30] shows that helical CT with administration of rectal, oral, and IV contrast material is highly accurate for evaluating patients with penetrating injuries to the thoracoabdominal region. Nevertheless, only 15% of patients with bowel injury showed oral or rectal contrast material extravasation. In our study, we focused mainly on patients with nontraumatic bowel perforations. In such patients, it is often difficult to obtain opacification of the ascites; future studies will have to investigate the density of ascites and the perforation location in traumatic and nontraumatic patients. We partially confirmed the hypothesis that upper GIT perforations yield more free air, but we had to reject the hypothesis that lower GIT perforations would result in more ascites; our results showed the exact opposite. Our aim was to devise a simple rule that can always be applied without complicated, time-consuming measurements in any emergency CT imaging unit, based on the fact that retrospective identification of the site of perforation helps the emergency department physician plan the appropriate treatment in a potentially unstable patient and assists the surgeon in planning the correct surgical approach.
Limitations
We did not consider the delay between the time of onset of symptoms and the time of the CT examination. Potential peritonitis caused by either upper or lower GIT perforations may lead to more ascites over time, which could confound our results. Our results represent measurements in a consecutive population of perforation patients in a tertiary care hospital center. Factors other than location, such as the size of the perforation, the density of ascites, or whether the perforation was intraperitoneal or extraperitoneal, are not included in this study. We wanted to focus on the many cases where the location could not primarily be identified by a large interruption of the bowel wall or by small gas bubbles around an extraperitoneal or covered perforation.
Conclusions
Our results demonstrate that the amount of free air is larger in upper GI and distal lower GI perforations than in other sites of the gut, and upper GI perforations have a greater volume of ascites. Using a two-step process based on the volumes of free air and free fluid can accurately localize the site of perforation to the upper or lower GI tract. When used in conjunction with other CT findings, such as location of small extraluminal gas bubbles, these findings can increase the confidence of the radiologist in identification of the site of bowel perforation. This algorithm may be especially helpful for residents or junior attendings in diagnosing the perforation site.
Such information is of vital assistance to the visceral surgeon. | 2021-06-11T14:19:45.323Z | 2021-06-10T00:00:00.000 | {
"year": 2021,
"sha1": "b84059d3cdec6c0f61d51ea0259403b2f7f4b5e6",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00261-021-03128-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b84059d3cdec6c0f61d51ea0259403b2f7f4b5e6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231614263 | pes2o/s2orc | v3-fos-license | Electronic Health Record Algorithm Development for Research Subject Recruitment Using Colonoscopy Appointment Scheduling
Introduction: Electronic health records (EHRs) are often leveraged in medical research to recruit study participants efficiently. The purpose of this study was to validate and refine the logic of an EHR algorithm for identifying potentially eligible participants for a comparative effectiveness study of fecal immunochemical tests (FITs), using colonoscopy as the standard. Methods: An Epic report was built to identify patients who met the eligibility criteria, in order to recruit patients having a screening or surveillance colonoscopy. With the goal of maximizing the number of potentially eligible patients that could be recruited, researchers, with the assistance of information technology and scheduling staff, developed the algorithm for identifying potential subjects in the EHR. Two validation methods, descriptive statistics and manual verification, were used. Results: The algorithm was refined over 3 iterations, leading to the following criteria being used for generating the report: Age, Appointment Made On/Cancel Date, Appointment Procedure, Contact Type, Date Range, Encounter Departments, ICD-10 codes, and Patient Type. Appointment Serial Number/Contact Serial Number were output fields that allowed the tracking of cancellations and reschedules. Conclusion: Development of an EHR algorithm saved time in that most individuals ineligible for the study were excluded before patient medical record review. Running daily reports that included cancellations and rescheduled appointments allowed for maximum recruitment in a time frame appropriate for the use of the FITs. This algorithm demonstrates that refining the algorithm iteratively and adding cancellations and reschedules of colonoscopies increased the accuracy of reaching all potential patients for recruitment. (J Am Board Fam Med 2021;34:49–60.)
Introduction
Subject recruitment methods vary according to the research topic and population being studied, with advertisements, invitation letters, review of patient appointment lists, and/or electronic health records (EHRs) as commonly used methods. EHRs are often leveraged in medical research to recruit study participants efficiently, affording cost containment and study success. 1,2 Medical diagnoses using ICD-10 codes are commonly used for subject recruitment via EHRs. More involved methods of identifying patients include searching medication lists, prescription data, or unstructured (ie, free-text) data when structured data elements do not exist. 3,4 Well-programmed EHR algorithms have been found to enhance subject recruitment. In one study, researchers found recruitment of subjects was faster and more cost-efficient using the EHR patient portal when they used an algorithm to search for diagnosis codes and the medication list. 2 In another study, researchers compared the efficiency and enrollment rate of manual chart review versus an automated prescreening method using an algorithm for the recruitment of patients with a diagnosis of diabetes, high glucose levels, and a recent insulin order. 5 Their algorithm pulled from their data warehouse and not from their live electronic medical record, but it still significantly increased the number of subjects screened and enrolled. 5 Yet other researchers, studying the use of appropriate medications for individuals with asthma, recruited patients through scheduled clinic visits noted in the EHR. 4 Throughout the recruitment process, they encountered problems with their algorithm, such as difficulty detecting same-day visits and clinic appointments that were cancelled or no-shows. Unable to remedy the problem, staff were assigned to review the EHR every day to look for appropriate appointments. 4 Investigators using the EHR for subject recruitment tend to use disease-oriented data; it is less common to use administrative data such as scheduling events. 4 Our study was funded by the National Cancer Institute to address the knowledge gap in test characteristics of fecal immunochemical tests (FITs) using colonoscopy as the gold standard (referred to as the BestFIT study). Subjects were required to complete all the steps of the BestFIT study before their colonoscopy. A major problem with patients scheduled for a colonoscopy is that they cancel the procedure, may or may not reschedule, and are no-shows. This is different from a cancellation or no-show in a primary care clinic, where patients are overbooked, since procedure suites can only schedule a finite number of patients. Patient no-show rates vary by study: for predominately African American populations, no-show rates have been 20% 6 and 23%; 7 in 23 small and large urban primary care physician offices, a no-show rate of 38% was found for first-time colonoscopies; 8 and in a large safety net health care system, nonattendance was 42%. 9 To complete recruitment for the study in a timely manner and reduce the cost spent on recruiting patients only to have them not complete their colonoscopy, researchers needed a way to track patients' colonoscopy appointment statuses. An EHR algorithm could do this and generate a list of patients scheduled for a colonoscopy meeting specific inclusion and exclusion criteria.
The purpose of this study was twofold: 1) to validate and refine the logic of a newly created EHR algorithm for identifying potentially eligible patients for BestFIT study recruitment, and 2) to illuminate the possible pitfalls in constructing an EHR algorithm.
Methods
Institutional Review Board approval was obtained from the University of Iowa. This study was conducted in the Department of Family Medicine of an academic center with one clinic on campus and 4 clinics off campus, serving 112,000 individual patients of all ages. Through the Digestive Health Center, approximately 3600 screening and surveillance colonoscopies are performed each year. With the goal of maximizing the number of patients that could be recruited, researchers and information technology (IT) staff developed the rules for identifying potential subjects in the Epic EHR software, using the Epic Reporting Workbench, which allows users to pull data in real time (Epic Systems, Verona, WI). The process with IT staff started during a pilot study, 7 months before the BestFIT study was funded.
To be eligible for the BestFIT study, patients had to be 50 to 85 years of age and scheduled for a screening or surveillance colonoscopy. Those scheduled for a screening colonoscopy were asymptomatic patients testing for the presence of colorectal cancer or polyps with no history of colon cancer, polyps, and/or gastrointestinal disease. Those scheduled for a surveillance colonoscopy were asymptomatic patients at an interval of less than the standard 10 years from the last colonoscopy, due to personal findings of cancer, polyps, or gastrointestinal disease on a previous examination. Seventy-one percent of the subjects were scheduled for screening colonoscopy and 29% for surveillance colonoscopy. Patients with familial polyposis syndromes, ulcerative colitis, Crohn's disease, a personal history of colorectal cancer, or active rectal bleeding were to be excluded; patients with para- or quadriplegia, dementia, or severe psychiatric issues were also excluded. These requirements were expressed in the Epic algorithm through "criteria," which are the eligibility filters of the data query. In addition to the Age and ICD-10 code criteria described above, administrative criteria denoting appointment scheduling events, type of procedure, place of procedure, and the date the appointment was made were used to identify appropriate patients (Table 1).
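To illustrate how such criteria translate into a prescreening filter, a sketch follows. The field names, value strings, and the problem-list code set are hypothetical stand-ins (only the rectal-bleeding codes K62.5, K92.1, and R19.5 come from the report definitions below); the production logic lived in Epic Reporting Workbench criteria, not in code like this.

```python
# K62.5/K92.1/R19.5 are the rectal-bleeding codes named in the report
# definitions; the problem-list prefixes below are illustrative examples
# (Crohn's, ulcerative colitis, history of colorectal cancer).
RECTAL_BLEEDING = {"K62.5", "K92.1", "R19.5"}
PROBLEM_LIST_EXCLUSIONS = ("K50", "K51", "Z85.038")

def prescreen(patient):
    """Sketch of the FIT Daily Report eligibility filter (not Epic code).
    `patient` is a dict with hypothetical field names."""
    if not 50 <= patient["age"] <= 85:
        return False
    if patient["procedure"] not in {"screening colonoscopy",
                                    "surveillance colonoscopy"}:
        return False
    # Recent-encounter rectal-bleeding codes exclude the patient.
    if RECTAL_BLEEDING & set(patient["recent_encounter_icd10"]):
        return False
    # Problem-list exclusions, matched by code prefix.
    if any(code.startswith(PROBLEM_LIST_EXCLUSIONS)
           for code in patient["problem_list_icd10"]):
        return False
    return True
```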
The second component of the Epic algorithm was a display of the search results, whose variables are exported as a Microsoft Excel comma-separated values file (referred to as Output). The Output included patient medical record number, demographic information, and information pertaining to the appointment. A full list of Output variables and their definitions can be found in Table 2.
The Epic algorithm was developed and refined through 3 iterations to retrieve a list of patients who met the eligibility criteria. The first iteration was developed for a pilot for the BestFIT study; consecutive iterations improved on this baseline. An Epic FIT Daily Report was developed to be manually triggered every business day to capture all colonoscopy appointments made on the previous business day. This FIT Daily Report served as a prescreening and provided a list of potentially eligible participants who then underwent a final manual chart review using additional criteria the Epic report could not accurately capture. Manual review included criteria such as reason for colonoscopy and time since last colonoscopy. See Appendix A for an in-depth discussion of the manual review process.

Table footnotes:
*Address was a single field in the first iteration. Subsequent iterations imported discrete fields.
†The ICD-10 codes associated with rectal bleeding (K62.5, K92.1, and R19.5) were found in the encounter diagnoses from encounters that occurred in the past 60 days and grouped together to create a single variable, Rectal Bleeding. These were distinct from the ICD-10 codes criterion, which was designed to search the problem list.
‡In Epic, a unique Contact Serial Number (CSN) is generated for each new appointment that is made for an Order. The first CSN to be generated is designated as the Appointment Serial Number (ASN), which serves as the reference number connecting all subsequent appointment changes that occur for that Order.
As an accompaniment to the EHR algorithm, researchers, with the help of another IT group, developed a custom candidate and participant tracking application in FileMaker Pro (Claris International Inc., Santa Clara, CA) to process the Output from the FIT Daily Report (referred to as the Tracking Database). When Output from the FIT Daily Report was imported into the Tracking Database, it performed additional critical functions for enhancing subject recruitment, such as marking candidate eligibility, prompting recruitment mailings to eligible patients when they fell within the appropriate recruitment time frame, and automatically updating the appointment information when appointment dates changed. Without the Tracking Database, the volume of patient records generated on a daily basis would have resulted in a much more labor-intensive patient review and recruitment process, as the scheduling status updates would have had to be tracked manually in a spreadsheet before recruitment activities could resume.
By the time EHR algorithm development was complete, 3 distinct algorithms had been created in Epic to comprehensively cover the complex task of tracking scheduling events throughout the 5-year study: a FIT Daily Report, a FIT Daily Cancellations Report, and an All Appointments Report. The FIT Daily Report and FIT Daily Cancellations Report were developed to be run every business day; the All Appointments Report captured all existing appointments for the upcoming 6 months and was run once at the start of the BestFIT study to provide a data bank that the 2 daily reports would update. The FIT Daily Report, its criteria, definitions, and parameters are described in Table 1.
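The division of labor among the 3 reports can be pictured as a simple keyed merge. The sketch below is a hypothetical Python rendering of that bookkeeping (the actual Tracking Database was built in FileMaker Pro); the record structure and field names are assumptions.

```python
# Data bank keyed by Appointment Serial Number (ASN), which in Epic
# links all later appointment changes (new CSNs) back to one Order.

data_bank: dict[str, dict] = {}

def seed(all_appointments_report: list[dict]) -> None:
    """Run once at study start with the All Appointments Report."""
    for rec in all_appointments_report:
        data_bank[rec["asn"]] = rec

def apply_daily(daily_report: list[dict], cancellations: list[dict]) -> None:
    """Run every business day with the two daily report Outputs."""
    for rec in daily_report:          # appointments made yesterday
        data_bank[rec["asn"]] = rec   # insert, or overwrite with newest CSN
    for rec in cancellations:         # appointments cancelled yesterday
        if rec["asn"] in data_bank:
            data_bank[rec["asn"]]["status"] = "cancelled"
```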
Data Analysis
Each criterion for the Epic algorithm was tested for accuracy through a validation process. The development of the FIT Daily Report went through 3 iterations with new iterations built when researchers identified mistakes with the existing version.
After the algorithm was built and run for the first iteration, it was run again, subtracting each criterion one at a time to verify its accuracy.
Second and third iterations were tested by adding new or revised criteria to the algorithm one criterion at a time and testing their accuracy. Within each criterion, parameters were set using relationship commands to specify whether values were to be included or excluded. Examples of relationship commands for inclusion included "equal to," "greater than or equal to," and "less than or equal to." For exclusion criteria, relationship commands included "does not contain" and "does not exist." Results were verified using 2 methods. The first was to calculate and review descriptive statistics to see whether the Output values fit the conditions set in the criteria (SPSS version 25, IBM, Armonk, NY). For example, Age ranged from 50 to 85 years when the Age criterion was included in the FIT Daily Report; when the Age criterion was removed from the algorithm, the range was 18 to 90 years, verifying that the Age criterion was working correctly. Verification using descriptive statistics was appropriate for all variables except ICD-10 codes. To test the accuracy of the ICD-10 codes criterion, problem lists were reviewed manually to verify that those who were ineligible were excluded. This was because the problem list could contain anywhere from zero to as many as 20 to 30 diagnoses, depending on the patient; it was not practical to have the Output file list all the ICD-10 codes in the problem list for each subject (contact the authors for a complete list of ICD-10 codes).
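The descriptive-statistics check is straightforward to script. The fragment below is an illustrative equivalent, in Python/pandas, of what the authors did in SPSS; the column name "Age" is an assumption about the Output file layout.

```python
import pandas as pd

def check_age_criterion(output_csv: str) -> None:
    """Verify the Age filter: with the criterion on, every Output row
    should fall in the 50-85 range (with it off, the original runs
    observed 18-90)."""
    df = pd.read_csv(output_csv)
    lo, hi = df["Age"].min(), df["Age"].max()
    assert lo >= 50 and hi <= 85, f"Age criterion leak: range {lo}-{hi}"
```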
Before the start of the BestFIT study, the research team evaluated and verified the algorithm. These original runs were conducted throughout this period, with 1 to 2 weeks spent verifying each criterion; this was enough time to generate sufficient patients scheduled for a colonoscopy for a thorough validation. For this article, researchers recreated the steps taken for the original runs and retroactively ran the FIT Daily Reports in a much more condensed manner (referred to as retroactive runs). Instead of running the iterations consecutively with multiple days spent on each, all iterations (first, second, and third) were run simultaneously. To ensure enough data were present to test that each criterion was working correctly, the retroactive runs were executed over the course of 3 days, referred to as days 1, 2, and 3, respectively (Table 3). Column numbers varied by day depending on the number of colonoscopies that had been scheduled in the gastrointestinal clinic; row numbers varied depending on the criteria being tested.
Results
Each of the 3 iterations successfully built on one another to identify the eligible patients for the BestFIT study. The first iteration served as the baseline where researchers identified the need to remove prisoners and patients living in assisted care facilities and improve the algorithm for generating eligible subjects for the study. The second iteration allowed for more accurate targeting of the desired population by applying filters for ICD-10 codes and expanding Appointment Procedures and Encounter Departments. Most importantly, the Appointment Made on Date filter introduced in the second iteration made it so that no scheduling changes would fall through the cracks while reducing the number of patients that required daily manual review in the Tracking Database. Further improvements were made in the third iteration to prevent any loss in potential candidates for the BestFIT study by extending the Date Range, developing a second identical Report that replaced Appointment Made on Date with Appointment Cancelled Date, and adding the Output elements Appointment Serial Number (ASN) and Contact Serial Number (CSN) to enable the Tracking Database to automatically update appointment changes.
The variation in the numbers within each iteration (ie, column) denotes the changes brought on by the exclusion or inclusion of a criterion, but there were cases where the numbers showed no change. Day 2 had 2 instances where the number of patient records did not vary between criterion tests. The first was in the first iteration of day 2: the numbers were identical when the Appointment Status and Age criteria were removed from the algorithm, respectively (n = 276). Researchers were able to verify that this was coincidental. The second was when Contact Type was revised in the third iteration of day 2; here, researchers could only surmise there had been no appointments made under the "Hospital Encounter" code, as adding this to Contact Type did not change the numbers on that day (n = 14). Because of this second issue, only days 1 and 3 provide sufficient data to verify all steps of the iterations. For simplicity, we only discuss the outcome for day 1, shown in Table 3.
First Iteration
The first iteration included 5 criteria: Age, Appointment Procedure, Appointment Status, Encounter Department, and Date Range; each was removed one at a time to verify its effect (Table 3). Finally, removing the Encounter Departments criterion resulted in the FIT Daily Report timing out and yielding an error with no results. The Report timed out because the absence of an Encounter Department made it take up too many resources in the Epic system. To verify the Encounter Department criterion, researchers therefore substituted an unrelated department in the field instead of removing the criterion altogether. When the Encounter Department was changed to a noncolonoscopy procedure unit, there were zero results, as expected.
Second Iteration
The second iteration tested the following changes: Patient Type was added to the report criteria to exclude individuals residing in prisons; Appointment Procedures and Encounter Departments were expanded to include more potential candidates; ICD-10 codes were added to exclude ineligible patients; and Appointment Made on Date was included to improve the definition of the parameters of the Report. Appointment Status was removed, as it was not capturing all the changes that could occur with an appointment. In the second and third iterations, criteria were tested one at a time, cumulatively.
With Appointment Status removed, 270 records comprised the baseline for the second iteration. Adding Patient Type removed 9 records classified as residing in a correctional facility, resulting in 261 records. Adding 10 more Appointment Procedures increased the number to 415 (Table 4). It was important to discern the type of procedure being conducted during the colonoscopy, as some colonoscopies were for stool transplant or completed through a stoma, which were not appropriate for this study. Using the Boolean operator "OR" between the ICD-10 code exclusions resulted in no change in numbers, but replacing it with the correct operator "AND" successfully reduced the number to 336 (see Appendix B for more detail). Expanding the number of Encounter Departments to 3 departments increased the resulting list to 657; restricting the Contact Type to just Appointments reduced the number to 530. This reduction was surprising, as we expected the number to stay the same with the addition of the "Appointment" Contact Type. Upon further examination, we learned another value, "Hospital Encounters," met our criteria and needed to be added, which was done during iteration 3.
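The OR-versus-AND behavior follows directly from De Morgan's law: each exclusion filter is a negation ("problem list does not contain code X"), and negations must be joined with AND to reject a patient who carries any one excluded code. A minimal illustration (hypothetical, not Epic syntax):

```python
def keep_wrong(problems: set, excluded: set) -> bool:
    # OR between "does not contain" terms: True whenever at least one
    # excluded code is absent, so it only rejects patients who carry
    # EVERY excluded code -- hence the "no change in numbers."
    return any(code not in problems for code in excluded)

def keep_right(problems: set, excluded: set) -> bool:
    # AND between "does not contain" terms rejects on any single match.
    return all(code not in problems for code in excluded)

assert keep_wrong({"K50"}, {"K50", "K51"}) is True    # wrongly kept
assert keep_right({"K50"}, {"K50", "K51"}) is False   # correctly excluded
```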
Finally, introducing Appointment Made on Date had the biggest impact on reducing the number of patients to review. With the algorithm limiting the output to just the patients who had made an appointment on the prior business day, the number went down to 15, which was the number of colonoscopy appointments made on a single day.
Third Iteration
After learning from gastroenterology scheduling staff that some appointments were not being captured because of an overlooked Contact Type value, researchers ran a third iteration in which the Contact Type "Hospital Encounter" was added. This revision yielded an increase from 15 to 21 records. Extending the Date Range from 12 weeks to 4 years increased the final output to 38 patient records, thus capturing appointments that were scheduled, cancelled, and rescheduled.
In summary, the algorithm was refined through 3 iterations. Sometimes this was done by adding criteria values missed in previous iterations, which increased the number of potentially eligible patients (e.g., the second-iteration expansion of Appointment Procedures and Encounter Departments); at other times it was done by filtering out unwanted characteristics (e.g., Patient Type), thereby increasing the specificity of the algorithm (Table 3). By the third and final iteration, the sensitivity and specificity of the algorithm had been maximized. A total of 14 reports were generated each day (not counting the first-iteration Encounter Departments report that timed out), with the number of patients per report listed in the rows. All patient records in every report were reviewed manually for accuracy of the individual criterion being tested.
Discussion
In our literature review, no studies were found in which EHR reports were combined with patient appointment information and refined to the point where reports were built specifically around appointment cancellations and reschedules to aid in study recruitment. This study is unique in that, while administrative data and patient data are routinely used for research recruitment, adding the appointment date and automatically tracking the cancellation and rescheduling of appointments is novel. Other researchers have attempted it, but through manual reviews of the appointment updates rather than through an algorithm that includes appointment rescheduling and cancelling. 4 Capturing and updating appointment cancellations and reschedules optimizes subject recruitment compared with recruitment based solely on scheduled appointments. The appointment date and the cancelling or rescheduling of the appointment date were extremely important to include in the algorithm, as the BestFIT study invited patients to participate anywhere from 28 to 56 days out from the scheduled procedure. This time frame allowed for mailing of the informed consent, receipt of the signed consent, mailing of the occult blood tests, and receipt of the completed tests before the colonoscopy.
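The window logic itself is a small date calculation that must be re-run whenever an appointment moves. A sketch, assuming procedure dates are available as plain calendar dates:

```python
from datetime import date

def in_recruitment_window(procedure_date: date, today: date) -> bool:
    """BestFIT invited patients 28 to 56 days ahead of the colonoscopy;
    a rescheduled procedure date changes the answer, so this must be
    re-evaluated after every appointment update."""
    days_out = (procedure_date - today).days
    return 28 <= days_out <= 56

# Example: an appointment 40 days out falls inside the mailing window.
assert in_recruitment_window(date(2021, 3, 12), date(2021, 1, 31))
```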
Having an accompanying Tracking Database was crucial to using the appointment change information. The Tracking Database processed the ASN and CSN to update changes to appointments so that the BestFIT study could dynamically adjust timeframes for subject contact. This ability to track the changes to appointments is especially useful for studies where the timing of the medical appointment is critical to the study, as it was in the BestFIT study, since FITs needed to be collected ahead of the colonoscopy prep and at most 4 months out from the colonoscopy. It also allows researchers to recruit more subjects in a shorter time period.
Researchers went through multiple cycles of communication with an expert IT application developer to create the queries for identifying potential participants for the study. This was time-consuming and, similar to what other researchers have found, the queries were prone to errors that surfaced in the validation process. 1 Generally, researchers will know the demographic and diagnostic parameters of the data they seek from the EHR but may be less familiar with the many administrative criteria, such as appointment scheduling, identifying the departments in which procedures take place, and setting correct date ranges, all of which need careful consideration to recruit the desired population. Such was the case for this study, and errors generally occurred due to lack of knowledge of administrative criteria and how they were used in the hospital setting (e.g., Appointment Procedures, Encounter Department, and Contact Type). For instance, if we had not pre-emptively restricted the parameters of Contact Type in the second iteration to include only "Appointments," the wider net would have captured the "Hospital Encounters." We blindly added Contact Type to imitate the function of the Appointment Status criterion it replaced, and the significance of "Hospital Encounters" was not apparent to us until we noticed that some rescheduled appointments were missing.
EHRs capture data on an open cohort of patients that have entered the hospital system. Part of the rigor of this study was the investigation of each data field to ascertain accuracy and comprehensively capture the data necessary for recruitment. Since EHR data are neither standardized nor intuitively structured, we found, similar to other researchers, the need to work closely with IT staff to carefully review and validate the criteria chosen for algorithm development. 10 In fact, it was IT staff who discovered the ASN and CSN that allowed the tracking of appointment schedule status; researchers would not have known to look for such a variable. Future researchers may have a more streamlined experience developing their algorithms if they take time to consult and plan carefully with appropriate IT and administrative staff to gain more than a cursory understanding of criteria and their values.
Many studies suffer from difficulty with patient recruitment or recruitment delays. 4,[11][12][13] Patient recruitment is essential for study success; without it, studies can be delayed, with the potential loss of funding. 14 The time and effort invested in the recruitment algorithm for the BestFIT study paid off in that nearly all potentially eligible patients were invited to participate, as indicated by careful validation of the iterative runs. This due diligence enhanced the actual recruitment process by excluding patients who were ineligible or who had changed their colonoscopy appointment date to beyond the desired time frame.
Intuitively, it would follow that a thorough algorithm would make manual review of each eligible record unnecessary. However, it was impossible to build the algorithm to meet all the eligibility specifications due to lack of uniformity, completeness, and accuracy of the patient care data captured in the EHR. For instance, there was lack of uniformity in the terminology used by providers in noting the type of colonoscopy, some patients did not have records of past colonoscopies, and some clinic notes did not reference the most recent colonoscopy reports.
Programmer inexperience led to initial errors in excluding ICD-10 codes for ineligible patients, which prompted us to check each criterion diligently.
Conclusion
Although labor intensive, the time and effort put into the development of an EHR algorithm proved successful for recruitment with little effort to run each day. Once the BestFIT study was underway, it saved researchers' time by excluding many ineligible individuals before the manual review. With the help of IT, we refined, tested, and validated the algorithm. By including the capture of cancelled and rescheduled colonoscopy appointments, we ensured we were not missing potential participants. Fortunately, time was available for this development as the research team found out that the BestFIT study was likely to be funded 8 months before the actual start date. Different skill sets of the research team (nurse, physician, Epic database expert, and software developers) facilitated the pursuit and success of this endeavor. This work expands on what has been done in the literature, demonstrating that adding cancellations and reschedules of colonoscopies will optimize the potential number of eligible patients for recruitment. | 2021-01-16T14:07:36.762Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "8decf8b0c870d781516f4d8682604b1b0018761c",
"oa_license": null,
"oa_url": "https://www.jabfm.org/content/jabfp/34/1/49.full.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "75f8dafab0728eefe3a650d9251b613d2448b981",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248103903 | pes2o/s2orc | v3-fos-license | Experimental investigation on the hydrogen embrittlement characteristics and mechanism of natural gas-hydrogen transportation pipeline steels
Hydrogen blended with natural gas is one of the best ways to transport hydrogen at large scale; however, the pipeline steels used for transporting natural gas are at risk of hydrogen embrittlement. Therefore, the hydrogen damage mechanism and resistance properties of different pipeline steels should be carefully examined to select suitable materials for this task. The common pipeline steels X42, X52, X70, and AISI 1020 are taken as research objects. Their mechanical properties and hydrogen absorption properties in a hydrogen environment are investigated to further explore the factors affecting the hydrogen embrittlement of the materials. Dynamic slow strain rate tensile test results show that these materials exhibit varying hydrogen embrittlement sensitivity in a hydrogen environment: AISI 1020 has the highest hydrogen embrittlement susceptibility, followed by X70, while X42 presents the lowest. Generally, hydrogen embrittlement behaviour is intensified by increasing the current density. As the current density grows, the fracture mode of the pipeline steels transforms from ductile fracture to quasi-cleavage fracture and finally to cleavage fracture. The hydrogen embrittlement fracture of the tensile specimen results from the action of HEDE and HELP in different zones. TDS test results indicate that the C and Mn contents significantly influence the hydrogen solubility in the metals.
Introduction
Hydrogen has the advantages of a high combustion value and no pollution, making it an excellent energy carrier for energy regeneration [1]; nevertheless, hydrogen is difficult to store, and it is flammable and explosive. Therefore, how to realize large-scale, safe hydrogen transportation has become an urgent problem. Some scholars have proposed that hydrogen-compressed natural gas (HCNG) can be transported directly by blending hydrogen into existing natural gas pipelines. The method saves cost and has the immediate benefit of decreasing the amount of CO2 produced by burning the gas mixture, so it is currently considered the best way to achieve large-scale hydrogen transportation [2]. It is also an effective way to solve the problem of absorbing large-scale wind and solar power. Current research shows that the volume fraction of hydrogen in HCNG transportation pipelines is usually controlled within 20% around the world, and the corresponding operating pressure of the transportation pipelines is lower than 5.38 MPa. For most in-service HCNG transportation pipelines, the volume fraction of hydrogen is less than 10%, and the operating pressure is lower than 7.7 MPa. The reason for the strict requirements on hydrogen volume fraction and pipeline operating pressure is that the presence of hydrogen may increase the risk of premature failure of natural gas pipelines from hydrogen damage. In a hydrogen-containing environment, the mechanical properties of steel, such as ductility and toughness, gradually deteriorate, even leading to sudden failure of the material [3,4].
Pipeline steels for long-distance transportation of natural gas must be low-alloy, high-strength steels with high strength, high toughness, excellent workability, and weldability. It is generally believed that the higher the strength of a steel, the greater the possibility of hydrogen embrittlement [5]. Owing to their excellent mechanical properties, steels such as X70, X42, and X52 are widely used in long-distance natural gas pipelines. Among them, X70 advanced pipeline steel is commonly used in China and Australia for long-distance natural gas pipelines [6], and AISI 1020 steel is usually used for urban natural gas pipelines. In the United States and Europe, API 5L X42 and X52 are recommended in the ASME B31.12 code, based on extensive experience with mixed natural gas and hydrogen transportation [7]. After years of research, several theories have been proposed to explain the phenomenon of hydrogen embrittlement in materials, including the hydrogen enhanced decohesion mechanism (HEDE) [8] and hydrogen enhanced local plasticity (HELP) [9]. However, due to the lack of sufficient test and operating data, the performance degradation mechanism of steel pipelines in contact with a hydrogen environment is still unclear; therefore, accidents may occur if existing pipelines are directly employed to transport HCNG.
At present, to simulate the hydrogen environment experienced by materials in service, gaseous hydrogen charging and electrochemical hydrogen charging are often used in hydrogen embrittlement studies. Although gaseous hydrogen charging under high pressure is closer to the service state of pipeline steel, such experiments are very difficult to conduct because of their high cost and stringent safety requirements. By comparison, electrochemical hydrogen charging is usually adopted to simulate a hydrogen environment and test the hydrogen embrittlement resistance of materials because of its high charging efficiency and convenient operation [10][11][12]. As long as the hydrogen fugacity of electrochemical and gaseous hydrogen charging is the same, electrochemical hydrogen charging is equivalent to gaseous hydrogen charging at a particular temperature and pressure [13].
Many scholars have scrutinized the hydrogen embrittlement characteristics of low-carbon steel, austenitic stainless steel, and pipeline steels by employing the slow strain rate tensile (SSRT) test and the electrochemical permeation method. Hu et al [14] conducted SSRT tests on samples of 2.25Cr-1Mo steel used for hydrogenation reactors and found that temper embrittlement gives rise to a reduction in the ductility of the material. Depover et al [15] investigated the effect of hydrogen on the mechanical properties of generic Fe-C alloys by tensile tests and reported that the ductility of bainitic and martensitic materials decreased by 20%, whereas the ductility of pearlitic and ferritic materials was reduced by about 50%. Martin et al [16] studied the effect of alloying elements on a material's susceptibility to hydrogen embrittlement through tensile tests, magnetic response measurements, and thermodynamic calculations. Their results showed that the presence of hydrogen has little effect on the strength of the tested materials, while decreasing austenite stability causes increasing ductility loss. Yang et al [17] researched the effect of hydrogen on the fracture behavior of Incoloy alloy 825 by means of SSRT. Their results indicated that the ductility of the Incoloy alloy significantly decreased as the hydrogen charging current density increased during hydrogen precharging. In the above tests, samples were all pre-charged with hydrogen before the SSRT was carried out, so these findings can only reflect the effect of internal hydrogen on pipeline steel. To study the effect of ambient hydrogen, it is necessary to use the dynamic hydrogen charging SSRT method, which couples the permeation of ambient hydrogen with the stress field and is closer to the actual service conditions of pipeline steels.
Thermal desorption spectroscopy (TDS) is another important approach for examining the hydrogen embrittlement mechanism of materials. By analyzing the thermal desorption process and characteristics of hydrogen in a material, the distribution and content of hydrogen can be calculated, and the interaction between hydrogen and the hydrogen traps of the material can be determined. Takahagi et al [18] used the TDS method to study hydrogen-induced breaking of surface free bonds on semiconductor surfaces. Hadam et al [19] employed the TDS method to compare the distribution of hydrogen after pre-charging in industrial pure iron and high-carbon steel; their results showed that the distribution of hydrogen along the thickness direction of the material is not uniform. By means of TDS tests, Escobar et al [20] analyzed S550MC samples after electrochemical hydrogen charging and found that the charged samples had two desorption peaks, with peak temperatures concentrated at 70°C and 140°C, respectively; with increasing hydrogen release time, the height of the low-temperature peak gradually decreased. Lemus et al [21] investigated the influence of microstructure on hydrogen trapping in Cr-Mo type steels through electrochemical permeation tests, TDS, SEM, and TEM analysis. Their results demonstrated an extra peak below 200°C, which was attributed to hydrogen trapping by vanadium carbide. Wang et al [22] studied the effect of quenching-tempering treatment on the hydrogen embrittlement resistance of a reactor pressure vessel steel and found two hydrogen states in the hydrogen desorption profiles, with the high-temperature peak corresponding to irreversible hydrogen. Although the TDS method has been used to study the mechanism of hydrogen embrittlement, the dissimilarities in microstructure and chemical composition of various materials mean that hydrogen absorption properties, including the hydrogen solubility and hydrogen permeability of each steel, are not the same [23,24]. This means that the theory of hydrogen damage to pipeline steel is not yet complete, and further research and verification are still required.
To further reveal the hydrogen embrittlement mechanism of materials and to find suitable hydrogen-resistant steel pipelines for conveying natural gas blended with hydrogen, four typical pipeline steels, X42, X52, X70, and AISI 1020, are chosen as research objects to examine the variation of their mechanical properties in a hydrogen environment. The dynamic hydrogen charging SSRT method is employed to comprehensively test the hydrogen resistance of the materials, and the mechanical performance degradation of each material is then carefully derived and displayed. Additionally, TDS is exploited to characterize the materials' capability to capture hydrogen. Combined with SEM images of fractured tensile samples, the hydrogen embrittlement susceptibility of these pipeline steels is discussed in depth. The results of this paper reveal the performance degradation mechanism of pipeline steel in a hydrogen environment more realistically and provide a technical basis for the rational selection of HCNG pipeline steel.
Test facility and method
2.1. Test equipment and method

Figure 1 shows the SSRT test system in a dynamic hydrogen charging environment. It is mainly composed of three parts: an SSRT machine, an electrochemical workstation, and a hydrogen environment box. The tensile sample and the platinum electrode serve as the cathode and anode, respectively. The test method adopts constant-current polarization. The specimen is charged with hydrogen using a CS 350 electrochemical workstation, and the properties of the specimen are tested with an MFDL 100 slow strain rate tensile stress corrosion tester. When performing a tensile test on a hydrogen-charged sample, the hydrogen diffusion rate should be similar to the material strain rate to allow hydrogen to interact fully with dislocations; hence, the slow strain rate tensile test is commonly implemented. The moving rate of the crosshead is 0.1 mm min−1, and the corresponding strain rate is 5.4×10−5/s. After the tensile test, the fractured zone of the specimen is cut off and observed with the SEM. The hydrogen desorption curve is measured with a TDS analyzer, and the hydrogen concentration of each steel is then calculated.
Both the SSRT and TDS specimens are taken along the pipe's axial direction. Figure 2 shows the geometry of the SSRT specimen; the thickness of the sample is 3 mm. The surfaces of the SSRT specimen are polished with 1000-grade SiC papers and then washed with acetone. The tensile specimens are sealed with silicone rubber except for the working surface to be charged with hydrogen.
In the SSRT experiment, equilibrium conditions are not established throughout the specimen. Hydrogen is charged while the specimen is stretched (under applied load), the two processes being essentially synchronized, and hydrogen charging continues until the sample breaks. This is also called the dynamic hydrogen charging process. Dynamic hydrogen charging is adopted to investigate the influence of external hydrogen on the properties of the tensile specimens. The speed of the crosshead is set to 0.1 mm min−1. The hydrogen charging electrolyte is 0.5 mol L−1 H2SO4 + 1.85 mmol L−1 Na4P2O7. Research has shown that metallic materials have a critical current density above which irreversible hydrogen damage occurs under electrochemical hydrogen charging; for pipeline steel, the critical value is between 10 and 30 mA cm−2 [14]. To ensure the completeness of the experimental results, the hydrogen charging current density in the test is set in the range of 0∼20 mA cm−2, namely 0, 1, 2.5, 5, 10, and 20 mA cm−2. The zero current density corresponds to samples subjected to a slow strain rate tensile test in air at room temperature, serving as the uncharged control group. To improve the accuracy of the results, three tests are carried out at each current density, and the average of the three tests is taken when calculating the tensile properties of the material. The raw data obtained from the tensile test are the tensile load as a function of crosshead displacement. To make the presentation of material performance more universal, the load-displacement curve is converted into the nominal stress-strain curve of each material through calculations and corrections.
The TDS experimental device is mainly composed of four parts: a vacuum high-temperature test environment box, a mass spectrometer, a pump system, and a data acquisition system. The ultrahigh-vacuum high-temperature test environment box consists of a vacuum chamber, a sample loading chamber, and a working platform integrating the components of the system. With resistance-wire heating, the heating rate of the sample stage in the vacuum chamber is automatically controlled by a program. In the TDS test, the specimen is a smooth round bar of dimensions 25 mm × Φ5 mm. Because the TDS test places high demands on the surface finish of the material, the sample is polished step by step with 200# to 2000# sandpapers. After polishing, the TDS specimens are immersed in 0.5 mol L−1 H2SO4 + 1.85 mmol L−1 Na4P2O7 aqueous solution and charged with hydrogen at a current density of 1 mA cm−2 for 48 h. To prevent hydrogen from escaping, the TDS test is conducted immediately after hydrogen charging. The samples are heated from room temperature to 700°C in the TDS vacuum chamber at a heating rate of 100°C/h. The hydrogen escape rate and concentration are then measured by the mass spectrometer.
Test materials
The test materials are taken from different gas transmission pipes, including X42, X52, X70, and AISI 1020. To reveal the fundamental reasons for the observed differences in hydrogen resistance among the various pipeline steels, the chemical composition and metallographic structure of the pipeline materials are systematically examined. The chemical compositions of all steels are measured by spark source atomic spectrometry based on the Chinese standard GB/T 9711-2017, and the measured compositions are provided in table 1. After cutting from the pipe base material, the metallographic specimens are prepared by grinding, polishing, and finally etching the polished surface with 4% nitric acid alcohol. The microstructure of each material is then observed under the optical microscope (OM), as demonstrated in figure 3.
In figure 3, the length (horizontal) direction of each sub-picture corresponds to the circumferential direction of the pipeline, and the width (longitudinal) direction corresponds to the thickness direction of the pipeline. It can be seen from figure 3 that the microstructures of X42, X52, and AISI 1020 steel are all composed of pearlite (P) and ferrite (F). Among them, AISI 1020 steel has the largest ferrite grain size, X52 steel is intermediate, and X42 steel has the smallest, owing to the differences in alloying element content among the steels. The metallographic structure of X70 is composed of granular bainite (B) and martensite (M), as displayed in figure 3(c).
Dynamic hydrogen charging SSRT analysis
The tensile test is widely employed for testing the mechanical performance of materials. In the present study, the strength and plasticity indexes of each material, including ultimate tensile strength (UTS), yield strength (YS), elongation after rupture (EL), and reduction of area (RA), are measured by dynamic hydrogen charging SSRT. During the SSRT test, the raw data obtained are the tensile load as a function of crosshead displacement; as noted above, this relationship is transformed into a stress-strain relationship to make the presentation of material performance more universal. To ensure that the engineering strain obtained during stretching is reliable, corrections are performed in transforming the load-displacement curve to the stress-strain curve, focusing on removing the contribution of the elastic deformation (displacement) of the fixture from the total deformation (displacement) of the specimen. For materials without an obvious yield step (such as the low-alloy steel X70), the stress value corresponding to 0.2% plastic strain is taken as the yield strength.
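A minimal sketch of this conversion is given below. The specimen width, gauge length, and fixture compliance are assumptions for illustration (the text gives only the 3 mm thickness; the full geometry is in figure 2), and the compliance correction is reduced to a single constant.

```python
import numpy as np

THICKNESS_MM = 3.0       # stated in the text
WIDTH_MM = 6.0           # assumed gauge-section width
GAUGE_LENGTH_MM = 30.9   # assumed; 0.1 mm/min over ~31 mm reproduces
                         # the quoted strain rate of 5.4e-5 /s
COMPLIANCE_MM_PER_N = 1e-5   # assumed elastic response of the fixture

def to_nominal_stress_strain(load_n: np.ndarray, disp_mm: np.ndarray):
    """Convert raw load-displacement data to nominal stress-strain."""
    area_mm2 = THICKNESS_MM * WIDTH_MM
    stress_mpa = load_n / area_mm2                  # N/mm^2 == MPa
    specimen_disp = disp_mm - COMPLIANCE_MM_PER_N * load_n
    strain = specimen_disp / GAUGE_LENGTH_MM        # dimensionless
    return stress_mpa, strain
```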
The effect of hydrogen charging on the tensile behavior of the materials is presented in figure 4. It can be seen that, except for X70 (figure 4(c)), the total elongation at fracture of the tensile samples after dynamic hydrogen charging is significantly lower than that of the samples without hydrogen charging. For instance, the total elongation at fracture of X42 is about 45% when stretched in air, while after hydrogen charging at various current densities it is in the range of 25%-35%. This shows that X42 has good plasticity without hydrogen charging and that hydrogen charging significantly reduces the total elongation at fracture. For X70, the total elongation at fracture when stretched in air is only about 20%, mainly owing to the poor plasticity of the material; the hydrogen concentration in the material is thus more likely to reach saturation after hydrogen charging. As a result, the total elongation at fracture of X70 does not change significantly with the current density of the SSRT test.
Additionally, it can be observed from figure 4 that the tensile curves of each material at various current densities basically overlap in the elastic deformation stage, indicating that dynamic hydrogen charging has an almost negligible influence on the mechanical properties of the material in the elastic stage. Upon reaching the plastic stage, however, the curves diverge, showing that dynamic hydrogen charging significantly influences the plasticity and strength of the material.

Figure 5 shows the effect of the current density on the plasticity and strength of the various pipeline steels. Compared with the YS before hydrogen charging, the YS of all four materials is generally enhanced after hydrogen charging, consistent with previous studies [15,25]. This is mainly due to the interaction of hydrogen dissolved in the interstitial sites of the steel with dislocations to form Cottrell atmospheres, which increase the resistance to dislocation motion [26]. After hydrogen charging, the UTS of X42, X52, and X70 are all enhanced because of the solution-strengthening effect of dissolved hydrogen. In contrast, the UTS of AISI 1020 is significantly reduced. This is because hydrogen reduces the grain boundary binding energy; the grain boundaries of AISI 1020 are particularly sensitive to hydrogen, so its ability to withstand deformation is dramatically lessened, causing dislocations to slip early. From a macroscopic point of view, this manifests as the tensile curve entering the necking stage prematurely. From the results in figure 5, it can be seen that although the tensile strength of the different pipeline steels tends to increase or decrease in the hydrogen environment, the overall change is not large. Within the range of hydrogen charging current densities tested in this paper (0∼20 mA cm−2), the maximum difference between the tensile strength under hydrogen charging and that in air is 1.7%, 6.4%, 3.7%, and 9.8% for X42, X52, X70, and AISI 1020, respectively. Therefore, the effect of hydrogen charging on the strength of pipeline steel can be considered small.

The influence of hydrogen charging on the plasticity indexes of the materials is broadly similar: as the current density rises, the EL and RA decrease. For each material, as the current density increases, the plasticity first drops significantly and then tends to level off, revealing that when the hydrogen concentration in the material approaches saturation, the degradation effect of hydrogen also tends to stabilize. Further observation shows that the variation in RA is more pronounced than that in EL after hydrogen charging. For a more systematic treatment of the engineering problem, the hydrogen embrittlement (HE) index I_A is defined as follows:

I_A = (A_0 - A_H) / A_0 × 100%

where A_0 and A_H are the RA of the uncharged and charged specimens, respectively. This index can be effectively exploited for analyzing the sensitivity of materials to hydrogen embrittlement.
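As a direct numerical illustration of this definition (assuming the RA values are expressed in percent):

```python
def he_index(ra_uncharged: float, ra_charged: float) -> float:
    """I_A = (A0 - AH) / A0 * 100, with A0 and AH the reduction of
    area (in %) of the uncharged and charged specimens."""
    return (ra_uncharged - ra_charged) / ra_uncharged * 100.0

# Example: charging that halves the reduction of area gives I_A = 50%,
# the benchmark value quoted later for X52 at 1 mA cm^-2.
assert he_index(80.0, 40.0) == 50.0
```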
The change in the HE index of each material with current density is shown in figure 6. The HE index of each material generally grows with increasing current density. Among the four pipeline steels, X42 shows the lowest HE index and AISI 1020 the highest.
To further reveal the performance degradation mechanism of the different pipeline steels in the presence of hydrogen, on the basis of the trends of mechanical properties with hydrogen charging current density obtained above, the scanning electron microscope (SEM) is used to observe and analyze the tensile fracture morphology of the pipeline steels under dynamic charging conditions. The essence of fracture is that the bonding force between atoms is destroyed, and plastic fracture proceeds by micropore aggregation. When the material deforms plastically under tension, local stress concentrations arise; the stress destroys the bonding force between atoms and first forms micropores, that is, crack sources. As plastic deformation continues, the micropores expand and merge under the triaxial stress concentration, finally forming dimples. Therefore, the larger the dimples, the stronger the local resistance of the material to instability and the better its plasticity.

Figure 7 presents the SEM fracture morphology of specimens tested in air. The fractures of all materials tested in air are characterized by dimples; the number, size, and depth of the dimples differ among materials, determined by their composition and structure. After hydrogen charging at a current density of 1 mA cm−2, the central zone of each material (figures 8(a), 9(a), 10(a), and 11(a)) exhibits a mixture of dimples, 'cleavage-like' facets, micropores, and tearing edges; this fracture mode is called 'quasi-cleavage' fracture. Compared with the central zone, the fracture morphology of the marginal zone (figures 8(b), 9(b), 10(b), and 11(b)) shows a more noticeable brittleness. The fractures in X42 and X52 show lamellar and stepped cleavage, the cleavage plane of X70 is relatively flat, while AISI 1020 presents the typical cleavage feature of a river-like pattern. The fracture in AISI 1020 consists of staggered cleavage planes with large secondary cracks and almost no dimples, indicating that significant brittle fracture occurs in AISI 1020 at a current density of 1 mA cm−2. When the current density increases to 10 mA cm−2, the central zone of each material (figures 8(c), 9(c), 10(c), and 11(c)) shows a quasi-cleavage morphology composed of a mixture of dimples and cleavage planes. For all materials, the marginal zone at 10 mA cm−2 (figures 8(d), 9(d), 10(d), and 11(d)) presents more severe brittle fracture than at 1 mA cm−2.
It is clear from the SEM images that the fracture mode of the materials changes significantly after hydrogen charging. In the present study, the whole fracture surface of the specimen can be divided into central and marginal zones, distinguished mainly by their fracture morphologies. Hydrogen is generated on the surface of the material during charging, and it takes time for hydrogen to permeate from the outside inwards; the hydrogen concentration therefore decreases gradually from outside to inside. Brittle fracture characteristics are apparent in the marginal zones, such as secondary cracks (figure 8(b)) or a river-like pattern (figure 11(b)). Because of the low hydrogen concentration in the central zone, it reveals certain ductile fracture characteristics, such as shallow dimples (figure 8(a)) and quasi-cleavage features (figure 9(a)). The essence of a dimple is the cavity formed at discontinuities in the matrix by dislocations during stretching. During cavity nucleation and growth, transverse shear accelerates the aggregation of cavities, forming shallow dimples, so the fracture in the central area presents a quasi-cleavage morphology with a certain residual toughness.

TDS analysis

Figure 12 shows the hydrogen desorption curves of the pipeline steels under study. There are two distinct hydrogen desorption peaks for each material. At low temperatures, the desorption peak of AISI 1020 is the highest, that of X52 is lower, and that of X42 is the lowest. The high-temperature peaks of X42, X52, and X70 occur at approximately the same temperature, while the temperature corresponding to the high-temperature peak of AISI 1020 is significantly lower than those of the other steels. The accumulated hydrogen can be readily calculated by integration; on this basis, the hydrogen concentrations for X42, X52, X70, and AISI 1020 are obtained as 0.28, 0.65, 0.65, and 1.63 ppm, respectively.
The peaks of hydrogen desorption curves in the TDS test are generated by hydrogen traps of various binding energies. If the microstructure of the material is complex, the solubility of hydrogen at each hydrogen trap differs significantly, yielding multiple peaks on the curve. As seen in figure 12, the curve for each carbon steel in this test presents two peaks. According to the literature [27], the hydrogen absorption peak at the ferrite-pearlite interface in carbon steel occurs at 116°C; hence, the low-temperature peak in figure 12 corresponds to the reversible hydrogen trap at the ferrite-pearlite interface. The area enclosed by the curve and the horizontal axis characterizes the hydrogen concentration, indicating that the grain boundaries of these pipeline steels differ significantly in hydrogen solubility. The hydrogen solubility decreases in the order AISI 1020, X52, X70, and X42. This finding confirms that the hydrogen concentration at the low-temperature peak is strictly related to the grain boundary between ferrite and pearlite. For low-carbon steels, the higher the carbon content, the higher the pearlite content; as a result, the grain boundary area between pearlite and ferrite also increases. This conclusion is also supported by the chemical compositions and microstructures of the tested steels. According to table 1, the carbon content of the materials decreases from AISI 1020 (0.19%) to X52 (0.13%) and then X42 (0.063%). Figures 3(a), (b), and (d) show that from X42 to AISI 1020 the pearlite content increases, the corresponding grain boundary area increases, and the solubility of hydrogen at the grain boundaries therefore increases. The metallographic structure of X70 differs from that of the other three materials, and its grain boundaries cannot be clearly defined; hence, X70 is not included in the above comparison. However, figure 12 shows that the temperature corresponding to the low-temperature peak of X70 is significantly higher than that of the other materials, mainly because X70 contains a large amount of martensite, resulting in a higher dislocation density. According to previous investigations [28], the binding energy of dislocations is commonly higher than that of the ferrite-pearlite interface; therefore, the hydrogen desorption temperature of dislocations is higher than that of grain boundaries.
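The integration behind the quoted concentrations is a simple area-under-the-curve computation. A sketch is given below, with the unit convention as an assumption (a desorption rate in ppm s−1 integrated over time yields ppm):

```python
import numpy as np

def hydrogen_content_ppm(time_s: np.ndarray, rate_ppm_per_s: np.ndarray) -> float:
    """Total hydrogen content as the area under the desorption-rate
    curve, computed by the trapezoidal rule (written out explicitly)."""
    dt = np.diff(time_s)
    mid = 0.5 * (rate_ppm_per_s[1:] + rate_ppm_per_s[:-1])
    return float(np.sum(dt * mid))
```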
According to figure 12, the high-temperature peak corresponds to the irreversible hydrogen traps at second-phase particles or impurities, commonly caused by hydrogen desorption from irreversible traps with large binding energies [28]. The irreversible hydrogen traps in steel are mainly alloy element compounds. According to the literature [29], MnS is the main harmful impurity in low-carbon steel and a strong irreversible hydrogen trap. As seen from table 1, Mn is the main strengthening element in the pipeline steels: about 1.26% in X42, 1.06% in X52, and 1.47% in X70, but only 0.54% in AISI 1020 steel. Figure 12 shows that the temperatures of the high-temperature peaks of X42, X52, and X70 are similar, while that of AISI 1020 is noticeably lower. It can therefore be concluded that the high-temperature peaks of X42, X52, and X70 are essentially caused by Mn. The high-temperature peak of X70 is the highest because it contains the most Mn. Although the Mn content in X42 is slightly higher than that in X52, the S content in X52 is twice that in X42, resulting in little difference between the high-temperature peaks of these two materials. Compared with the other three materials, the Cu content (0.14%) in AISI 1020 steel is higher. According to the study by Shi et al [30], dispersed fine Cu-rich phases in the matrix can act as beneficial hydrogen traps, helping to avoid localized high concentrations of hydrogen. Therefore, the high-temperature peak of AISI 1020 steel may be caused by hydrogen traps formed by Cu.
Discussion
Comprehensive analysis of the dynamic hydrogen charging SSRT results and the SEM fracture morphologies in section 3.1 shows that, owing to the high hydrogen concentration in the marginal zone, dislocations lead to the formation of cracks (for instance, the secondary cracks in figure 8(d)). Under the action of the hydrogen enhanced decohesion (HEDE) mechanism [8], brittle cleavage fracture occurs in the marginal zone. The crack then propagates gradually inward, significantly reducing the material's load-bearing capability. Under the action of the hydrogen enhanced local plasticity (HELP) mechanism [9], rapid tearing occurs in the central zone, forming a mixed morphology with both ductile and brittle features. As the current density grows, the degree of embrittlement of the fracture increases; the general trend is a morphological transformation from ductile fracture, through transitional quasi-cleavage fracture, to cleavage fracture.
The TDS results in section 3.2 show that the hydrogen solubility differs among the pipeline steels, and this difference is related to factors including the microstructure and elemental composition of the materials. For the three ferritic steels X42, X52, and AISI 1020, the low-temperature peaks on the TDS curve correspond to reversible hydrogen traps formed at the interface between ferrite and pearlite, and the higher the C content in the steel, the higher the hydrogen concentration at the grain boundaries. For the martensitic steel X70, the high-dislocation-density martensitic structure forms irreversible hydrogen traps, causing the hydrogen concentration to increase. For X42, X52, and X70, the harmful impurity MnS formed by the Mn and S elements is a strong irreversible hydrogen trap, which can also cause hydrogen aggregation. These results mean that pipes with potentially good hydrogen resistance can be preliminarily identified by examining the chemical composition, metallographic structure, and grain size of candidate pipeline steels.
Considering that X52 steel has been widely used in hydrogen transmission systems, from an application point of view it can be considered that X52 has sufficient resistance to hydrogen-induced deterioration, so it is reasonable to select X52 as a benchmark for screening other materials. Accordingly, the C content in a candidate pipeline steel should not exceed that of X52 (0.13%). To limit strong irreversible hydrogen traps such as MnS, the S and Mn contents should not exceed those of X52 (0.0068% and 1.06%, respectively). The high-density dislocations formed by martensite are strong hydrogen traps and highly sensitive to hydrogen, and the plasticity of martensitic steel is poor, so it is not suitable for natural gas-hydrogen transportation pipes. Ferrite is the most common structure in pipeline steel, and its susceptibility to hydrogen embrittlement depends on the ferrite grain size in addition to the content of alloying elements. The grain sizes of the ferritic pipeline steels used in this study are 8.6 μm (X42), 16.2 μm (X52), and 35.6 μm (AISI 1020). The dynamic hydrogen charging SSRT results show that the hydrogen embrittlement index (HEI) of AISI 1020 is much higher than that of X42 and X52, so the HEI correlates with grain size: the larger the grain size, the higher the HEI and the worse the hydrogen resistance. Therefore, when selecting pipe steel, the ferrite grain size should be at least less than 35 μm; if conditions permit, it can be further refined to the level of X52 steel (16.2 μm). The dynamic hydrogen charging SSRT test can be used to evaluate the deteriorating effect of environmental hydrogen on the plasticity of a material. The HEI of X52 at a current density of 1 mA cm−2 is 50%; when the HEI of a candidate material is less than 50%, its resistance to hydrogen embrittlement is considered acceptable.
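Collecting these rules, the screening procedure proposed above can be summarized as a single checklist. The thresholds below are exactly those stated in the text, while the record layout is an illustrative assumption.

```python
def suitable_for_hcng(steel: dict) -> bool:
    """Checklist from the discussion; all thresholds are the X52
    benchmarks and limits proposed in the text."""
    return (
        steel["C_wt_pct"] <= 0.13             # carbon content
        and steel["S_wt_pct"] <= 0.0068       # sulfur content
        and steel["Mn_wt_pct"] <= 1.06        # manganese content
        and not steel["martensitic"]          # no martensitic structure
        and steel["ferrite_grain_um"] < 35.0  # grain size limit
        and steel["hei_1mA_pct"] < 50.0       # HEI at 1 mA cm^-2
    )
```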
Conclusions
In this paper, the mechanical properties of X42, X52, X70, and AISI 1020 in a hydrogen environment were obtained through dynamic hydrogen charging SSRT tests. The fracture patterns were then investigated by SEM, and the relationship between microstructure and macro-mechanical properties was established. Moreover, the vital role of the alloying elements in hydrogen embrittlement was revealed by TDS analysis. The key findings of this research are as follows:

1. Both EL and RA decreased with increasing current density, and the reduction in RA was more pronounced than that in EL. The yield strength and tensile strength of X42, X52, and X70 increase slightly after hydrogen charging. The YS of AISI 1020 increases after hydrogen charging, while its UTS gradually decreases with increasing current density. The variation in YS and UTS of the four steels before and after hydrogen charging is within 10%. The HE index indicates that AISI 1020 has the highest hydrogen embrittlement susceptibility, followed by X70, while X42 has the lowest.
2. Fractography of the tensile specimens indicates that the cleavage area in the marginal zone is larger than that in the central zone. As the current density increases, the fracture mode transforms from ductile fracture to quasi-cleavage fracture and finally to cleavage fracture. The hydrogen embrittlement fracture of the tensile specimens results from the combined action of HEDE and HELP in the various zones.
3. For the three ferritic steels X42, X52 and AISI 1020, the higher the C content, the higher the hydrogen concentration at grain boundaries. For the martensitic steel X70, the accumulation of dissolved hydrogen originates from the high density of dislocations formed by the martensitic structure. For X42, X52, and X70, the high-temperature peak on the TDS curve corresponds to the strong irreversible hydrogen traps formed by Mn compounds, and the higher the Mn content in the steel, the higher the hydrogen concentration in the irreversible traps.
4. A simple method for the selection and evaluation of natural gas-hydrogen mixed pipes is proposed. The C content in the candidate pipeline steel should not exceed 0.13%, and the corresponding S and Mn contents should not exceed 0.0068% and 1.06%, respectively. Martensitic steel is not suitable for HCNG pipes. For ferritic pipeline steel, the HEI of the material correlates with the grain size, and it should be ensured that the ferrite grain size is less than 35 μm. Furthermore, to ensure that the hydrogen embrittlement resistance of the material is acceptable, the HEI of the candidate material should be less than 50%.
"year": 2022,
"sha1": "0941701c6ff73f2ed4b8646d7ea3be5e259d8337",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2053-1591/ac6654",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "79dceecd2a5dff580715426e7c211e46f52a4b0b",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
Dose‐linearity of the pharmacokinetics of an intravenous [14C]midazolam microdose in children
Aims Drug disposition in children may vary from adults due to age‐related variation in drug metabolism. Microdose studies present an innovation to study pharmacokinetics (PK) in paediatrics; however, they should be used only when the PK is dose linear. We aimed to assess dose linearity of a [14C]midazolam microdose, by comparing the PK of an intravenous (IV) microtracer (a microdose given simultaneously with a therapeutic midazolam dose), with the PK of a single isolated microdose. Methods Preterm to 2‐year‐old infants admitted to the intensive care unit received [14C]midazolam IV as a microtracer or microdose, followed by dense blood sampling up to 36 hours. Plasma concentrations of [14C]midazolam and [14C]1‐hydroxy‐midazolam were determined by accelerator mass spectrometry. Noncompartmental PK analysis was performed and a population PK model was developed. Results Of 15 infants (median gestational age 39.4 [range 23.9–41.4] weeks, postnatal age 11.4 [0.6–49.1] weeks), 6 received a microtracer and 9 a microdose of [14C]midazolam (111 Bq kg−1; 37.6 ng kg−1). In a 2‐compartment PK model, bodyweight was the most significant covariate for volume of distribution. There was no statistically significant difference in any PK parameter between the microdose and microtracer, nor in the area under curve ratio [14C]1‐OH‐midazolam/[14C]midazolam, showing the PK of midazolam to be linear within the range of the therapeutic and microdoses. Conclusion Our data support the dose linearity of the PK of an IV [14C]midazolam microdose in children. Hence, a [14C]midazolam microdosing approach may be used as an alternative to a therapeutic dose of midazolam to study developmental changes in hepatic CYP3A activity in young children.
Nonlinearity may occur, for example, when a therapeutic dose saturates drug metabolism pathways, plasma protein binding and/or active transporters, which may result in altered PK when studying a microdose.15 A very elegant approach to study dose linearity is to compare the PK parameters of an isolated [14C]microdose with those of a [14C]microtracer, where the labelled microdose is administered concurrently with, or even mixed into, a therapeutic drug dose.12 Cytochrome P450 (CYP) 3A is a developmentally regulated drug metabolizing enzyme that is abundant in the liver and accounts for nearly 46% of the oxidative metabolism of clinically relevant drugs.1,2,16–21 As midazolam is a well-established model substrate for CYP3A activity, this drug may be used for phenotyping studies using a microdosing approach to elucidate developmental changes in CYP3A.5,22–25 To the best of our knowledge, dose linearity of the PK of a microdose relative to a therapeutic dose of midazolam has been established in adults,14,26,27 but not in children. However, the results in adults cannot simply be extrapolated to children due to the development of drug metabolism, hepatic blood flow, protein binding and drug transport.
We therefore aimed to study the dose linearity of the PK of a [14C]midazolam microdose in children, by comparing the PK parameters of midazolam given as an intravenous (IV) [14C]microtracer alongside a therapeutic midazolam dose with those of a single isolated [14C]microdose.
Subjects
Children were eligible for inclusion in this study from birth up to the age of 2 years when they had intravenous lines in place for intravenous administration and suitable vascular access for blood sampling (total blood sampling was limited to a small percentage of the calculated circulating blood volume).29 The blood samples were centrifuged and plasma was stored at −80°C until analysed. For [14C] quantification, an accelerator mass spectrometry system (… Engineering Europe B.V., Amersfoort, The Netherlands)32 was used.
Radiopharmaceutical preparation
The lower limit of quantification was 0.31 mBq mL−1. Model selection was guided by the objective function value (OFV) and standard goodness-of-fit plots. For the OFV, a drop of more than 3.84 points between nested models was considered statistically significant, which corresponds to P < .05 assuming a χ2 distribution.34,35 For the structural and error models, a decrease in OFV of …
Nomenclature of targets and ligands
Key protein targets and ligands in this article are hyperlinked to corresponding entries in http://www.guidetopharmacology.org, the common portal for data from the IUPHAR/BPS Guide to PHARMACOLOGY.37 Inclusion of the covariate treatment (i.e. microtracer or microdose) on any of the PK parameters was found not to statistically significantly influence the model fit (ΔOFV >0.01).
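As background to the 3.84-point criterion: the OFV is −2 × log-likelihood, so the drop between nested models follows a χ2 distribution with degrees of freedom equal to the number of added parameters. A minimal sketch, assuming Python with scipy (the original analysis used nonlinear mixed effects modelling software; this snippet only reproduces the threshold arithmetic):

```python
# Why an OFV drop of 3.84 corresponds to P < .05 for one added parameter:
# the drop is a likelihood-ratio statistic, chi-squared with df = 1.
from scipy.stats import chi2

critical_drop = chi2.ppf(0.95, df=1)
print(f"critical OFV drop (df=1, alpha=0.05): {critical_drop:.3f}")  # 3.841

# Converting an observed drop into a P-value:
observed_drop = 0.01  # e.g. a negligible drop after adding a covariate
p_value = chi2.sf(observed_drop, df=1)  # survival function = 1 - CDF
print(f"P-value for a {observed_drop}-point drop: {p_value:.2f}")  # ~0.92
```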
Nonlinear mixed effects modelling
The PK parameter estimates of the final model and the bootstrap results are presented in Table 3. Previous studies have reported midazolam PK in paediatric patients after a single IV administration.40–42 Clearance in our study was found to be 2.06 L h−1 for an infant of 4 kg (equal to 8.6 mL kg−1 min−1). In preterm infants, the clearance was reported to be lower (median 1.8 [range 0.7–6.7] mL kg−1 min−1),40 reflecting that CYP3A activity is less mature in preterm infants than in an infant of 4 kg. A study in critically ill children reported a clearance of 1.11 L h−1 for an infant of 5 kg (equal to 3.7 mL kg−1 min−1),43 which is lower than in our population.
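For readers checking these weight-normalized conversions, a minimal sketch (plain Python; the clearance values are taken from the text, the function name is illustrative):

```python
# Convert a clearance reported in L/h at a given bodyweight into the
# mL/kg/min form used above for between-study comparison.
def cl_ml_per_kg_per_min(cl_l_per_h: float, weight_kg: float) -> float:
    return cl_l_per_h * 1000.0 / weight_kg / 60.0

print(round(cl_ml_per_kg_per_min(2.06, 4.0), 1))  # 8.6 (this study, 4 kg)
print(round(cl_ml_per_kg_per_min(1.11, 5.0), 1))  # 3.7 (critically ill, 5 kg)
```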
That study concluded that inflammation (reflected by high C-reactive protein concentrations) and/or the number of failing organs influenced midazolam clearance, possibly as a result of reduced CYP3A activity.43 The lower clearance can probably be explained by the fact that that study included patients with a higher inflammation state and/or more failing organs, whereas subjects in the current study were only eligible when renal or hepatic failure was absent. This is further evidenced by 2 studies investigating a 0.15 mg kg−1 dose in healthy children, in which clearance was found to be similar (3–10 years old, mean ± SD clearance 9.11 ± 1.21 mL kg−1 min−1)42 or slightly higher (0.5–2 years, clearance 11.3 ± 6.3 mL kg−1 min−1)41 than in our population.
Regulatory authorities have indicated that microdose studies with radioactively labelled compounds are an acceptable component of drug development.7,44 However, to the best of our knowledge, this approach has not been used during paediatric drug development, despite this study and previous studies illustrating its feasibility and ethical acceptance in that population.11–13 For paracetamol, the dose linearity of an oral and IV microdose was successfully assessed in paediatrics.12 A slightly different approach was taken to study developmental changes in the oral disposition of paracetamol and …
"year": 2019,
"sha1": "cff0689c64db3189b733247581458311ebe69687",
"oa_license": "CCBYNCND",
"oa_url": "https://bpspubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/bcp.14047",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "48cccbfbe4f163368185ece234ec90425327708e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Inhibition of kinase and endoribonuclease activity of ERN1/IRE1α affects expression of proliferation related genes in U87 glioma cells
Abstract Inhibition of ERN1/IRE1α (endoplasmic reticulum to nucleus signaling 1/inositol requiring enzyme-1α), the major signaling pathway of endoplasmic reticulum stress, significantly decreases tumor growth. We have studied the expression of transcription factors such as E2F8 (E2F transcription factor 8), EPAS1 (endothelial PAS domain protein 1), TBX3 (T-box 3), ATF3 (activating transcription factor 3), FOXF1 (forkhead box F1), and HOXC6 (homeobox C6) in U87 glioma cells overexpressing dominant-negative ERN1/IRE1α defective in endoribonuclease (dnr-ERN1) as well as defective in both kinase and endonuclease (dn-ERN1) activity of ERN1/IRE1α. We have demonstrated that the expression of all studied genes is decreased at the mRNA level in cells with modified ERN1/IRE1α; TBX3, however, is increased in these cells as compared to control glioma cells. Changes in protein levels of E2F8, HOXC6, ATF3, and TBX3 corresponded to changes in mRNA levels. We also found that the two mutated ERN1/IRE1α constructs have differential effects on the expression of the studied transcripts. The presence of kinase and endonuclease deficient ERN1/IRE1α in glioma cells had a less profound effect on the expression of the E2F8, HOXC6, and TBX3 genes than the blockade of the endoribonuclease activity of ERN1/IRE1α alone. Kinase and endonuclease deficient ERN1/IRE1α suppresses ATF3 and FOXF1 gene expression, while inhibition of only the endoribonuclease of ERN1/IRE1α leads to up-regulation of these gene transcripts. The present study demonstrates that fine-tuning of the expression of proliferation related genes is regulated by ERN1/IRE1α, an effector of endoplasmic reticulum stress. Inhibition of ERN1/IRE1α, especially its endoribonuclease activity, correlates with deregulation of proliferation related genes and thus slower tumor growth.
Introduction
The endoplasmic reticulum (ER) is the primary organelle able to activate a distinct cellular stress response, termed the unfolded protein response (UPR), in which a moiety of factors (typically aggregates of misfolded proteins) triggers activation of a complex set of signaling pathways to execute a resolution to the causative stress. Malignant tumors utilize the endoplasmic reticulum stress response to adapt to stressful environmental conditions [1–3]. The rapid growth of solid tumors generates microenvironmental changes associated with hypoxia, nutrient deprivation and acidosis, which induce new blood vessel formation and cell proliferation. These processes rely on the activation of endoplasmic reticulum stress signalling pathways [2,3]. The UPR is mediated by three interconnected, endoplasmic reticulum-resident sensors. ERN1/IRE1α (endoplasmic reticulum to nucleus signaling 1/inositol requiring enzyme-1α) is the most evolutionarily conserved sensor; it responds to protein misfolding with a highly tuned program aimed either at resolving the stress or at directing the cell towards apoptosis when rectification is not viable, making it a key regulator of life and death processes [1,4–7].
The ERN1/IRE1α enzyme contains two distinct catalytic domains: a serine/threonine kinase and an endoribonuclease. The endoribonuclease activity is involved in the degradation of a specific subset of mRNAs targeted to the ER, to lessen the protein synthesis load on the already stressed ER. The endonuclease activity also initiates the cytosolic splicing of the pre-XBP1 (X-box binding protein 1) mRNA, whose mature transcript encodes a transcription factor that stimulates the expression of numerous UPR-specific genes, including other key transcription factors [8–11]. Moreover, activation of the ERN1/IRE1α branch of the endoplasmic reticulum stress response is intimately linked to apoptosis. Ablation of this sensor's function by a dominant-negative construct of ERN1/IRE1α (dn-ERN1) has been shown to result in a significant antiproliferative effect on glioma growth [2,12]. This is due to down-regulation of prevalent pro-angiogenic factors and up-regulation of anti-angiogenic genes, both in vitro and in the CAM (chorio-allantoic membrane) model, as well as in mice engrafted intracerebrally with U87 glioma cell clones [13–15]. The executive mechanism of the exhibited anti-proliferative effects is not yet known. We propose that the anti-proliferative effect is mediated by transcription factors that are integrated into the UPR signaling pathways to regulate the cell cycle, apoptosis and senescence [16–21]. The possible involvement of transcription factors of the E2F, HIF, TBX, ATF, FOX, and HOX families was made evident through transcriptomic analysis of U87 glioma cells expressing the dominant-negative mutant of ERN1/IRE1α [14].
E2F transcription factors, such as E2F8, are essential for orchestrating the expression of genes required for cell cycle progression and proliferation; they promote angiogenesis through transcriptional activation of VEGFA in cooperation with HIF1 and are strongly up-regulated in human hepatocellular carcinoma [17,22,23]. The T-box transcription factor TBX3 is a transcriptional repressor that plays multiple roles in both normal development and disease, either by repressing or activating target genes in a context-dependent manner. It controls the rate of cell proliferation, mediates cellular signaling pathways and mediates the anti-proliferative role of TGF-β1 [16,24]. The transcription factor HOXC6 plays an important role in both proliferation and metastasis by regulating genes with both oncogenic and tumor suppressor activities, and it may also contribute to the progression of gastric carcinogenesis [20,25,26]. The TP53 family, which we previously demonstrated to be modulated by ERN1/IRE1α, targets the forkhead box transcription factor FOXF1. Ectopic expression of FOXF1 inhibits cancer cell invasion and migration, whereas inactivation of FOXF1 stimulates both of these processes [21,27]. Consequently, FOXF1 overexpression is associated with epithelial-to-mesenchymal transition in breast cancer, a process also mediated by chronic activation of the UPR, making it potentially an important player in the mechanistic progression of UPR-mediated cancers [21].
Cyclic AMP-dependent activating transcription factor 3 (ATF3) is a cell-death regulator that is strongly induced during necrosis and can suppress the oncogenic function of mutant TP53, thereby contributing to tumor suppression. Although its suppressive function is speculated to be tumor specific, it has also been shown to promote other cancers [28–31]. The last transcription factor of interest, endothelial PAS domain protein 1 (EPAS1), also known as hypoxia-inducible transcription factor-2α (HIF-2α), has been shown to correlate with tumor size, invasion and necrosis as well as with VEGF gene expression, which supports the correlation of EPAS1 up-regulation with tumor angiogenesis [32,33].
Therefore, based on the evidence listed above, the aim of this study was to investigate the possible roles of the genes encoding the transcription factors E2F8, EPAS1, HOXC6, ATF3, TBX3, and FOXF1 in the suppression of glioma cell proliferation via inhibition of the endoplasmic reticulum stress sensor ERN1/IRE1α, in the hope of elucidating its mechanistic part in the development and progression of certain cancers and the contribution of the UPR.
Cell lines. In this work we used sublines of U87 glioma cells, which were described previously [13–15,35]. One subline was obtained by selection of stably transfected clones overexpressing the vector (pcDNA3.1) that was used for creation of the dominant-negative constructs of ERN1/IRE1α (dn-ERN1 and dnr-ERN1). This subline of glioma cells was used as a control (control glioma cells) in the study of the effects of inhibition of ERN1 with regard to the expression of the transcription factors of interest (Table 1). The second subline was obtained by selection of stably transfected clones overexpressing dn-ERN1, in which both the protein kinase and endoribonuclease activities of ERN1/IRE1α are suppressed [14]. The third subline was obtained by selection of stably transfected clones overexpressing a dominant-negative ERN1/IRE1α endoribonuclease mutant (dnr-ERN1), obtained by truncation of the carboxy-terminal 78 amino acids of ERN1 [15]. It has recently been shown that these cells have a low rate of proliferation and do not express spliced XBP1, a key transcription factor in ERN1/IRE1α signaling, after induction of endoplasmic reticulum stress by tunicamycin [15]. For experiments with GRP78/HSPA5 we also used U87 cells stably transfected with wild-type ERN1. The expression of the studied genes was compared with that in cells transfected with the previously mentioned empty vector (control glioma cells, pcDNA3.1).
Proliferation assay. The proliferation rate of control glioma cells and ERN1 knockdown cells was measured with a cell counter (Coultronics, Margency, France). Cell numbers were measured in triplicate after 3 days.
RNA isolation. Total RNA was extracted from both glioma and normal human astrocyte cells using Trizol reagent according to the manufacturer's protocol (Invitrogen, USA). The RNA pellets were washed with 75 % ethanol and dissolved in nuclease-free water. For additional purification, RNA samples were re-precipitated with 95 % ethanol and re-dissolved in nuclease-free water.
Reverse transcription and qPCR analysis. The QuantiTect Reverse Transcription Kit (QIAGEN, Germany) was used for cDNA synthesis according to the manufacturer's protocol. The expression levels of E2F8, TBX3, EPAS1, ATF3, FOXF1, HOXC6, and ACTB mRNA were measured in U87 glioma cells and normal human astrocyte cells by real-time quantitative polymerase chain reaction using an Mx 3000P qPCR system (Stratagene, USA) and Absolute qPCR SYBR Green Mix (Thermo Fisher Scientific, ABgene House, UK). Polymerase chain reactions were performed in triplicate using specific primers obtained from Sigma-Aldrich, USA (Table 1).
Quantitative PCR data were analyzed using the Differential Expression Calculator software. The expression values of E2F8, TBX3, EPAS1, ATF3, FOXF1, and HOXC6 mRNA were normalized to beta-actin (ACTB) and expressed as a percentage of the control (100 %).
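The normalization described here corresponds to the standard 2^−ΔΔCt calculation; below is a minimal sketch, assuming Python and hypothetical Ct values (the Differential Expression Calculator software itself is not documented in the paper):

```python
# Relative expression by the 2^-ddCt method, normalized to beta-actin (ACTB)
# and expressed as percent of control, as described in the text.
def relative_expression_pct(ct_target_sample: float, ct_actb_sample: float,
                            ct_target_control: float, ct_actb_control: float) -> float:
    d_ct_sample = ct_target_sample - ct_actb_sample
    d_ct_control = ct_target_control - ct_actb_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct) * 100.0  # control = 100 %

# Hypothetical Ct values for illustration only:
print(relative_expression_pct(26.0, 18.0, 24.0, 18.0))  # 25.0 -> 4-fold down
```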
Western blot analysis. E2F8, HOXC6, TBX3, and ATF3 proteins were measured in glioma cells by Western blot analysis using a mouse monoclonal anti-E2F8 antibody (H00079733-M01) from NOVUS Biologicals, a mouse monoclonal anti-HOXC6 antibody (sc-376330), a goat polyclonal anti-TBX3 antibody (sc-31657), a rabbit polyclonal anti-ATF3 antibody (sc-188), and a mouse monoclonal anti-ACTB antibody (beta-actin; sc-47778) from Santa Cruz Biotechnology. ACTB was used as a loading control. Western blot analysis was performed as described previously [36,37]. Statistical analysis was performed using OriginPro 7.5 software. All values are expressed as mean ± SEM from triplicate measurements performed in 4 independent experiments. Comparison of two means was performed using a two-tailed Student's t-test as described previously [38]. P < 0.05 was considered significant in all cases.
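A minimal sketch of such a two-group comparison, assuming Python with scipy and purely hypothetical values (the original analysis used OriginPro):

```python
# Two-tailed Student's t-test comparing two groups of independent
# measurements; P < 0.05 is taken as significant, as in the text.
from scipy.stats import ttest_ind

control = [100.0, 95.0, 104.0, 99.0]   # hypothetical percent-of-control values
dnr_ern1 = [2.0, 2.4, 1.8, 2.2]        # hypothetical values for dnr-ERN1 cells

t_stat, p_value = ttest_ind(control, dnr_ern1)
print(f"t = {t_stat:.2f}, P = {p_value:.2e}, significant: {p_value < 0.05}")
```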
Ethical approval: The conducted research is not related to either human or animal use.
ERN1/IRE1α modulates expression of E2F8, HOXC6, EPAS1, ATF3, FOXF1, and TBX3 transcription factor genes in glioma cells
The expression of these genes was studied by quantitative PCR and Western blot analysis. To test the effect of ERN1/IRE1α on the expression levels of the transcription factors of interest in relation to the control of cell proliferation, we used U87 glioma cell sublines constitutively expressing the dominant-negative mutants of ERN1/IRE1α, dnr-ERN1 or dn-ERN1, inhibiting the endoribonuclease activity of endogenous ERN1/IRE1α or both its endoribonuclease and kinase activities, respectively [14,15]. Figure 1A and Table 2 demonstrate that inhibition of ERN1/IRE1α gene function in U87 glioma cells by dn-ERN1 leads to down-regulation of E2F8 mRNA (17-fold). Inhibition of the endoribonuclease activity of ERN1/IRE1α alone by dnr-ERN1 has an even more robust effect on the expression of E2F8 (50-fold; Figure 1A and Table 2). The resulting low level of E2F8 mRNA in cells expressing dnr-ERN1 is comparable to that of the normal human astrocyte cell line (NHA/TS) rather than that of immortalized U87 cells. Notably, normal human astrocyte cells are grown without the addition of geneticin (G418), while U87 cells carrying dn-ERN1 and dnr-ERN1 are grown in its presence. Therefore, the possibility exists that the differences in growing conditions could affect E2F8 expression; however, there were no significant differences in the expression of E2F8 mRNA between glioma cells overexpressing the empty vector and wild-type U87 glioma cells (Figure 1A). Therefore, inhibition of ERN1/IRE1α endoribonuclease affects the regulation of glioma cell growth by mediating E2F8 mRNA expression, lowering it approximately to the level normally observed in human astrocytes.
Thereafter, we tested how ERN1/IRE1α inhibition modulates the expression of the transcription factors HOXC6 and EPAS1. As shown in Figure 1B and Table 2, inhibition of ERN1/IRE1α endoribonuclease alone by dnr-ERN1 has a robust suppressive effect on HOXC6 expression (3.7-fold down-regulation); there is a slightly smaller effect on expression in the double mutant (dn-ERN1), in which both endonuclease and kinase activities are inhibited. We found that the expression of HOXC6 mRNA in normal human astrocytes is very low (twelvefold lower than in control glioma cells) and that in U87 glioma cells, inhibition of ERN1/IRE1α, especially its endoribonuclease activity, leads to a marked decrease in HOXC6 gene expression, making its expression akin to that of non-malignant NHA/TS cells (Figure 1B). mRNA expression of the transcription factor EPAS1/HIF-2α (which mediates numerous hypoxia-induced processes, including proliferation, in a cell-specific manner) is also affected by modulation of ERN1/IRE1α activity (Figure 1C, Table 2). Similar to the trend found for HOXC6 expression, EPAS1 mRNA was found to be significantly lower in normal human astrocyte cells and in malignant cells harboring mutated ERN1/IRE1α than in control glioma cells. Thus, inhibition of ERN1/IRE1α modifies HOXC6 and EPAS1 expression in U87 glioma cells in an anti-proliferative manner. Specifically, inhibition of ERN1/IRE1α endoribonuclease alone is important for the suppression of HOXC6, while inhibition of both kinase and endonuclease activities is needed to block the expression of EPAS1. We next tested whether ERN1 also participates in the regulation of the ATF3 and FOXF1 genes. We found that expression of ATF3, the transcription factor that regulates the transcription of numerous proliferation- and apoptosis-related genes, is significantly decreased (sevenfold) in glioma cells stably transfected with dn-ERN1 (kinase and endonuclease deficient ERN1/IRE1α) (Figure 1D, Table 2), making its expression comparable to that of ATF3 mRNA in normal human astrocyte cells. Interestingly, inhibition of only the endoribonuclease activity of ERN1/IRE1α leads to a significantly smaller down-regulation of ATF3 gene expression in glioma cells (−21 %; Figure 1D, Table 2). We concluded, therefore, that ERN1/IRE1α has a strong effect on ATF3 expression in glioma cells, with its activation being needed for the up-regulation of ATF3 expression. Thus, the effect of ERN1/IRE1α on ATF3 strongly depends upon the type of ERN1/IRE1α inactivation.
The transcription factor FOXF1 controls the expression of some growth factors and can repress cell growth; however, the tumor suppressor functions of FOXO transcription factors are lost in most cancer cells as a result of chromosomal translocation, deletion, miRNA-mediated repression, AKT-mediated cytoplasmic sequestration or ubiquitination-mediated proteasomal degradation [21,27,39]. FOXF1 mRNA, as expected, was found to be significantly higher in normal human astrocyte cells than in control glioma cells (Figure 1E, Table 2). Blocking ERN1/IRE1α (kinase and endoribonuclease activities) with dn-ERN1 resulted in down-regulation of FOXF1 mRNA, while inhibition of just the endoribonuclease activity of ERN1/IRE1α alone led to a strong up-regulation of FOXF1 mRNA expression (23-fold) (Figure 1E, Table 2), thus indicating the control of FOXF1 expression by ERN1/IRE1α. Interestingly, FOXF1 mRNA was significantly higher in NHA/TS cells than in control glioma cells, being more similar to that in glioma cells harboring dnr-ERN1 (Figure 1E).
TBX3 is a transcriptional repressor that plays multiple roles in normal development and disease by either repressing or activating transcription of target genes in a context-dependent manner, as well as controlling the rate of cell proliferation and mediating cellular signaling pathways [16,24]. It may mediate the anti-proliferative role of TGF-β1, but is overexpressed in several cancers [24]. At the same time, normal human astrocyte cells have significantly higher amounts of mRNA for this transcription factor in comparison to control glioma cells. The expression of TBX3 mRNA is strongly induced in glioma cells harboring dn-ERN1, indicating that activation of ERN1/IRE1α has a negative effect on TBX3 gene expression. Inhibition of the endoribonuclease activity of ERN1/IRE1α leads to an even more robust induction of TBX3 (more than ninefold) in glioma cells, most likely due to further transcriptional activation of this gene and/or stabilization of its mRNA (Figure 1F, Table 2). Moreover, normal human astrocytes have a significantly higher level of the TBX3 transcriptional repressor compared to control glioma cells and glioma cells overexpressing dnr-ERN1 (Figure 1F, Table 2). Thus, inhibition of ERN1/IRE1α endoribonuclease affects growth regulation, enhancing TBX3 gene expression and bringing its levels closer to those of non-malignant NHA/TS cells. Please note that different growing conditions were used for NHA/TS (−G418) and dnr-ERN1 glioma cells (+/−G418), as explained above. Therefore, these results may benefit from further validation; Figure 1A demonstrates that this difference in growing conditions does not affect the E2F8 gene, but we did not study the TBX3 gene in this regard.
Additionally, we studied the effect of ERN1/IRE1α inhibition on the expression of the chaperone HSPA5/GRP78. As shown in Figure 1G, inhibition of ERN1/IRE1α by dn-ERN1 or dnr-ERN1 suppresses the expression of the GRP78 gene in glioma cells, supporting the notion of "stifling" the endoplasmic reticulum stress response. At the same time, over-expression of wild-type ERN1 in glioma cells leads to up-regulation of this gene's expression. We have also shown that the levels of GRP78/HSPA5 mRNA are similar in wild-type glioma cells and in cells harboring the empty vector. Therefore, the presence of the empty vector or the addition of G418 to the medium does not significantly change the expression level of GRP78 mRNA in glioma cells and, consequently, the endoplasmic reticulum stress response.
In conclusion, we have demonstrated that ERN1/IRE1α participates in the fine-tuning of the mRNA levels of a subset of transcription factor genes important for the control of proliferation. Tumor growth suppression in glioma cells with mutated ERN1/IRE1α may be mediated by such transcription factors.
ERN1/IRE1α modulates protein levels of E2F8, HOXC6, ATF3, and TBX3 transcription factors in U87 glioma cells
To test whether the changes in mRNA levels caused by inhibition of ERN1/IRE1α by dn-ERN1 or dnr-ERN1 correspond to changes in the protein levels of E2F8, HOXC6, ATF3, and TBX3, we measured these transcription factors by Western blot analysis. As shown in Figure 2A, the level of E2F8 strongly decreased in glioma cells harboring dn-ERN1 as well as dnr-ERN1, but inhibition of only the endoribonuclease activity of ERN1 led to an even more significant down-regulation of this transcription factor, correlating with the changes at the mRNA level (Figure 1A). As shown in Figure 2B, inhibition of only the endoribonuclease of ERN1/IRE1α by dnr-ERN1 in glioma cells has a strikingly suppressive effect on the level of HOXC6 protein, with a slightly smaller effect on this gene's expression in the double mutant of ERN1/IRE1α (dn-ERN1), in which both endonuclease and kinase activities are affected. Thus, the changes in protein and mRNA levels are similar in glioma cells harboring dn-ERN1 and dnr-ERN1 as compared to control glioma cells (Figures 1B and 2B).
Western blot analysis of TBX3 protein demonstrates that it is induced in glioma cells harboring dn-ERN1 (Figure 2C), indicating that activation of ERN1/IRE1α also has a negative effect on TBX3 gene expression at the protein level. At the same time, inhibition of the endoribonuclease activity of ERN1/IRE1α leads to a stronger induction of the TBX3 protein level in glioma cells (Figure 2C), making the changes in TBX3 protein comparable to those in the mRNA for the same transcription factor under these experimental conditions. We also found that the protein level of the transcription factor ATF3 is significantly lower in glioma cells stably transfected with dn-ERN1 (Figure 2D). These results are comparable to the expression of ATF3 mRNA in glioma cells. Interestingly, inhibition of only the endoribonuclease activity of ERN1/IRE1α leads to a significantly smaller down-regulation of the ATF3 protein level in glioma cells harboring dnr-ERN1 (Figure 2D), which also correlates with the changes in ATF3 mRNA expression.
Induction of endoplasmic reticulum stress in U87 glioma cells constitutively expressing dnr-ERN1 modulates the expression of most transcription factor genes
To determine whether endoplasmic reticulum stress regulates the genes tested above through the kinase activity of ERN1/IRE1α or through other branches of the endoplasmic reticulum stress response, we investigated the effect of tunicamycin on the expression of the E2F8, EPAS1, TBX3, ATF3, FOXF1 and HOXC6 genes. As shown in Figure 3A, induction of endoplasmic reticulum stress by tunicamycin in glioma cells containing dnr-ERN1 leads to strong suppression of E2F8 mRNA expression (more than sevenfold) and to a threefold up-regulation of ATF3 mRNA expression. We also found that the expression of the genes encoding the transcription factors FOXF1, HOXC6, and TBX3 is decreased in tunicamycin-treated glioma cells lacking the endoribonuclease activity of ERN1 (−57 %, −47 %, and −34 %, respectively) (Figure 3B, 3C). These results demonstrate that all of the studied genes are responsive to endoplasmic reticulum stress, but the mechanisms of their activation or deactivation vary. Congruently, the regulation of FOXF1 and TBX3 mRNA expression under endoplasmic reticulum stress is realized through different signaling pathways, and inhibition of ERN1/IRE1α endoribonuclease does not eliminate the additional regulation of these genes by tunicamycin (Figures 2B and 2C).
Contrary to the above, the tunicamycin experiments demonstrated that inhibition of the endoribonuclease activity of ERN1/IRE1α by dnr-ERN1 (Figure 3C) renders the expression of EPAS1/HIF-2α mRNA tunicamycin-resistant. EPAS1/HIF-2α is an endoplasmic reticulum stress responsive gene whose expression is increased in most malignant tumors [33], including U87 glioma cells (Figure 1C). Our results indicate that induction of EPAS1/HIF-2α mRNA expression during endoplasmic reticulum stress is realized solely through ERN1/IRE1α, and inhibition of its endoribonuclease leads to tunicamycin resistance.
In addition, Figure 3D demonstrates that treatment of glioma cells harboring dnr-ERN1 with tunicamycin for 2 and 8 hours significantly induces (2.6- and 2.4-fold, respectively) the expression of GRP78/HSPA5 mRNA. Therefore, the presence of the empty vector or the addition of G418 to the medium does not significantly change the expression level of GRP78 mRNA in glioma cells and, consequently, the endoplasmic reticulum stress response.
In conclusion, inhibition of the endoribonuclease activity of ERN1/IRE1α does not eliminate stress-dependent regulation of all studied transcription factors, with the exception of EPAS1, by the protein kinase of ERN1/IRE1α or by other branches of the endoplasmic reticulum stress response.
Inhibition of endoribonuclease activity of ERN1/IRE1α strongly suppresses proliferation
We showed that inhibition of both the kinase and endoribonuclease activities of ERN1/IRE1α results in changes in the expression of different genes encoding transcription factors related to the control of proliferation and apoptosis. To see whether those changes affect cell proliferation, cells with and without functional ERN1/IRE1α were grown under normal conditions. Figure 4 demonstrates that the proliferation rate of glioma cells expressing dn-ERN1 is only twofold lower after 3 days in culture as compared to control U87 glioma cells, with a fourfold inhibition of proliferation in cells expressing dnr-ERN1. Therefore, inhibition of the endoribonuclease activity of ERN1 can result in a more substantial suppression of malignant cell proliferation, possibly through a stronger deregulation of key transcription factors responsible for the control of proliferation and apoptosis.
Discussion
This study has demonstrated that inhibition of the endoribonuclease activity alone, or of both the endonuclease and kinase activities of ERN1/IRE1α together, in U87 glioma cells causes a strong decrease in the levels of E2F8 mRNA and protein (Figures 1A and 2A). Low amounts of E2F8 affect the composition and levels of its complexes with other members of the E2F family of transcription factors and influence various cellular functions through the regulation of its target genes, of the cell cycle, apoptosis, and angiogenesis (including transcriptional activation of VEGFA in cooperation with HIF1) [17,22]. We have also demonstrated that ablation of only the endoribonuclease activity of ERN1/IRE1α in U87 glioma cells has a more pronounced suppressive effect on the expression of the E2F8 gene. The very low levels of E2F8 gene expression and protein observed in glioma cells with modulated ERN1/IRE1α are close to those in NHA/TS (Table 2, compare columns 3 and 2, and Figure 1A). Therefore, inhibition of ERN1/IRE1α endoribonuclease affects U87 glioma cell growth by regulating E2F8 mRNA expression and lowering it to the level observed in normal human astrocytes. Similar, but significantly less profound, changes were observed in HOXC6 gene expression (Table 2 and Figure 1B), consistent with its pro-proliferative role. Moreover, the expression of HOXC6 mRNA in glioma cells is significantly higher than that in NHA/TS cells. Inhibition of ERN1/IRE1α, especially its endoribonuclease activity via dnr-ERN1, leads to a significant decrease in HOXC6 gene expression in U87 glioma cells, bringing its levels closer to those of non-malignant NHA/TS cells (Figure 1B). The HOXC6 gene encodes a transcription factor that may contribute to the progression of gastric carcinogenesis as a pro-proliferative regulator, because it plays an important role in proliferation, morphogenesis and metastasis by regulating genes with both oncogenic and tumor suppressor activities [20,25,26]. Thus, down-regulation of HOXC6 mRNA in glioma cells harboring dnr-ERN1 (Figure 1B) conforms to the suppression of proliferation in these cells (Figure 4). Therefore, inhibition of the endoribonuclease activity of ERN1/IRE1α alone leads to a more robust suppression of malignant cell proliferation as compared to cells lacking both kinase and endoribonuclease activity (Figure 4). It is possible that the effect of ERN1/IRE1α inhibition on cell proliferation and gene expression is mediated through its endoribonuclease activity, and that the more robust effect of inhibiting the endoribonuclease activity alone versus inhibiting both the kinase and endoribonuclease activities is due to an additional role of the ERN1/IRE1α kinase. We have also found that the expression of E2F8 and HOXC6 mRNA in normal human astrocytes is very low compared to control glioma cells and that inhibition of ERN1/IRE1α, especially its endoribonuclease activity, changes the expression of both E2F8 and HOXC6 in the direction of normalization (Table 3 and Figure 1A and B). Thus, down-regulation of E2F8 and HOXC6 gene expression nearing the levels seen in normal human astrocytes may contribute to the suppression of glioma cell proliferation upon inhibition of ERN1/IRE1α endoribonuclease.
We have demonstrated that the expression of the T-box transcription factor TBX3 is elevated in glioma cells when ERN1/IRE1α function is inhibited, with the effect being more pronounced when only the endoribonuclease activity of ERN1/IRE1α was suppressed (Table 2 and Figure 6). This increase may contribute to the suppression of cell proliferation and glioma growth, because TBX3 is a transcriptional repressor that mediates cellular signaling pathways and controls the rate of cell proliferation [16,24]. This gene was found to be highly expressed in normal human astrocytes as compared to control glioma cells (Table 3 and Figure 1F). The transcription factor TBX3 plays multiple roles in normal development and disease by either repressing or activating transcription of its target genes in a context-dependent manner, and it may mediate the anti-proliferative and pro-migratory role of TGF-β1 in breast epithelial cells and skin keratinocytes; however, its overexpression is associated with several cancers [24]. Thus, increased expression of TBX3 can mediate inhibition of cell growth and contribute to the suppression of glioma cell proliferation after inhibition of ERN1/IRE1α, since it has pleiotropic functions.
We have demonstrated that the expression of the E2F8, EPAS1, HOXC6, FOXF1, and ATF3 genes is decreased in glioma cells with ERN1/IRE1α inhibited by overexpression of dn-ERN1 (Table 2 and Figure 1). This decrease may also contribute to the suppression of cell proliferation (Figure 4) and tumor growth [14], because the proteins encoded by these genes have predominantly pro-proliferative functions [18,19,28–30,33] and their expression in normal human astrocytes is significantly lower than in control glioma cells, with the exception of the FOXF1 gene.
Our results demonstrate that all of the genes studied are endoplasmic reticulum stress responsive, but the mechanisms by which their expression is activated or suppressed upon inhibition of ERN1/IRE1α differ. The tunicamycin experiments helped to clarify some aspects of these regulatory mechanisms. It is possible that, in response to endoplasmic reticulum stress, the up-regulation of E2F8 and HOXC6 mRNA is realized through the signaling pathway mediated by ERN1/IRE1α, so that after inhibition of its endoribonuclease, tunicamycin no longer up-regulates E2F8 and HOXC6, whose expression is significantly higher in U87 glioma cells than in normal human astrocytes (Figures 1A and B). At the same time, the regulation of FOXF1 and TBX3 mRNA under endoplasmic reticulum stress is likely mediated through different signaling pathways, and inhibition of ERN1/IRE1α endoribonuclease does not eliminate the down-regulation of these genes by tunicamycin (Figures 2B and 2C), which is in agreement with the functional roles of these transcription factors [16,24,39]. EPAS1/HIF-2α is an endoplasmic reticulum stress responsive gene whose expression is increased in U87 glioma cells (Figure 1C) as well as in most malignant tumors [33]. EPAS1/HIF-2α expression in glioma cells harboring dnr-ERN1 is resistant to the induction of endoplasmic reticulum stress by tunicamycin (Figure 3C). It is possible that induction of EPAS1/HIF-2α expression under endoplasmic reticulum stress is achieved through an ERN1/IRE1α-mediated signaling pathway, in which inhibition of the endoribonuclease leads to tunicamycin resistance.
Notably, endoplasmic reticulum stress modulates the function of various chaperones in the cell, including HSPA5/GRP78, a central player in the unfolded protein response and a major endoplasmic reticulum chaperone. HSPA5/GRP78 is overexpressed in many cancers and implicated in cancer cell survival, since it has Ca2+-binding and anti-apoptotic properties and promotes tumor proliferation, survival, metastasis, and resistance to a wide variety of therapies. Therefore, selective destruction of HSPA5/GRP78 could potentially be utilized as a novel anticancer strategy [40–42]. Our results demonstrating down-regulation of HSPA5/GRP78 expression in cells with inhibited ERN1/IRE1α (Figure 1G) and its correlation with growth inhibition support this supposition. Moreover, sarco/endoplasmic reticulum Ca2+ ATPase type 2 is down-regulated in some human cell carcinomas, and its inhibition induces the endoplasmic reticulum stress response and exerts toxicity in glioma cells [43,44]. Endoplasmic reticulum stress also mediates both apoptosis and autophagy induced by cyclosporine A in malignant glioma cells via the mTOR/p70S6K1 pathway [45].
In conclusion, inhibiting ERN1/IRE1α endoribonuclease affects tumor growth by lowering the expression of the transcription factors E2F8, HOXC6, EPAS1, and ATF3, all of which have preferentially pro-proliferative properties, and by up-regulating the expression of FOXF1 and the transcriptional repressor TBX3, helping to return glioma cells toward the levels seen in normal human astrocytes (NHA/TS). Moreover, inhibition of the endoribonuclease activity of ERN1/IRE1α does not eliminate UPR-dependent regulation of the transcription factors FOXF1, TBX3, E2F8, and HOXC6 by the kinase activity of ERN1/IRE1α or by other branches of the endoplasmic reticulum stress response. Thus, the changes observed in the transcription factors studied above correlate well with slower cell proliferation in cells harboring dn-ERN1 or dnr-ERN1, attesting to the fact that endoplasmic reticulum stress is a necessary component of malignant tumor growth and cell survival [2,3,6,11].
"year": 2015,
"sha1": "cdb1e163c3f8318bc1316b7942f8e1052fac0a23",
"oa_license": null,
"oa_url": "https://doi.org/10.1515/ersc-2015-0002",
"oa_status": "GOLD",
"pdf_src": "DeGruyter",
"pdf_hash": "cdb1e163c3f8318bc1316b7942f8e1052fac0a23",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Potential harmful correlation between homocysteine and low-density lipoprotein cholesterol in patients with hypothyroidism
Abstract Objective: Hypothyroidism (HO) can induce metabolic dysfunctions related to insulin resistance and dyslipidemia. Our previous studies showed that homocysteine (Hcy) impaired the coronary endothelial function and that Hcy can promote chemokine expression and insulin resistance (IR) by inducing endoplasmic reticulum stress in human adipose tissue and hypothyroid patients. The aim of this study was to investigate the potential harmful correlation between plasma Hcy and low-density lipoprotein cholesterol (LDL-C) in patients with HO. Methods: A total of 286 subjects were enrolled. All subjects were divided into the following 3 groups: HO group, subclinical hypothyroidism (SHO) group, and control group. Statistical analyses were carried out to evaluate the correlation between the plasma levels of Hcy and LDL-C in HO patients. The changes in the plasma Hcy levels and other metabolic parameters were measured before and after levothyroxine (L-T4) treatment. The relationship between the changes in the plasma Hcy level and the LDL-C level was also evaluated after L-T4 treatment. Results: In the patients with HO, both the plasma Hcy and LDL-C levels were significantly higher than those of the controls. The plasma levels of Hcy were positively correlated with the LDL-C level in the HO group. L-T4 treatment resulted in a significant decrease in the BMI, total cholesterol (TC), LDL-C, triglycerides (TG), apolipoprotein B (ApoB), and Hcy levels. Moreover, the decrease in Hcy (ΔHcy) was positively correlated with decreased LDL-C (ΔLDL-C) levels after L-T4 treatment in HO patients. Conclusion: Our results suggest that the increased Hcy level was positively correlated with the LDL-C in the HO group. A potential harmful interaction may exist between Hcy and LDL-C under the HO condition. In addition to reducing the plasma levels of Hcy, L-T4 treatment exerts beneficial effects on patients with HO by improving dyslipidemia, including a decrease in the LDL-C level.
Introduction
Hypothyroidism (HO) is a clinical syndrome caused by thyroid hormone deficiency that is characterized by a decreased metabolic rate. [1] Subclinical hypothyroidism (SHO) is defined as an elevated thyroid stimulating hormone (TSH) level with normal free (unbound) thyroxine (FT4) and free triiodothyronine (FT3) levels. [2] HO and SHO, the 2 most common endocrine disorders, are associated with an increased risk of atherosclerosis and a cluster of metabolic disorders. [3,4] Dyslipidemia is a common metabolic abnormality in patients with HO and SHO, which may be partially responsible for the high risk of cardiovascular disease. [5] Some scholars have suggested that HO leads to decreased low-density lipoprotein (LDL) receptor expression on fibroblasts and hepatocytes, decreased LDL-cholesterol (LDL-C) uptake, and a consequent increase in serum LDL-C levels. [6,7] Many studies have demonstrated that the association of thyroid disease with atherosclerotic cardiovascular disease may be partly explained by the regulation of lipid metabolism by thyroid hormone. [8] Homocysteine (Hcy), a type of amino acid naturally found in blood plasma, is not harmful at normal levels, but when its levels are too high, it can result in health problems. Recent studies have shown that hyperhomocysteinemia (HHcy) is an independent risk factor for cardiovascular disease and accelerated atherosclerosis. [9,10] Elevated serum Hcy concentrations are common in patients with HO. [5] HHcy, together with hypercholesterolemia, may explain the accelerated atherosclerosis in HO. Our previous study demonstrated that there was significantly higher secretion of the chemokine monocyte chemoattractant protein-1 from monocytes in response to lipopolysaccharide in patients with HHcy. [10] Our studies also showed that HHcy could impair the coronary artery endothelial function in hyperhomocysteinemic patients. [11,12] Studies in humans and animals have shown that LDL-C causes vascular endothelial dysfunction leading to coronary artery disease (CAD), mainly by increasing oxidative stress, impairing endothelial nitric oxide synthase (NOS) activity, and attenuating the bioavailability of NO. [13,14] In the present work, we performed a cross-sectional study to investigate the potential harmful interaction between Hcy and LDL-C in HO patients. Furthermore, we investigated the effects of levothyroxine (L-T4) on the changes in the Hcy and LDL-C levels in HO patients.
Subjects
A total of 286 participants were recruited from the Endocrinology Department of the Beijing Chao-yang Hospital during the period from January 2013 to December 2013. SHO is characterized by a serum TSH above the upper reference limit in combination with a normal FT4; this designation is only applicable when the thyroid function has been stable for weeks or more and the hypothalamic-pituitary-thyroid axis is normal. An elevated TSH level, usually above 10 mIU/L, in combination with a subnormal FT4 level, characterizes overt HO. [15] Patients were excluded from the study if they had a history of diabetes mellitus or impaired glucose tolerance, hypertension, acute or chronic hepatic and renal diseases, severe anemia, acute myocardial infarction or stroke. Seventy-three patients were excluded. The final study cohort included 177 of the initially enrolled patients, comprising 75 patients with HO and 102 patients with SHO. The control group included 109 age- and sex-matched healthy subjects recruited from the Endocrinology Department of the Beijing Chao-yang Hospital during the same period. No participants were undergoing treatment at enrollment. HO patients received an appropriate dose of L-T4 based on the drug label. The study protocol was designed according to the guidelines of the Declaration of Helsinki and was approved by the Medical Ethics Committee of Beijing Chao-yang Hospital. All subjects gave their written informed consent.
Sample collection
All subjects underwent a screening assessment for basic demographic information (i.e., age, sex, body height and weight). The body mass index (BMI) was calculated as the weight (kg) divided by the height squared (m2). After an overnight fast, blood samples were collected from a peripheral vein. A routine analysis of FT3, FT4, TSH, Hcy, total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), LDL-C, triglycerides (TG), and apolipoprotein B (ApoB) was performed. Hcy was determined by enzymatic cycling assay-based quantification using the corresponding kits from Baiding Biotech (Beijing, China); the normal reference value was <15 μmol/L. [16] FPG, TC, HDL-C, LDL-C, and TG were determined on a Dade-Behring Dimension RXL Autoanalyzer (Dade Behring Diagnostics, Marburg, Germany). The reference intervals were 3.62 to 5.7 mmol/L (TC), 1.03 to 1.55 mmol/L (HDL-C), 1.81 to 3.36 mmol/L (LDL-C), and 0.56 to 2.26 mmol/L (TG). FT3, FT4, and TSH were measured by electrochemiluminescence immunoassay (ECLIA) on an Abbott Architect i2000 (Abbott Diagnostics, Abbott Park, IL). The reference intervals were 1.71 to 3.71 pg/mL (FT3), 0.7 to 1.48 ng/dL (FT4), and 0.35 to 4.94 mIU/mL (TSH). The subjects with HO were divided into 2 groups based on the Hcy value: normal Hcy (≤15 μmol/L) and HHcy (>15 μmol/L). We compared the changes in LDL-C under these different Hcy conditions. Furthermore, we compared the changes in the Hcy level under different LDL-C conditions (≤3.36 or >3.36 mmol/L).
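The two grouping thresholds just described can be written down directly; a minimal sketch in Python (variable and function names are illustrative, not from the paper):

```python
# Group a patient by the thresholds used in the text:
# HHcy if Hcy > 15 umol/L; dyslipidemia if LDL-C > 3.36 mmol/L.
def classify(hcy_umol_per_l: float, ldl_c_mmol_per_l: float) -> dict:
    return {
        "HHcy": hcy_umol_per_l > 15.0,
        "dyslipidemia": ldl_c_mmol_per_l > 3.36,
    }

print(classify(17.9, 3.5))  # {'HHcy': True, 'dyslipidemia': True}
```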
Statistical analyses
The data were analyzed using the SPSS 21.0 software package (SPSS, Inc., Chicago, IL) to identify significant effects between the patient groups and the corresponding controls. Continuous data, such as age, BMI, TC, LDL-C, HDL-C, FPG, Hcy, and ApoB, were expressed as means ± standard deviation (SD). Non-normally distributed variables, such as TG, were expressed as medians (25th and 75th percentiles). Differences between groups were analyzed by ANOVA. Normally distributed data were analyzed by Student's t test and a paired-samples t test. Non-normally distributed data were analyzed by the Mann-Whitney U test and the Wilcoxon test. Pearson correlation was used to assess the relationship between the decrease in Hcy and the decrease in LDL-C. Spearman rank correlation was used to assess the relationship between Hcy and the LDL-C index. Comparisons between groups at baseline and after L-T4 treatment were performed with independent-samples t tests. All tests were 2-tailed, and P < 0.05 was considered statistically significant.
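For illustration, the two correlation analyses look like this in code; a minimal sketch, assuming Python with scipy and purely hypothetical paired values (the study used SPSS):

```python
# Pearson correlation for paired decreases (delta-Hcy vs delta-LDL-C) and
# Spearman rank correlation for Hcy vs LDL-C, as described in the text.
from scipy.stats import pearsonr, spearmanr

delta_hcy = [2.1, 3.4, 1.2, 4.0, 2.8]   # hypothetical per-patient decreases
delta_ldl = [0.4, 0.7, 0.2, 0.9, 0.5]   # hypothetical, mmol/L

r, p_pearson = pearsonr(delta_hcy, delta_ldl)
rho, p_spearman = spearmanr(delta_hcy, delta_ldl)
print(f"Pearson r = {r:.2f} (P = {p_pearson:.3f})")
print(f"Spearman rho = {rho:.2f} (P = {p_spearman:.3f})")
```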
Baseline characteristics of the HO and SHO patients and healthy controls
The baseline characteristics of the subjects are listed in Table 1. Age, BMI, and FPG were similar among the 3 groups. The HO group had significantly higher levels of TC, HDL-C, LDL-C, and TG than the SHO and control groups, whereas the difference in plasma lipids between the SHO and control groups was not significant. The Hcy levels were also significantly higher in the HO group than in the other 2 groups (17.93 ± 6.86 μmol/L vs 14.81 ± 4.57 μmol/L vs 13.51 ± 3.75 μmol/L, all P < 0.05) (Fig. 1). In the HO group, as Hcy increased (Hcy > 15 μmol/L), the prevalence of dyslipidemia (LDL-C > 3.36 mmol/L) increased (P < 0.01). Similarly, LDL-C was increased (LDL-C > 3.36 mmol/L) in HHcy (Hcy > 15 μmol/L) patients (Figs. 2 and 3).
The changes in metabolic parameters after L-T4 treatment
The values obtained after treatment with L-T4 are shown in Table 2. After adjusting for sex, BMI, FT4, and FBG, a significant positive correlation was observed between the Hcy and LDL-C levels in the HO group (r = 0.632, P < 0.001) (Fig. 6). However, no significant correlation was observed in the SHO group (r = 0.095, P = 0.32). We found that the decrease in Hcy (ΔHcy) was positively correlated with the decrease in LDL-C (ΔLDL-C, r = 0.412, P < 0.05). The results are shown in Fig. 7.
Discussion
In the present study, we found that the TC, HDL-C, LDL-C, and TG values in the HO group were significantly higher than those in the SHO and control groups. The plasma Hcy levels were also significantly higher in the HO group than in the SHO group and controls. After adjusting for sex, BMI, FT4, and FBG, a significant positive correlation was observed between the Hcy and LDL-C levels in the HO group, and the decrease in Hcy was positively correlated with the decrease in LDL-C.
In patients with HO, increased Hcy levels may result from 2 mechanisms: increased Hcy formation or decreased renal Hcy clearance, due to the direct effects of thyroid hormones on Hcy metabolism in the liver and clearance by the kidney. [17] Many studies have proven that the plasma Hcy level is an independent risk factor for CAD because it induces endothelial injury, oxidative stress, smooth muscle hypertrophy, and oxidation of LDL-C. [18,19] Our previous study also demonstrated that Hcy might act as an atherogenic factor by promoting the production of chemokines, reactive oxygen species, and oxidized LDL-C, thus enhancing the progression of cardiovascular disease. [20] In HO patients, the plasma Hcy levels were 17.93 ± 6.86 μmol/L, and we have previously shown that the coronary flow velocity reserve was impaired when Hcy > 15 μmol/L. [21]

[Table 1 (not reproduced): HO patients had significantly higher TC, HDL-C, LDL-C, TG, Hcy, and ApoB than the SHO and control groups. Age, BMI, TC, LDL-C, HDL-C, FPG, Hcy, and ApoB are expressed as mean ± SD; TG as median (IQR); P < 0.05 was considered statistically significant; footnote symbols mark significance versus the control and SHO groups.]

A population-based prospective cohort study (mean follow-up, 5.3 years) was conducted by Nurk et al, [22] which showed that Hcy was a strong predictor of cardiovascular disease (CVD) in elderly individuals. The study also demonstrated that, at baseline, participants with preexisting CVD had higher mean Hcy values than individuals without CVD. Furthermore, the multiple risk factor-adjusted CVD hospitalization rate ratios across 5 Hcy categories (<9, 9-11.9, 12-14.9, 15-19.9, and ≥20 μmol/L) were 1 (reference level), 1.00, 1.34, 1.67, and 1.94, respectively (P < 0.001). The study by Nakano et al suggested that an elevated plasma Hcy level might promote LDL-C nitration and increased scavenger receptor uptake, providing a molecular mechanism that may contribute to CAD. [7] Elevated LDL-C levels may be partly responsible for the high risk of cardiovascular disease associated with HO. This suggests that the total serum Hcy levels might be correlated with the LDL-C level in patients with HO.
In this study, we provide data showing a positive correlation between Hcy and LDL-C in patients with HO.
In the present study, the serum TC, HDL-C, LDL-C, and TG levels were significantly higher in patients with HO than in the SHO and control groups (Table 1, P < 0.05). In our observations, the subjects in the HO group had higher Hcy and LDL-C levels than the subjects in the SHO and control groups. Our results were consistent with the findings of previous studies, [23][24][25][26] but were not in agreement with those reported by Orzechowska-Pawilojc et al, [27] who observed that the Hcy levels were nonsignificantly higher in patients with HO compared to healthy subjects. In HO patients, we found that the Hcy levels were positively correlated with the LDL-C level after adjustment for sex, BMI, FT4, and FBG. We observed a higher prevalence of dyslipidemia in HHcy patients. These data are consistent with previous studies reporting that the enhanced atherosclerosis in hyperhomocysteinemic patients might be partly attributable to Hcy-related LDL-C atherogenicity. [28] We acknowledge the statistical limitations of the study due to the small sample size. All HO patients should be treated. After the exclusion of 30 patients who were lost to follow-up, 45 of the HO patients received L-T4 treatment. L-T4 treatment significantly reduced the BMI, TC, HDL-C, LDL-C, TG, Hcy, and ApoB in our patients. In accordance with our results, Orzechowska-Pawilojc et al [27] also reported a significant decrease in the Hcy levels following L-T4 treatment in women with HO. Thyroid hormone replacement is a routine and conventional clinical practice for patients with HO and has been shown to ameliorate the lipid profiles in patients with atherosclerosis. [29,30]
(Table 2: Clinical characteristics and Hcy levels before and after levothyroxine (L-T4) treatment.)
In addition, we found that the decreased Hcy levels positively correlated with the decreased LDL-C. The significant improvements in the Hcy and LDL-C levels might be due to the presence of positive mutual interactions. Dyslipidemia, consisting of high levels of total and LDL cholesterol, is a common finding in patients with HO and SHO. [5] Hcy stimulates the production and secretion of cholesterol by hepatic cells; this may contribute to the association between cholesterol and Hcy observed in the present study. [31] Elevated Hcy levels promote the synthesis of several proinflammatory cytokines in the arterial walls and circulating cells. Our previous studies indicated that coronary artery endothelial function might be impaired in essential hypertensive patients with HHcy; furthermore, chronic HHcy might contribute to CAD by inducing dysfunction of the coronary artery endothelium. [32] The uncoupling of endothelial nitric oxide synthase (eNOS) induced by HHcy might at least partly explain this adverse effect. [33] Our previous studies also found that the LDL-C level inversely correlated with the coronary flow velocity reserve in patients with Type 2 diabetes. [34] In agreement with the observations made by Engin et al, [33] HO was associated both with systemic oxidative stress and with specific morphological changes in endothelial cells, which are believed to represent very early stages of atherosclerosis.
Our present results showed a positive correlation between the LDL-C and Hcy levels in HO patients, which was not significant in the SHO patients and controls. This is consistent with a previous study by Saleh [35] that documented a strong relationship between the serum Hcy levels and lipid concentrations, especially cholesterol concentrations, in hypothyroid rats. Conversely, several other investigations demonstrated that Hcy was nonsignificantly correlated with the TC level in overt hypothyroid patients (r = 0.288, P = 0.12). [36] However, HHcy and elevated LDL-C were both associated with endothelial dysfunction and cardiovascular disease. In our study, elevated plasma Hcy levels were associated with increased LDL-C, and the LDL-C level was also increased in HHcy patients. The discovery of the aggregation of LDL-C by Hcy thiolactone and the production of foam cells from cultured human macrophages helped to clarify the connection between Hcy and the cholesterol of LDL-C. [37] In the view of Ravnskov, cholesterol participates in atherogenesis by binding to the lipid constituents of microorganisms, forming aggregates. Hcy exacerbates the trapping of LDL-C aggregates in several ways: by causing endothelial dysfunction that narrows arterial lumens, by facilitating LDL-C aggregation through the formation of homocysteinyl groups attached to LDL-C, and by promoting antibody formation against homocysteinylated LDL-C and oxidized LDL-C (oxLDL-C), thereby impeding the passage of LDL aggregates. [38] HHcy may contribute to cardiovascular risk by increasing the LDL-C level and promoting LDL-C recruitment into atherosclerotic plaques. In addition to reducing the plasma levels of Hcy, L-T4 treatment exerts beneficial effects on patients with HO by improving dyslipidemia, such as by decreasing the LDL-C level. In contrast, studies of subjects with cerebrovascular disease in the Vitamin Intervention for Stroke Prevention (VISP) trial, [39] cardiovascular disease in the Norwegian Vitamin Trial (NORVIT) and Heart Outcomes Prevention Evaluation (HOPE-2) trials, [40,41] and vascular disease from chronic renal failure in the HOST trial [42] showed no reduction in stroke or heart attack, or improvement in mortality, from B vitamin intervention.
Conclusion
Our results suggest that an increased Hcy level is positively correlated with the LDL-C level in HO patients. A potentially harmful correlation may exist between Hcy and LDL-C under the condition of HO. In addition to reducing the plasma levels of Hcy, L-T4 treatment exerts beneficial effects on patients with HO by improving dyslipidemia, such as by decreasing the LDL-C level.
(Figure 6: Correlation between the Hcy and LDL-C in the HO group. In 75 HO patients, after adjustment for sex, BMI, FT4, and FBG, the Hcy was correlated with the LDL-C (r = 0.632, P < 0.001).) | 2018-04-03T03:47:20.158Z | 2016-07-01T00:00:00.000 | {
"year": 2016,
"sha1": "4436d1668f2e48aea475d898feff37bb585adc35",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1097/md.0000000000004291",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4436d1668f2e48aea475d898feff37bb585adc35",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119137770 | pes2o/s2orc | v3-fos-license | Einstein manifolds with torsion and nonmetricity
Manifolds endowed with torsion and nonmetricity are interesting both from the physical and the mathematical points of view. In this paper, we generalize some results presented in the literature. We study Einstein manifolds (i.e., manifolds whose symmetrized Ricci tensor is proportional to the metric) in d dimensions with nonvanishing torsion that has both a trace and a traceless part, and analyze invariance under extended conformal transformations of the corresponding field equations. Then, we compare our results to the case of Einstein manifolds with zero torsion and nonvanishing nonmetricity, where the latter is given in terms of the Weyl vector (Einstein-Weyl spaces). We find that the trace part of the torsion can alternatively be interpreted as the trace part of the nonmetricity. The analysis is subsequently extended to Einstein spaces with both torsion and nonmetricity, where we also discuss the general setting in which the nonmetricity tensor has both a trace and a traceless part. Moreover, we consider and investigate actions involving scalar curvatures obtained from torsionful or nonmetric connections, analyzing their relations with other gravitational theories that appeared previously in the literature. In particular, we show that the Einstein-Cartan action and the scale invariant gravity (also known as conformal gravity) action describe the same dynamics. Then, we consider the Einstein-Hilbert action coupled to a three-form field strength and show that its equations of motion imply that the manifold is Einstein with totally antisymmetric torsion.
Introduction
Much work has been carried out within the framework of Weyl geometry (and especially of integrable Weyl geometry), in particular concerning scale invariant general relativity and higher symmetry approaches to gravity involving conformal invariance [25]. Still in Weyl's perspective, conformal (higher curvature) gravity theories were constructed and studied in detail in [30][31][32]. Furthermore, in [33] an observational constraint on the non-integrability of lengths in the original Weyl theory was placed for the first time.
A Weyl manifold is a conformal manifold equipped with a torsionless but nonmetric connection, called Weyl connection, preserving the conformal structure. Then, it is said to be Einstein-Weyl if the symmetric, trace-free part of the Ricci tensor of this connection vanishes (and the symmetric part of the Ricci tensor of the Weyl connection is proportional to the metric). Thus, Einstein-Weyl manifolds represent the analog of Einstein spaces in Weyl geometry and are less trivial than the latter, which have necessarily constant curvature in three dimensions.
Einstein-Weyl spaces were studied in [34][35][36][37][38][39][40][41][42][43][44][45][46], and they are also relevant in the context of (fake) supersymmetric supergravity solutions [47][48][49][50][51][52]. Einstein-Weyl geometry is particularly rich in three dimensions [34,35], where it has an equivalent formulation in twistor theory [53], which provides a tool for constructing selfdual four-dimensional geometries. Selfdual conformal four-manifolds play a central role in low-dimensional differential geometry, and a key tool in this context is provided by the so-called Jones-Tod correspondence [54], in which the reduction of the self-duality equation by a conformal vector field is given by the Einstein-Weyl equation together with the linear equation for an abelian monopole (in other words, the Jones-Tod correspondence is a correspondence between a self-dual space with symmetry and an Einstein-Weyl space with a monopole). Einstein-Weyl structures are also related to certain integrable systems, like the SU(∞) Toda field equations [55] or the dispersionless Kadomtsev-Petviashvili equation [56].
On the other hand, as already mentioned, another generalization of Riemannian geometry is given by the introduction of a nonvanishing torsion, which is the case for the Einstein-Cartan theory [57][58][59][60][61], where the geometrical structure of the manifold is modified by allowing for an antisymmetric part of the affine connection (see also [62] for a recent review on torsional constructions and metric affine gauge theories). Cartan suggested that spacetime torsion is related to the intrinsic angular momentum, before the concept of spin was introduced. Cartan's theory was then reinterpreted as a theory of gravitation with spin and torsion [63][64][65]. Subsequently, the introduction of a non-vanishing torsion has been widely analyzed in general relativity and in the setting of teleparallel gravities [66][67][68][69][70][71][72], as well as in other contexts. In particular, in [73,74] the torsion tensor was related to the Kalb-Ramond field [75]. Furthermore, the relation between torsion and conformal symmetry was studied by several authors, and it turned out that torsion plays an important role in conformal invariance of the action and behaves like an effective gauge field [76,77]. Subsequently it was shown that in the nonminimally coupled metric-scalar-torsion theory, for some special choice of the action, torsion acts as a compensating field and the full theory is conformally equivalent to general relativity at a classical level [78,79]. More recently, in [80] the metric-torsional conformal curvature of four-dimensional spacetime was constructed, and in [81] different types of torsion were investigated, together with their effect on the dynamics and conformal properties of fields. Conformal invariance was also analyzed in generalizations of Einstein-Cartan spaces including nonmetricity [82][83][84][85], and in [86] an exhaustive classification of metric affine theories according to their scale symmetries was presented (see also [87]). Finally, in a cosmological context, it was proposed in [88,89] that a nonvanishing torsion can serve as an origin for dark energy. Let us also mention, here, that a generic theory (without matter) involving terms quadratic in torsion and nonmetricity will be classically equivalent at low energy to Einstein's theory, as discussed in [90] and references therein. From a mathematical point of view, Einstein manifolds with skew-symmetric torsion (i.e., totally antisymmetric torsion) were analyzed in [91,92].
Motivated by the fact that nonmetric and torsionful connections are interesting both from the physical and the mathematical point of view, in this paper we generalize some results presented previously in the literature. In particular, we study Einstein manifolds in d dimensions with nonvanishing torsion that has both a trace and a traceless part, and we analyze invariance under extended conformal transformations (see Refs. [78,82], where these transformations are defined for metric affine spaces) in this context. Then, we compare our results to the case of Einstein spaces with zero torsion and nonvanishing nonmetricity, where the latter is given in terms of the Weyl vector. We find that the trace part of the torsion can alternatively be interpreted as the trace part of the nonmetricity. Subsequently, we extend our analysis to the case of Einstein manifolds with both torsion and nonmetricity (Einstein-Cartan-Weyl spaces), where we allow for both a trace and a traceless part of the nonmetricity tensor. Finally, we construct and investigate actions involving scalar curvatures obtained from torsionful or nonmetric connections, and analyze their relations with other gravitational theories known in the literature. In particular, we consider the Einstein-Cartan action and discuss its relationship with scale invariant gravity (also known as conformal gravity, which is invariant under Weyl transformations) [93][94][95][96][97][98][99][100][101][102], showing that they describe the same dynamics. Then, we study the Einstein-Hilbert action coupled to a three-form H_{μνρ} and show that its equations of motion imply that the manifold is Einstein with skew-symmetric torsion. Furthermore, it turns out that the equations of motion of Einstein gravity coupled to a three-form may also be retrieved from a constrained action that contains the scalar curvature of a connection with torsion. Let us specify that in this work we will focus on the vacuum, without considering matter.
The remainder of this paper is organized as follows: In section 2, we consider Einstein spaces with torsion that has both a trace and a traceless part. In particular, we find the field equations satisfied by an Einstein-Cartan space. Then, the invariance under extended conformal (Weyl) transformations of the latter is studied and the results are compared to the case of Einstein-Weyl manifolds, which have nonvanishing nonmetricity but zero torsion. In section 3, we extend the analysis to Einstein-Cartan-Weyl manifolds, and add thereby also a traceless part to the nonmetricity tensor. In section 4, the Weyl invariant Einstein-Cartan action is studied and shown to be equivalent to scale invariant gravity (i.e., conformal gravity), which involves the presence of a scalar field φ. Subsequently, in section 5 we consider the Einstein-Hilbert action coupled to a three-form, and show that the resulting field equations imply that the space is Einstein with torsion, where the latter is proportional to H µνρ . We conclude our work with some comments and possible future developments. In the appendix we collect some technical details.
Einstein manifolds with torsion
We first consider a d-dimensional Einstein manifold with metric g_{μν} and nonvanishing torsion (i.e., a so-called Einstein-Cartan manifold)². The connection Γ^λ_{μν} can be decomposed as in eq. (2.1), where Γ̃^λ_{μν} are the connection coefficients of the Levi-Civita connection (i.e., the Christoffel symbols) and N^λ_{μν} is called the distortion. Here, the latter can be written as in eq. (2.2)³, where T^λ_{μν} = e^λ_a T^a_{μν} is the torsion⁴, antisymmetric in the last two indices (eq. (2.3)). Let us also introduce the contorsion (or contortion), antisymmetric in the first two indices (eq. (2.4)). Observe that the distortion (2.2) can then be written in terms of the contorsion as in eq. (2.5). In [91,92], Einstein manifolds with skew-symmetric torsion were analyzed. Below, we shall consider a general decomposition of the torsion tensor into a traceless and a trace part, eq. (2.6). In particular, we have T̃^ν_{μν} = 0 and T_μ ≡ T^ν_{μν}. Notice that 2N^λ_{[μν]} = T^λ_{μν}. The distortion (2.5) then becomes eq. (2.7), and thus (2.1) reads as in eq. (2.8). The explicit expression for the Riemann tensor R^λ_{ρμν} = ∂_μ Γ^λ_{νρ} − ∂_ν Γ^λ_{μρ} + Γ^λ_{μσ} Γ^σ_{νρ} − Γ^λ_{νσ} Γ^σ_{μρ} of the Einstein-Cartan connection Γ^λ_{μν} is given in the appendix (see eq. (A.1)). There, as well as in the following, ∇ denotes the covariant derivative of the Levi-Civita connection. The corresponding Ricci tensor R_{ρν} = R^μ_{ρμν} is given by (A.2); in particular, one gets eq. (2.9). Note that if we set the traceless part of the torsion to zero, T̃^λ_{μν} = 0, we are left with eq. (2.10); in general, one thus has eq. (2.11). One can also construct another Ricci tensor by contracting the second and the third index of the Riemann tensor. However, the Ricci tensor obtained in this way coincides with (A.2), since R_{λρμν} = −R_{ρλμν} is still valid (while it fails to hold for nonmetric connections). The Ricci scalar reads as in eq. (2.13). Let us now define an Einstein space with torsion by eq. (2.14), R_{(ρν)} = λ g_{ρν}, for some function λ. Using (A.2), this becomes eq. (2.15), and thus eq. (2.16). Hence, in terms of Riemannian data, (2.14) becomes eq. (2.18), which is a set of nonlinear partial differential equations characterizing an Einstein manifold with torsion, henceforth termed Einstein-Cartan space.
[Footnote 3: As we will see in sec. 3, in the case of torsionful, nonmetric connections the distortion is generally defined including a nonmetricity contribution, where Q_{λμν} is the nonmetricity tensor (we will introduce and define it later). In the present section we first restrict ourselves to the case of vanishing nonmetricity, namely we consider a metric, torsionful connection. The nonmetric, torsion-free case (where N_{λμν} = ½(Q_{λμν} + Q_{λνμ} − Q_{μλν})) will be discussed at the end of the current section, when we explore Einstein-Weyl spaces.]
[Footnote 4: e^λ_a denotes the inverse vielbein, and early latin indices a, b, ... refer to the tangent space. The torsion 2-form is defined through the first Cartan structure equation, T^a = de^a + ω^a_b ∧ e^b.]
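Since the displayed equations of this section were lost in extraction, the following expressions sketch its basic structures in one common set of conventions; the index placements and signs are our assumptions and need not coincide with eqs. (2.1)-(2.18) of the original:

\[
\Gamma^{\lambda}{}_{\mu\nu} = \tilde{\Gamma}^{\lambda}{}_{\mu\nu} + N^{\lambda}{}_{\mu\nu}, \qquad
N_{\lambda\mu\nu} = \tfrac{1}{2}\left(T_{\lambda\mu\nu} - T_{\mu\lambda\nu} - T_{\nu\lambda\mu}\right),
\]
\[
T^{\lambda}{}_{\mu\nu} = \tilde{T}^{\lambda}{}_{\mu\nu} + \frac{1}{d-1}\left(\delta^{\lambda}_{\nu}\, T_{\mu} - \delta^{\lambda}_{\mu}\, T_{\nu}\right), \qquad
R_{(\rho\nu)} = \lambda\, g_{\rho\nu} .
\]

Here the trace/traceless split is fixed by the stated properties T̃^ν_{μν} = 0 and T_μ = T^ν_{μν}, and the last relation is the Einstein condition of eq. (2.14) (the symmetrized Ricci tensor proportional to the metric).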
Extended conformal invariance in Einstein-Cartan manifolds
We will now show that (2.14) is invariant under the extended conformal transformations discussed in [78]. Thus, let us consider the extended conformal (Weyl) transformations (2.19), where ω = ω(x) is an arbitrary scalar field. Therefore, we have the transformation rules (2.20) for the torsion. Moreover, (2.19) leads to the transformation (2.21) for the connection, which is called, specifically, a special projective transformation of the connection (see, for instance, Refs. [86,87]), also known as a λ transformation. Let us observe that the combination of the conformal metric transformation in (2.19) plus the special projective transformation (2.21) of the affine connection is called a frame rescaling (see Refs. [86,87], where frame rescalings have been considered in metric affine spaces, also including Einstein-Cartan ones). For the Riemann tensor, the Ricci tensor, and the scalar curvature, we get respectively the transformation laws (2.22). Now, (2.14) implies R = λd, so that (2.14) is equivalent to its trace-free part, eq. (2.23).
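A plausible explicit form of these transformations, consistent with the identification T_μ ↔ (d − 1)Θ_μ made in the next subsection (the factors are our assumption), is

\[
g_{\mu\nu} \to e^{2\omega}\, g_{\mu\nu}, \qquad
T_{\mu} \to T_{\mu} + (d-1)\, \partial_{\mu}\omega, \qquad
\tilde{T}^{\lambda}{}_{\mu\nu} \to \tilde{T}^{\lambda}{}_{\mu\nu},
\]

under which the trace-free combination \( R_{(\mu\nu)} - \tfrac{R}{d}\, g_{\mu\nu} = 0 \) is form-invariant.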
Comparison with Einstein-Weyl spaces
A Weyl structure on a manifold Σ consists of a conformal structure [g] = {f g | f : Σ → R⁺} and a torsion-free connection ∇̂ fulfilling the compatibility condition (2.24) for some one-form Θ on Σ (the Weyl vector). The condition (2.24) is invariant under the gauge transformation (2.25). One can then define the nonmetricity tensor, which reads as in eq. (2.26). In this case the distortion is given by eq. (2.27). A Weyl structure is said to be Einstein-Weyl [20] if the symmetrized Ricci tensor W_{ρν} of ∇̂ is proportional to some metric g ∈ [g], eq. (2.28), where W is the scalar curvature of the Weyl connection ∇̂, given by eq. (2.29)⁵. The condition (2.28) can be rewritten in terms of Riemannian data as eq. (2.30). The scope of this subsection is to compare the field equations for Einstein manifolds with torsion, (2.18), with the Einstein-Weyl equations (2.30). To this end, let us define the one-form A_μ as in eq. (2.31), such that, under the first transformation in (2.20), we have the rule (2.32). Using (2.31) in (2.18), one gets eq. (2.33). Thus, for T̃^λ_{μν} = 0, (2.33) exactly coincides with (2.30) if we identify A_μ with Θ_μ, i.e., T_μ → (d − 1)Θ_μ. This is actually not surprising, since for T̃^λ_{μν} = 0 the torsion two-form is given by eq. (2.34). Then, the first Cartan structure equation gives eq. (2.35). We can then define a new connection ω̂^{ab} as in eq. (2.36). Finally, note that a duality between torsion and nonmetricity has also been discussed in [103] in a slightly different context.
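For reference, the Einstein-Weyl structures of this subsection can be written out explicitly; the compatibility condition below is the one quoted in footnote 6 of section 3, while the connection and the Einstein-Weyl condition are given in one standard convention that may differ from eqs. (2.24)-(2.30) by signs:

\[
\hat{\nabla}_{\lambda}\, g_{\mu\nu} = 2\, \Theta_{\lambda}\, g_{\mu\nu}, \qquad
g_{\mu\nu} \to e^{2\omega} g_{\mu\nu}, \quad \Theta_{\mu} \to \Theta_{\mu} + \partial_{\mu}\omega,
\]
\[
\hat{\Gamma}^{\lambda}{}_{\mu\nu} = \tilde{\Gamma}^{\lambda}{}_{\mu\nu} - \Theta_{\mu}\, \delta^{\lambda}_{\nu} - \Theta_{\nu}\, \delta^{\lambda}_{\mu} + \Theta^{\lambda}\, g_{\mu\nu}, \qquad
W_{(\mu\nu)} = \Lambda\, g_{\mu\nu}
\]

for some function Λ.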
Einstein manifolds with torsion and nonmetricity
Let us now consider Einstein spaces with both torsion and nonmetricity (we will call these Einstein-Cartan-Weyl manifolds), and study the Weyl invariance of the corresponding field equations.
With respect to section 2, we will in addition allow for a nonmetricity tensor of the form (2.26), where ∇̂ now also has torsion. We thus consider only the trace part of the nonmetricity; the consequences of adding a traceless part will be analyzed at the end of this section. The connection Γ̂^λ_{μν} of the Einstein-Cartan-Weyl manifold is given by eq. (3.1), where the Γ̃^λ_{μν} are the Christoffel symbols and the distortion N^λ_{μν} reads as in eq. (3.2); that is, in the present context, eq. (3.3). The Ricci tensor of ∇̂, that is R̂_{ρν} = R̂^μ_{ρμν}, is given in the appendix (see eq. (A.3)). Note that one can also construct another Ricci tensor, R_{ρν} ≡ R̂^μ_{μρν} (commonly referred to as the homothetic curvature), since for nonmetric connections the Riemann tensor is not necessarily antisymmetric in the first two indices. In our case we have eq. (3.4), and thus the Ricci scalar associated with the homothetic curvature is identically zero. On the other hand, the nonvanishing Ricci scalar is given by eq. (3.5). Observe that, if we define Ť_μ as in eq. (3.6), the Ricci scalar (3.5) becomes eq. (3.7), which corresponds to the Ricci scalar of a metric connection with torsion (cf. eq. (2.13)), whose trace part is given by Ť_μ.
We define an Einstein-Cartan-Weyl space by eq. (3.8), R̂_{(ρν)} = λ g_{ρν}, for some function λ. Using (A.3), this can be rewritten in the equivalent form (3.9), which is a system of nonlinear partial differential equations characterizing an Einstein-Cartan-Weyl manifold.
Extended conformal invariance of the Einstein-Cartan-Weyl equations
Let us now discuss the extended conformal invariance of (3.8). In an affine manifold such as an Einstein-Cartan-Weyl one, the most general extended conformal (Weyl) transformations involving an arbitrary scalar field ω = ω(x) which leave the curvature tensor invariant are given by eq. (3.10) (see [82]), where ξ denotes an arbitrary parameter that we are free to include [82,85]⁶. In particular, for the one-forms Θ and T and for T̃^λ_{μν} we find the transformation rules (3.11), and the connection Γ̂ transforms according to eq. (3.12). This ensures the invariance of the curvature tensor due to its special projective invariance (see, for instance, Refs. [86,87]). Thus, for the Riemann tensor, the Ricci tensor, and the scalar curvature one obtains respectively the laws (3.13). Eq. (3.8) implies R̂ = λd, so that (3.8) is equivalent to its trace-free part, eq. (3.14), which is clearly invariant under the extended conformal transformations written above. Let us finally make some comments on two particular cases, namely ξ = 1 and ξ = 0.
[Footnote 6: Note that (3.10) implies that ∇̂_μ g_{νρ} = 2Θ_μ g_{νρ} transforms covariantly.]
• For ξ = 1 one has the transformation (3.15). Observe that (3.15) corresponds to the transformation (2.32), for A_μ = Θ_μ, discussed in section 2 in the context of a Weyl structure (that is, with nonmetricity and zero torsion). Moreover, note that this is the only case in which the connection is also invariant, Γ̂^ρ_{μν} → Γ̂^ρ_{μν}. In fact, setting ξ = 1 in (3.10) and (3.11) leads to a conformal transformation of the metric in an affine space, namely a transformation under which the metric tensor picks up a conformal factor e^{2ω} while the affine connection is left unchanged (see Refs. [86,87]).
• For ξ = 0 we get the extended conformal transformation discussed in [78] in the context of a torsion theory, which leads to a special projective transformation for the connection. In particular, in this case we have the rule (3.16), which reproduces exactly the transformation in (2.20) for T_μ discussed in section 2 for manifolds with torsion and vanishing nonmetricity, together with the special projective transformation (3.17) for the connection. On the other hand, let us observe that the combination of the conformal metric transformation in (3.10) plus the special projective transformation (3.17) is called, according to [86,87], a frame rescaling.
We can conclude that there are two unique transformations which single out torsion or nonmetricity. This is in agreement with [82]. Note that the same results could have been obtained by considering (3.7), together with the definition (3.6), that is by reabsorbing the nonmetricity and exploiting the transformations of sec. 2 for an Einstein-Cartan manifold with torsion and vanishing nonmetricity.
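A one-parameter family of transformations that reproduces both limiting cases just discussed is the following reconstruction (our assumption, since the displayed form of (3.10)-(3.11) was lost; factors and signs may differ from the original):

\[
g_{\mu\nu} \to e^{2\omega}\, g_{\mu\nu}, \qquad
\Theta_{\mu} \to \Theta_{\mu} + \xi\, \partial_{\mu}\omega, \qquad
T_{\mu} \to T_{\mu} + (d-1)(1-\xi)\, \partial_{\mu}\omega .
\]

For ξ = 1 only the Weyl vector shifts (with the connection invariant), while for ξ = 0 only the torsion trace shifts, matching (2.20).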
Adding a traceless part to the nonmetricity tensor
In the following we extend the above analysis to include a traceless part of the nonmetricity as well. Interestingly, in the case where the latter is totally symmetric, it can be viewed as representing a massless spin-3 field [104,105].
Thus, we decompose the nonmetricity into a trace and a traceless part as in eq. (3.18), where Q̃^ν_{μν} = 0. Using (2.6) and (3.18) in (3.2), the distortion becomes eq. (3.19), where we defined the so-called disformation (also known as the deflection tensor) in eq. (3.20), which is symmetric in the last two indices. K̃_{νλμ} and M̃_{νλμ} are respectively the traceless parts of K_{νλμ} and M_{νλμ}, eq. (3.21). From (3.1) one obtains for the connection the expression (3.22). The explicit expression for the Ricci tensor R̂_{ρν} of ∇̂ is given in the appendix (see (A.4)), and it contains extra contributions from the traceless tensor Q̃_{λμν}. The homothetic curvature is still given by (3.4), while the Ricci scalar is given by eq. (3.23). Observe that, by defining Ť_μ and Ť_{μνρ} as in eq. (3.24), where Ť^ν_{μν} = 0, and using the fact that the symmetries of T̃_{μνρ} and Q̃_{μνρ} imply the relations (3.25), one can show that the Ricci scalar (3.23) can be written as eq. (3.26), which corresponds to the Ricci scalar of a metric connection with nonvanishing torsion, whose trace and traceless parts are now respectively given by Ť_μ and Ť_{μνρ}. This is analogous to the case in which one does not include a traceless contribution to the nonmetricity, cf. eq. (3.7). As before, we define an Einstein-Cartan-Weyl space by eq. (3.8), which becomes eq. (3.27) in the present context, representing a system of nonlinear partial differential equations characterizing an Einstein-Cartan-Weyl manifold with the most general form of torsion and nonmetricity. Finally, we can consider the transformations (3.10); in particular, we have the rules (3.28). For the curvature tensors one still has the transformation laws given in (3.13), so that the Einstein-Cartan-Weyl equations (3.8) are again invariant under extended conformal transformations for arbitrary parameter ξ.
Einstein-Cartan action and scale invariant gravity
Let us consider the action (4.1), where R is the Ricci scalar (2.13) of a torsionful but metric connection, φ denotes a scalar field, and κ is a constant. Along the same lines as [85], (4.1) can be rewritten as eq. (4.2), with R̃ the scalar curvature of the Levi-Civita connection. One easily shows that (4.2) is invariant under the transformations (4.3). Using the traceless part of the contorsion defined in (3.21), the action (4.2) becomes (4.4), and its variation w.r.t. T_μ and K̃_{νρμ} yields respectively the equations (4.5). Notice that T_μ can be eliminated by an extended conformal transformation and is thus pure gauge. Using the definition (3.21) and the fact that the traceless part of the torsion is antisymmetric in the last two indices, we get T̃_{μνρ} = 2K̃_{μ[νρ]} = 0, and therefore also K̃_{μνρ} = 0, in agreement with [79,85]. Varying the action (4.4) w.r.t. g_{μν} and φ leads to eqs. (4.6a) and (4.6b), where we have used the expression for T_ν in (4.5) as well as K̃_{μνρ} = 0. Observe that the trace of (4.6a) implies (4.6b), which can be understood as a consequence of φ being pure gauge. Let us now consider the action (4.7), which is called scale invariant gravity (also known as conformal gravity). It turns out that the equations of motion following from (4.7) are precisely (4.6a) and (4.6b), obtained from (4.4) after having used the expressions for the torsion. The actions (4.1) and (4.7) thus describe the same dynamics.
Notice also that, plugging T_μ (cf. (4.5)) and K̃_{μνρ} = 0 into (4.4), one gets, up to a surface term⁷, the conformal gravity action (4.7) (see also [85]). One can also show that the action (4.1) implies that the spacetime is Einstein with torsion, which is a completely new result. To see this, observe that eq. (4.6a) can be rewritten as eq. (4.8). Using also (4.6b), this can be cast into the form (4.9). On the other hand, consider the system (2.18) characterizing an Einstein-Cartan manifold, and use the result (4.5) for the trace part of the torsion, as well as T̃_{μνρ} = 0. Then (2.18) boils down precisely to (4.9). Let us also observe that, as already mentioned in [85], conformal (Weyl) invariance allows one to rescale φ → e^{(2−d)ω/2} φ. One can use this freedom to gauge fix φ = 1/(4√(πG)), where G is Newton's constant. Then the action (4.7) becomes eq. (4.10), where we chose κ = 2Λ(16πG)^{2/(d−2)}. The Einstein-Hilbert action with cosmological constant can thus be viewed as a gauge-fixed version of the action (4.7). Finally, let us recall that the trace part of the torsion can also be interpreted as the trace part of the nonmetricity (cf. sec. 2.2). If we set the traceless part of the torsion to zero, this leads to the action (4.11), which is invariant under the transformations (4.12). The variation of (4.11) w.r.t. Θ_μ yields eq. (4.13). Again, one can easily show that the actions (4.11) and (4.7) describe the same dynamics. (4.11) implies that the spacetime is Einstein-Weyl, where the Weyl vector is given by (4.13), and is thus pure gauge. Notice in this context that there is no known action principle that leads to the Einstein-Weyl equations with non-exact Weyl vector.
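The two actions compared in this section plausibly take the following forms; the kinetic-term normalization in the second is our assumption, fixed by requiring Weyl invariance with φ of conformal weight (2 − d)/2, so conventions may differ from the original (4.1) and (4.7):

\[
S_{(4.1)} = \int d^{d}x \sqrt{|g|}\, \left(\phi^{2} R - \kappa\, \phi^{\frac{2d}{d-2}}\right),
\]
\[
S_{(4.7)} = \int d^{d}x \sqrt{|g|}\, \left(\phi^{2} \tilde{R} + \frac{4(d-1)}{d-2}\, \partial_{\mu}\phi\, \partial^{\mu}\phi - \kappa\, \phi^{\frac{2d}{d-2}}\right),
\]

where R is the torsionful Ricci scalar (2.13) and R̃ the Levi-Civita one.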
Einstein-Hilbert action coupled to a 3-form as Einstein-Cartan gravity
The Einstein-Hilbert action coupled to a 3-form field strength reads as in eq. (5.1), where H_{μνρ} is given in terms of a gauge potential B_{μν} by eq. (5.2). The variation of (5.1) w.r.t. B_{μν} leads to eq. (5.3). On the other hand, consider the system (2.18) satisfied by an Einstein manifold with torsion. Assume that T_μ = 0 and take T̃_{μνρ} to be completely antisymmetric. Then (2.18) boils down to eq. (5.5). We would like to compare this with the Einstein equation (5.4) following from (5.1). To this end, take the trace of (5.4), which leads to eq. (5.6). Now subtract its trace part from (5.4) to obtain eq. (5.7), which coincides precisely with (5.5) if we identify H_{μνρ} = T̃_{μνρ}. The equations of motion following from (5.1) can thus be interpreted as implying that the spacetime is Einstein with skew-symmetric torsion H_{μνρ} satisfying (5.3). Notice however that the equations (5.4) are more restrictive than (5.5), since they contain in addition the trace part (5.6), while (5.5) is traceless. This is somehow reminiscent of hyper Cauchy-Riemann (hyper-CR, or Gauduchon-Tod) spaces [106], where on top of the (trace-free) Einstein-Weyl equations there is a constraint on the scalar curvature. Quite remarkably, the equations (5.3), (5.4) can also be retrieved from the constrained action (5.8), where R denotes the scalar curvature of a torsionful but metric connection (cf. (2.13)), λ_{μνρ} is a Lagrange multiplier, and B_{μν} is antisymmetric. The variation of (5.8) w.r.t. T_μ, B_{μν}, λ_{μνρ}, T̃_{μνρ}, and g_{μν} gives respectively the equations (5.9)-(5.13), among them T_μ = 0 and ∇_μ λ^{[μνρ]} = 0 (eq. (5.9)), where we already used T_μ = 0 in (5.12). Eq. (5.10) implies that the traceless part of the torsion is completely antisymmetric, and thus (5.11) reduces to eq. (5.13). Plugging this into the last equation of (5.9) leads to ∇_μ T̃^{μνρ} = 0 (eq. (5.14)). Finally, using (5.10) in (5.14) and (5.12), one gets precisely (5.3) and (5.4). The actions S₁ and S₂ therefore describe the same dynamics.
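With a conventional normalization of the three-form kinetic term (our assumption), the action and field equations discussed here read

\[
S_{1} = \int d^{d}x \sqrt{|g|}\, \left(\tilde{R} - \frac{1}{12}\, H_{\mu\nu\rho} H^{\mu\nu\rho}\right), \qquad
H_{\mu\nu\rho} = 3\, \partial_{[\mu} B_{\nu\rho]},
\]
\[
\nabla_{\mu} H^{\mu\nu\rho} = 0,
\]

the last being the B-field equation (5.3), which turns into ∇_μ T̃^{μνρ} = 0 under the identification H_{μνρ} = T̃_{μνρ}.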
Discussion
Motivated by the interest in connections with torsion and nonmetricity both from the physical and the mathematical point of view, we first generalized here some results that appeared previously in the literature. In particular, we considered Einstein spaces with nonvanishing torsion that has both a trace and a traceless part (Einstein-Cartan manifolds), and showed that the resulting field equations are invariant under extended conformal transformations. We then compared our results to Einstein manifolds with zero torsion but nonvanishing nonmetricity, where the latter is given in terms of the Weyl vector Θ µ (Einstein-Weyl spaces). We saw that, if the traceless part of the torsion is set to zero, then the system of partial differential equations characterizing Einstein-Cartan spaces exactly coincides with the Einstein-Weyl equations if the torsion trace T µ is replaced by (d − 1)Θ µ . Subsequently, we extended our analysis to the case of Einstein manifolds with both torsion and nonmetricity (Einstein-Cartan-Weyl spaces), allowing for both a trace and a traceless part of the nonmetricity tensor.
Moreover, we considered actions involving scalar curvatures obtained from torsionful or nonmetric connections, and investigated their relations with other gravitational theories, obtaining completely new results in this context. In particular, we analyzed a conformally (Weyl) invariant action with torsion and its relation with scale invariant gravity, which involves a scalar φ, and found that they reproduce the same dynamics. Furthermore, we have shown that the action (4.1) implies that the spacetime is Einstein with torsion. Then, the Einstein-Hilbert action coupled to a three-form field strength H µνρ was considered, and it was shown that its equations of motion imply that the manifold is Einstein with skew-symmetric torsion. Furthermore, it turned out that the equations of motion of Einstein gravity coupled to a three-form may also be retrieved from a constrained action that contains the scalar curvature of a connection with torsion. Let us stress that in this paper we concentrated on the vacuum, without considering the presence of matter.
Among the solutions to Einstein's field equations, Einstein spaces are of particular relevance in physics, think for instance of the Kerr-(A)dS solution or of string compactifications on e.g. Sasaki-Einstein manifolds. Since nature could accommodate for torsion and nonmetricity, it seems reasonable to generalize the concept of Einstein spaces to torsionful and nonmetric connections.
The manifolds analyzed in this paper may also have applications in the classification and physical study of (fake) supersymmetric supergravity solutions, in the same way as Einstein-Weyl manifolds provide the base space for fake supersymmetric solutions in de Sitter supergravity [47][48][49][50][51][52]. From the physical point of view, this analysis is particularly relevant in higher dimensions, since, in d > 4, it is highly nontrivial to determine whether a given near-horizon geometry can be extended to a full black hole solution (due to the fact that the strong uniqueness theorems that hold in four dimensions [107][108][109][110][111][112] break down, and there exist different black holes with the same asymptotic charges and different black hole solutions with the same near-horizon geometry). Progress in classifying near-horizon geometries can help to face this problem, as was proven in [51], where the authors, after having shown that a class of solutions of minimal supergravity in five dimensions is given by lifts of three-dimensional Einstein-Weyl structures of hyper-CR type, considered the task of reconstructing all supersymmetric solutions from such near-horizon geometry, demonstrating that the moduli space of infinitesimal supersymmetric transverse deformations of the near-horizon data is finite-dimensional if the spatial section of the horizon is compact.
Still in this context, a new result has recently been obtained in [113], where it has been shown that the horizon geometry for supersymmetric black hole solutions of minimal five-dimensional gauged supergravity is that of a particular Einstein-Cartan-Weyl structure in three dimensions, involving the trace and traceless parts of both torsion and nonmetricity, and obeying some precise constraint; in the limit of zero cosmological constant, the set of nonlinear partial differential equations characterizing this Einstein-Cartan-Weyl structure reduces to that of a hyper-CR Einstein-Weyl structure in the Gauduchon gauge, which was shown in [51] to be the horizon geometry in the ungauged BPS (Bogomol'nyi-Prasad-Sommerfield) case.
The analysis of this paper might also be extended in other directions. In particular, it would be interesting to generalize the construction of [88] concerning the Chern-Simons formulation of three-dimensional gravity involving torsion and nonmetricity, and the recent results presented in [114] in the context of double field theory. One could also investigate possible generalizations of [104,105].
On the other hand, a future development of our work may consist in possible generalizations of the Jones-Tod correspondence [54] between selfdual conformal four-manifolds with a conformal vector field and abelian monopoles on Einstein-Weyl spaces in three dimensions. In particular, one could ask whether Einstein-Cartan-Weyl manifolds can arise in a similar way by symmetry reduction from higher dimensions.
Finally, a further direction for future research would be a geometrical investigation of the results on unconventional supersymmetry presented recently in [115], where torsion plays a fundamental role, under the perspective developed here.
A Riemann and Ricci tensors
The Riemann tensor of the Einstein-Cartan connection Γ^λ_{μν} introduced in section 2 reads as in eq. (A.1), where R̃^λ_{ρμν} and ∇ denote respectively the Riemann tensor and the covariant derivative of the Levi-Civita connection. The first line of (A.1) follows from the definition [D_μ, D_ν] ω_ρ + T^σ_{μν} D_σ ω_ρ = −R^λ_{ρμν} ω_λ, where D denotes the connection with coefficients Γ. The corresponding Ricci tensor is given by eq. (A.2). On the other hand, the Ricci tensor of the Einstein-Cartan-Weyl connection Γ̂^λ_{μν} introduced in section 3 is given by eq. (A.3), where ∇ denotes again the Levi-Civita connection. Finally, adding a traceless part to the nonmetricity tensor, the Ricci tensor of ∇̂ is given explicitly by eq. (A.4). | 2018-11-28T09:24:05.000Z | 2018-11-28T00:00:00.000 | {
"year": 2018,
"sha1": "0a54ca5f2979f5cb1ee638937ab7727da734261c",
"oa_license": null,
"oa_url": "https://air.unimi.it/bitstream/2434/828685/2/PhysRevD.101.044011.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9757616d0feca6e776bb3bf583d0636383b8f177",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
12025856 | pes2o/s2orc | v3-fos-license | Myelin oligodendrocyte glycoprotein (MOG35-55)-induced experimental autoimmune encephalomyelitis is ameliorated in interleukin-32 alpha transgenic mice
Multiple sclerosis (MS), also known as disseminated sclerosis or encephalomyelitis disseminata, is an inflammatory disease in which myelin in the spinal cord and brain is damaged. IL-32α is known as a critical molecule in the pathophysiology of immune-mediated chronic inflammatory diseases such as rheumatoid arthritis, chronic pulmonary disease, and cancers. However, the role of IL-32α in spinal cord injuries and demyelination is poorly understood. Recently, we reported that the release of proinflammatory cytokines was reduced in IL-32α-overexpressing transgenic mice. In this study, we investigated whether IL-32α plays a role in MS using experimental autoimmune encephalomyelitis (EAE), an experimental mouse model of MS, in human IL-32α Tg mice. The Tg mice were immunized with MOG35-55 suspended in CFA emulsion followed by pertussis toxin, and EAE paralysis of the mice was then scored. We observed that the paralytic severity and neuropathology of EAE in IL-32α Tg mice were significantly decreased compared with those of non-Tg mice. Immune cell infiltration, astrocyte/microglial activation, and pro-inflammatory cytokine (IL-1β and IL-6) levels in the spinal cord were suppressed in IL-32α Tg mice. Furthermore, NG2 and O4 were decreased in IL-32α Tg mice, indicating that spinal cord damage was suppressed. In addition, an in vitro assay also revealed that IL-32α has a preventive role against Con A stimulation, as evidenced by a decrease in T cell proliferation and inflammatory cytokine levels in IL-32α-overexpressing Jurkat cells. Taken together, our findings suggest that IL-32α may play a protective role in EAE by suppressing neuroinflammation in the spinal cord.
Introduction
MS is the most common inflammatory demyelinating disease of the central nervous system [1]. The disease typically presents between the ages of 20 and 40 and impacts approximately 35,000 individuals in the United States alone [1][2][3]. MS can lead to substantial disability, with deficits seen in sensory, motor, autonomic, and neurocognitive function [2]. Its pathology is characterized by leukocyte infiltration, demyelination, oligodendrocyte loss, axonal transection, and reactive astrogliosis [4,5]. It is believed that early neurologic disability in MS is caused by conduction block in demyelinated axons, whereas axonal transection underlies the more permanent deficits observed later in the disease [6]. For the study of MS, the most common animal model of experimental autoimmune encephalomyelitis (EAE) induction is currently based on the injection of an encephalitogenic peptide, MOG 35-55, as well as proteolipid protein and myelin basic protein. The MOG 35-55 peptide triggers chronic-progressive EAE in C57BL mice.
Interleukin-32 (IL-32) is a novel cytokine which was reported originally as natural killer (NK) transcript 4, expressed in various human tissues and organs, such as the spleen, thymus, leukocytes, lung, small intestine, colon, prostate, heart, placenta, liver, muscle, kidney, pancreas, and brain [7,8]. The expression of IL-32 mRNA is more prominent in immune cells than in non-immune tissues [8][9][10]. IL-32 not only participates in host responses by inducing proinflammatory cytokines but also directly affects specific immunity by differentiating monocytes into macrophage-like cells. It has also been reported that increased IL-32 expression correlates with clinical and histological markers of diseases such as rheumatoid arthritis (RA), suggesting that the reduction of IL-32 activity may provide benefit to patients with RA [11]. In another study, IL-32 induced an increase in constitutive IL-10 release, but a decrease in TNF-α and IL-6 [12]. IL-32 itself accounts for less inflammation in humans with ulcerative colitis (UC) [13]. Our previous study showed that IL-32 acts to decrease pro-inflammatory cytokines and increase anti-inflammatory cytokines [14]. Thus, it is possible that IL-32 could act as both a pro-inflammatory and an anti-inflammatory cytokine under different disease conditions.
In this work, we describe studies investigating the inflammatory role of IL-32 in an EAE mouse model, a widely accepted animal model of MS. Using IL-32 transgenic (Tg) mice, we demonstrate that IL-32 Tg mice display reduced clinical and pathologic severity.
Generation of IL-32α transgenic mice that display a decreased susceptibility to MOG 35-55-induced EAE
We generated human IL-32α-overexpressing transgenic mice (IL-32α mice) by subcloning IL-32α cDNA into the mammalian expression vector pCAGGS (Figure 1A). The success of the procedure was confirmed by PCR of mouse tail genomic DNA using allele-specific primers (Figure 1B). The transgene was successfully transmitted to 50% of pups from each littermate, as evaluated by genotyping. These founder mice were each back-crossed into the C57BL6/J background for eight generations. The male/female ratio was 50% for IL-32α transgenic and nontransgenic littermates.
Non-Tg and IL-32α mice sensitized at 8 weeks of age developed clinical signs of MOG 35-55 peptide-induced EAE. The first paralysis appeared on day 10 in the non-Tg group and day 11 in the IL-32α group. Noteworthy differences were seen in the course of the disease: the symptoms of the non-Tg group were more severe than those of the IL-32α group (Figure 1C and 1D). Changes (between day 0 and day 28) in mean body weight (%) in the non-Tg group were greater than in the IL-32α group (Figure 1E).
EAE-induced spinal cord injury was reduced in IL-32α mice
To compare the lesion formation in the spinal cords of the IL-32α mice group with that of the non-Tg mice group, we used Hematoxylin and Eosin (H&E) staining to observe cell infiltration and Luxol Fast Blue (LFB) staining to observe demyelination. Using H&E staining of spinal cord sections, we found that mononuclear cell infiltration into the injured area during MOG 35-55-induced EAE was decreased in the IL-32α mice group as compared to the non-Tg mice group (Figure 2A). Furthermore, we observed significantly less reduction in LFB staining in the IL-32α group spinal cord sections as compared to the non-Tg group sections, indicating that less demyelination occurred in the IL-32α mice spinal cord (Figure 2B).
EAE-induced infiltration of immune cells was decreased in IL-32α mice
To analyse the profile of the infiltrating immune cells causing inflammation in the lesion area, we detected the expression of CD3+ (a marker of T cells), CD4+ (a marker of helper T cells), CD8b+ (a marker of cytotoxic T cells), CD11b+ (a marker of macrophages and microglia), F4/80+ (a marker of macrophages), CD16+ (a marker of NK cells), and CD19+ (a marker of B cells) by IF staining in the spinal cord sections. We found a massive elevation of immune cells (T cells, helper T cells, cytotoxic T cells, macrophages, NK cells, and B cells) in the spinal cords of non-Tg mice (Figure 3 and Figure 8). However, IL-32α mice showed a significant reduction in these immune cell numbers, except CD4+ cells. These results show that the MOG 35-55-induced immune response is reduced in the IL-32α mice group.
EAE-induced increase in inflammatory cytokine levels was inhibited in IL-32α mice
To measure cytokine levels associated with EAE, we used ELISA kits with spinal cord tissue lysates from the control, non-Tg, and IL-32α mice groups. We observed that the levels of inflammatory cytokines such as IL-1β, IFN-γ, and IL-6 were increased by MOG 35-55 treatment in non-Tg mice, with a significant difference between the non-Tg + MOG and Tg + MOG groups (Figure 4B, 4C, and 4D). In contrast, the MOG-induced increases in TNF-α and IL-10 levels were not affected in the IL-32α mice group (Figure 4A and 4E). From this result, we suggest that the lower levels of cytokines in the spinal cords of the IL-32α EAE mice could simply be related to the reduced CNS immune cell infiltrates.
EAE-induced increase in oligodendrocyte progenitor cell marker levels was attenuated in IL-32α mice
To observe the expression of oligodendrocytes and myelin, we also used immunofluorescence staining to detect the expression of CNPase (a myelinating oligodendrocyte marker), myelin basic protein (MBP), NG2 (a marker of oligodendrocyte progenitor cells; OPCs), and O4 (a marker of oligodendrocytes). CNPase and MBP were decreased in non-Tg mice by MOG treatment; however, the reduced CNPase and MBP levels were not rescued in IL-32α mice. In contrast, the MOG-induced increases in NG2 and O4 levels were attenuated in the IL-32α mice group (Figure 5, Figure 9A, and Figure 9B). Furthermore, we demonstrated that the expression levels of GFAP (a marker of astrocyte activation) and IBA-1 (a marker of microglial activation) were reduced in the IL-32α mice group (Figure 9C and 9D).
(Figure 3: T cell infiltration in the spinal cord of MOG-induced EAE in non-Tg and IL-32α mice. (A) and (C): MOG induced prominent CD3+ and CD8b+ cell infiltration in the parenchyma adjacent to the pia mater in non-Tg mice, which was reduced in IL-32α mice. Data are shown as mean and standard error of the mean (n = 3). (B): CD4+ cell infiltration was increased in non-Tg mice, but there was no significant difference between non-Tg and IL-32α mice. **p < 0.01 vs. non-Tg control, ##p < 0.01 vs. non-Tg + MOG (two-way ANOVA followed by Bonferroni's test). Scale bar: 100 μm.)
Con A-induced increase in cytokine levels was attenuated in IL-32α-overexpressing Jurkat cells
To confirm the activation of T cells, we used the MTT assay to measure cell viability and the BrdU assay to measure cell proliferation in Jurkat cells. We treated the cells with Con A (4 μg/mL) to stimulate them. Cell viability was increased by Con A treatment, and this effect was attenuated by IL-32α overexpression (Figure 6A). Similarly, IL-32α overexpression also reduced cell proliferation induced by Con A (Figure 6B). To measure the levels of inflammatory cytokine mRNAs, Jurkat cells were stimulated under the same conditions as in the cell viability and proliferation experiments, and inflammatory cytokine levels were then determined by real-time PCR. TNF-α, IFN-γ, and IL-6 mRNA levels were increased in Con A-treated Jurkat cells; however, those increases in cytokine levels were attenuated by IL-32α overexpression. No significant changes in IL-1β mRNA levels were observed in any group (Figure 6C-6F).
EAE-induced increase in cyclooxygenase 2 (COX-2) and inducible nitric oxide synthase (iNOS) levels was attenuated in IL-32α mice
To measure the inflammatory response in the EAE mouse spinal cord, COX-2 and iNOS expression levels were visualized. We observed that the levels of COX-2 and iNOS were increased by MOG 35-55 treatment in non-Tg mice, with a significant difference between the non-Tg + MOG and Tg + MOG groups (Figure 7A and 7B). We also demonstrated a similar result in Con A-treated IL-32α-overexpressing Jurkat cells.
Discussion
Various physiological and pathophysiological roles of IL-32 in the immune response have been reported. In this study, EAE paralysis scores were significantly decreased in parallel with reduced spinal injuries and infiltration of immune cells in IL-32α mice. We asked whether IL-32α mice showed a reduction in proinflammatory cytokine production and demyelination of the spinal cord. We demonstrated that cytokines were downregulated in IL-32α mice in comparison with those of non-Tg mice with induced EAE. Furthermore, NG2 and O4 were decreased in IL-32α mice, indicating that spinal cord damage was suppressed. In addition, although the reduction of MBP and CNPase was not rescued in IL-32α mice, demyelination was attenuated in IL-32α mice. These results suggest that the reduction in inflammatory cytokines and demyelination may explain the low EAE scores and spinal injuries in IL-32α mice. Meanwhile, IL-32α suppressed the infiltration of immune cells in the spinal cord, as evidenced by reduced numbers of CD3+ cells, CD8+ cells, B cells, NK cells, and macrophages. Unexpectedly, CD4+ counts were not reduced in IL-32α mice; therefore, helper T cells may not be associated with the role of IL-32α in EAE. This non-relationship between IL-32α and CD4+ cells may be further supported by a previous study [15]. Furthermore, we demonstrated that the reduction of astrocyte and microglial activation is related to the reduced inflammation in IL-32α mice. The expression levels of the inflammatory markers COX-2 and iNOS in spinal lesions were also significantly reduced in the IL-32α mice group. These anti-inflammatory-like actions of IL-32α were also observed in in vitro studies employing IL-32α-overexpressing Jurkat cells. The Con A-induced increase in cell viability and proliferation was reduced by IL-32α overexpression. In addition, the Con A-induced increase in TNF-α, IFN-γ, and IL-6 levels was decreased in IL-32α-overexpressing Jurkat cells. However, the IL-1β production pattern differed in vivo and in vitro.
(Figure legend fragment: IBA-1 expression levels were increased in non-Tg mice (second from left), but this was attenuated in IL-32α mice (right). Scale bar: 100 μm.)
Even though the exact mechanism underlying those differences is not clear, MOG-induced T cell activation seems to be related to IL-1, whereas that induced by Con A is not [16]. These results suggest that the reduced infiltration of immune cells and inflammatory response may explain the lesser severity of EAE in IL-32α mice.
In fact, IL-32 has been shown to exhibit properties typical of a proinflammatory cytokine and to drive the induction of other proinflammatory cytokines and chemokines, such as tumor necrosis factor-alpha (TNF-α), IL-1, IL-6, and IL-8. However, the dual action of cytokines in autoimmune inflammatory demyelination is well known. For example, IFN-γ showed a paradoxical effect in IFN-γ-deficient mice [17]. Interestingly, we previously demonstrated that IL-32α transgenic mice showed activation of signal transducer and activator of transcription 3 (STAT3) [18], which reduced the inflammatory properties of type I IFNs [19]. Therefore, we can assume that IL-32α has protective effects via the downregulation of IFN levels induced by STAT3 activation. In addition, IL-32 isoforms showed variable potency in cell death and cytokine production. Among the IL-32 isoforms, IL-32γ has the most potent proinflammatory properties, and it can be spliced into less active isoforms, IL-32β and IL-32α [20]. IL-32α is considered the least potent isoform in the processes of cell death and cell activation. Nonetheless, IL-32α has a beneficial activity in cancer development. Our recent study [21] suggested that IL-32α suppressed colorectal cancer development, in accordance with another study [22]. We did not elucidate a possible mechanism underlying the connection of IL-32α and other cytokines; IL-32α itself had no significant effects on immune cell infiltration and cytokine levels, but rather reduced the inflammation and severity of immune response models such as EAE and Con A treatment. These results suggest that IL-32α reduces inflammatory responses in the elevated immune status of the host. IL-32 also directly affects specific immunity by differentiating monocytes into macrophage-like cells; therefore, we could not exclude the possibility of a direct action of IL-32α on immune cell activation. In rodents, the receptor for IL-32α has not been clearly identified yet, and IL-32α is considered an intracellular protein that may be released only after cell death. Therefore, we can assume that intracellular IL-32α in T cells interferes with the immune activation/proliferation induced by elevated cytokines in the EAE model. Intracellular signaling of IL-32α associated with immune cell activation is not clear, but tumor necrosis factor receptor 1 (TNFR-1) signaling was suggested as a target of IL-32α action by our previous work [21]. Because TNFR-1 stimulates dendritic cell maturation and the CD8 T cell response [23], we suggest that IL-32α may reduce immune cell activation via, at least, TNFR-1 signaling. Additional in vivo and in vitro models designed to study the possible mechanism of IL-32α-associated immune cell activation could be considered, which might better reveal the function of IL-32α in EAE. In conclusion, our results suggest that IL-32α may suppress EAE by inhibition of neuroinflammation in the spinal cord.
MATERIALS AND METHODS
Animals
IL-32α-transgenic mice were prepared according to our previous report [18]. Animals were maintained under conventional housing conditions at 23 ± 2°C with a controlled 12 h light/dark cycle, and drinking water and a rodent chow diet were provided ad libitum throughout the experiment. All experiments were approved and carried out according to the Guidelines for the Care and Use of Animals [Animal Care Committee of Chungbuk National University, Korea (CBNUA-436-12-02)]. All efforts were made to minimize animal suffering and to reduce the number of animals used.
Induction and clinical evaluation of EAE
Non-Tg and IL-32α female mice (8 weeks old) were immunized with MOG 35-55 peptide emulsified with complete Freund's adjuvant (CFA) using Hooke kits (Hooke Laboratories, EK-0115, Lawrence, MA, USA) according to the manufacturer's instructions. In brief, 1 mg/mL of MOG 35-55/CFA emulsion was injected subcutaneously into the upper back and lower back, 0.1 mL per site (0.2 mL/animal in total). Two hours later, 2 μg/mL pertussis toxin (PTX) was injected intraperitoneally (i.p.), 0.1 mL/animal. Twenty-four hours later, a booster shot of 2 μg/mL PTX (0.1 mL/animal, i.p.) was given. Normal saline-administered non-Tg and IL-32α mice were used as the vehicle control group. Mice were examined and scored daily for clinical signs of neurological deficit by a blinded investigator according to previous reports [24][25][26]. All other analyses were carried out on the 29th day.
Jurkat cells culture
Jurkat cells (human prototypical CD28 + T cell leukemia, ATCC, Manassas, VA) were maintained in Roswell Park Memorial Institute (RPMI) 1640 medium supplemented with FBS (10%) and penicillin (100 units/mL). The Jurkat cells were incubated in the culture medium in a humidified incubator at 37°C and 5% CO2. The cultured cells were treated with concanavalin A (Con A; 4 μg/mL) dissolved in distilled water.
Measurement of cytokines
Lysates of spinal cord tissue were obtained using a protein extraction buffer containing protease inhibitors. TNF-α, IFN-γ, IL-1β and IL-6 levels were determined using ELISA kits (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's instructions. The resulting color was assayed at 450 nm using a microplate absorbance reader (VersaMax ELISA, Molecular Devices, California, USA) within 30 minutes of adding the stop solution.
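As an aside, quantifying such ELISA readings typically means interpolating sample absorbances against a standard curve. The Python sketch below uses made-up standard values (not the study's data) and simple linear interpolation; kit manuals often recommend a 4-parameter logistic fit instead.

```python
import numpy as np

# Hypothetical standards: cytokine concentrations (pg/mL) and their
# absorbances at 450 nm, in ascending order as required by np.interp.
std_conc = np.array([0, 15.6, 31.3, 62.5, 125, 250, 500, 1000])
std_abs = np.array([0.05, 0.10, 0.18, 0.33, 0.60, 1.10, 1.95, 3.20])

def conc_from_absorbance(sample_abs):
    # Linear interpolation of concentration from the standard curve.
    return np.interp(sample_abs, std_abs, std_conc)

print(conc_from_absorbance(0.45))  # estimated pg/mL for an OD450 of 0.45
```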
BrdU assay
Mock vector-expressing Jurkat cells and IL-32α-overexpressing Jurkat cells were plated at a density of 1 × 10^4 cells/well (200 µL medium per well) in 96-well plates and stimulated with Con A (4 µg/mL) for 12 h. Detection of BrdU incorporation was performed by ELISA (BrdU | 2018-04-03T02:23:40.293Z | 2015-11-11T00:00:00.000 | {
"year": 2015,
"sha1": "4c4cd817dfe4618444580b57b3b61c69c348cd97",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=16868&path[]=6306",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4c4cd817dfe4618444580b57b3b61c69c348cd97",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
17763657 | pes2o/s2orc | v3-fos-license | Autoimmune Polyendocrine Syndrome 3 Onset with Severe Ketoacidosis in a 74-Year-Old Woman
Type 1 diabetes mellitus (T1D), autoimmune thyroid disease, and autoimmune gastritis often occur together forming the so-called autoimmune polyendocrine syndrome type 3 (APS3). We here report a clinical case of a 74-year-old woman who presented for the first time with severe hyperglycemia and ketoacidosis diagnosed as T1D. Further clinical investigations revealed concomitant severe hypothyroidism with autoimmune thyroid disease and severe cobalamin deficiency due to chronic atrophic gastritis. The diagnosis of type 1 diabetes mellitus was confirmed by the detection of autoantibodies against glutamic acid decarboxylase 65, islet cell antibodies, and anti-insulin autoantibodies. Anti-thyroperoxidase, anti-thyroglobulin, and anti-gastric parietal cell antibodies were also clearly positive. The case emphasized that new-onset diabetic ketoacidosis, hypothyroidism, and cobalamin deficiency may simultaneously occur, and one disease can mask the features of the other, thereby making diagnosis difficult. It is noteworthy that an acute APS3 episode occurred in an elderly woman previously asymptomatic for any autoimmune disease.
Introduction
Type 1 diabetes mellitus (T1D), autoimmune thyroid disease (ATD), and autoimmune atrophic gastritis (AAG) often occur together forming the so-called autoimmune polyendocrine syndrome (APS) type 3. Thyroid autoimmunity is evident in up to one-third and gastric autoimmunity in up to a quarter of patients with T1D [1]. T1D has historically been considered a predominant disorder of children and young adults: the disease has been commonly referred to as juvenile diabetes because it shows a peak of occurrence at 10-14 years. However, recent studies support a different model in which the disease may occur at any age. Such patients are best identified by the presence of anti-islet autoantibodies, in particular autoantibodies against glutamic acid decarboxylase 65 (GAD65) [2].
Nonetheless, the most common presentation of APS consists of an autoimmune thyroid disease without adrenal insufficiency plus another associated autoimmune disease such as T1D, pernicious anemia, vitiligo, or alopecia, and it usually develops in middle-aged women.
Here a case of APS3 presenting with ketoacidosis in an otherwise healthy elderly woman is described. Her physical exam revealed severe dehydration, though the patient was relatively well nourished. Her admission laboratory values were significant for a blood glucose level of 61.4 mmol/L and a pH of 7.2 with bicarbonate of 19 mmol/L. Urine analysis showed massive glycosuria and ketonuria (Table 1). Brain computed tomography showed no noticeable space-occupying cerebral lesion or recent density abnormalities of the white or gray matter. Rehydration and intravenous insulin infusion restored the patient's consciousness within 3-4 hours without neuropsychiatric sequelae. She was admitted to the Endocrine Unit of the IRCCS Policlinico San Donato, where metabolic variables were stabilized within 3 days. Endocrine screening unmasked concomitant severe hypothyroidism (serum TSH 102 U/mL, free T4 0.15 ng/mL). The clinical signs associated with hypothyroidism were elevated total cholesterol (248 mg/dL) and creatine kinase (186 UI/L, n.v. <176). Ultrasound imaging of the neck showed a thyroid gland of reduced volume and with hypoechoic features of thyroiditis. L-thyroxine replacement was started following the check of conserved adrenal secretion (ACTH pg/mL, cortisol g/dL). Further biochemical evaluation revealed mild anemia (haemoglobin 11 g/dL; n.v. 12-16) with macrocytosis (mean red cell volume 100.5 fL) associated with reduced serum folates (5 ng/mL; n.v. 5-19) and vitamin B12 (199 pg/mL; n.v. 191-663). Endoscopy diagnosed severe atrophic gastritis, which was histologically confirmed by multiple biopsies. Cobalamin and folates were replaced. The patient improved significantly over a 14-day hospitalization and was discharged in good general condition.
An autoimmunity screening was performed: autoantibodies against glutamic acid decarboxylase (GAD) were positive, as were anti-islet cell autoantibodies (ICA) and anti-insulin autoantibodies (IAA) (Table 2). Moreover, anti-thyroperoxidase (AbTPO) and anti-thyroglobulin (AbTg) autoantibodies (Table 2), as well as anti-parietal cell autoantibodies (APCA), were detected in the patient's serum. The case reported here emphasizes that new-onset diabetic ketoacidosis, hypothyroidism, and cobalamin deficiency may occur together, and one disease can mask the features of another, thereby making diagnosis a clinical challenge. It is noteworthy that a patient utterly asymptomatic for any autoimmune disease had an acute APS3 episode (i.e., hyperglycemia, severe hypothyroidism, and cobalamin deficiency) in her old age. Overt diabetes onset with typical signs of polyuria and polydipsia was likely delayed by the concomitantly low basal metabolic rate secondary to the severe unrecognized hypothyroidism and malabsorption. In this context, the flu-like episode was sufficient to precipitate the impaired metabolic homeostasis. Indeed, acute infections are known to commonly precipitate myxedematous coma.
Moreover, the present case reminds us that autoimmunity might arise in the elderly. In a previous study of Northern Italian T1D patients, the prevalence of diabetes-related autoantibodies was higher in the 21-40-year-old group than in the 41-72-year-old group, and these antibodies were more frequently detected in females than in males [3]. GAD, ICA, and IAA were clearly detectable in the serum of the reported patient. Autoimmune thyroid disorders are the most prevalent immunological diseases in T1D patients [4]. Positive AbTPO has been reported in about 80% of T1D patients with elevated TSH levels and in 10-20% of euthyroid T1D individuals. Most patients have subclinical disease, and the development of diabetes usually precedes the diagnosis of hypothyroidism [5]. Owing to the increased prevalence of thyroid dysfunction in T1D subjects, regular screening of TSH is recommended [4]. Atrophic gastritis is present in about 25-40% of patients with T1D, whereas in thyroiditis it is present in approximately one-third of cases. Therefore, multiorgan autoimmune involvement is not unusual, although concomitant related insufficiency, as occurred in the present case, is surprising and, more importantly, might be life-threatening. The present case report supports the inclusion of serum TSH and cobalamin determinations among the hormone tests at admission of elderly patients with diabetic ketoacidosis.
"year": 2015,
"sha1": "c4b440b28f07727c45fb4c16b52332e4ddb584cb",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/crie/2015/960615.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5bb87c1a3ba233e17abe1c387e24118d28cbfce9",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52011302 | pes2o/s2orc | v3-fos-license | An iteration-based differentially private social network data release
Online social networks provide an unprecedented opportunity for researchers to analyse various social phenomena. These network data are normally represented as graphs, which contain much sensitive individual information, and publishing these graph data would violate users' privacy. Differential privacy is one of the most influential privacy models that provides a rigorous privacy guarantee for data release. However, existing works on graph data publishing cannot provide accurate results when releasing a large number of queries. In this paper, we propose a graph update method transferring the query release problem to an iteration process, in which a large set of queries is used as the update criterion. Compared with existing works, the proposed method enhances the accuracy of query results. Extensive experiments prove that the proposed solution outperforms two state-of-the-art methods, the Laplace method and the correlated method, in terms of Mean Absolute Error. This means our method can retain more utility of the queries while preserving privacy.
INTRODUCTION
With the significant growth of Online Social Networks (OSNs), the increasing volumes of data collected in those OSNs have become a rich source of insight into fundamental societal phenomena, such as epidemiology, information dissemination and marketing. Much of this OSN data is in the form of graphs, which represent information such as the relationships between individuals. Releasing these graph data has enormous potential social benefits. However, the fact that graph data can reveal sensitive information about particular individuals [1] has raised concern among social network participants.
To deal with this problem, many privacy models and related algorithms have been proposed to preserve the privacy of graph data. Differential privacy is the most prevalent one due to its rigorous privacy guarantee. If the differential privacy mechanism is adopted for graph data, the research problem is then to design efficient algorithms that release statistics about the graph while satisfying the definition of differential privacy. Two concepts have been proposed for graphs: node differential privacy and edge differential privacy. The former protects the nodes of the graph and the latter protects the edges in the graph.
Previous works have successfully achieved both node differential privacy and edge differential privacy when the number of queries is limited. For example, Hay et al. [9] implemented node differential privacy and pointed out the difficulties of achieving it. Papers [17,18] proposed publishing graph datasets using a dK-graph model. Chen et al. [4] considered the correlation between nodes and proposed a correlated release method for sparse graphs. However, these works suffer from a serious problem: as the number of queries increases, a large volume of noise is introduced. In the real world, we have to release large numbers of queries for data mining, recommendation or other purposes. The difficulty is that the privacy budget must be divided into tiny pieces when the query set is large, so a large amount of noise is introduced into the published query answers. This paper focuses on releasing a large set of queries for graph data. Given a set of queries, we apply an iteration method to generate a synthetic graph that answers these queries accurately. We can consider the iteration process as a training procedure, in which queries are training samples and the synthetic graph is the output learning model. Finally, we adopt the synthetic graph to answer this set of queries. As the training process consumes less privacy budget than the state-of-the-art methods, the total noise is diminished. In these procedures, the major research issue becomes how to design the iteration process to generate the synthetic graph.
The major contribution of this paper is to transfer the query release problem to an iteration-based training process. Specifically, we propose an iteration method, called Graph Update, to generate a synthetic graph that can answer a large number of queries accurately. Compared with the state-of-the-art methods, the Laplace method and the correlated method, it decreases the total amount of noise significantly.
The rest of the paper is organized as follows: We present the preliminaries in Section 2. Section 3 discusses the Graph Update method and the experimental result is presented in Section 5, which is followed by the conclusion in Section 6.
Notation
We consider a finite data universe X; a dataset D is an unordered set of n records from X. Let r be a record with d attributes sampled from X. Two datasets D and D* are neighboring datasets if they differ in only one record. A query f is a function that maps a dataset D to an abstract range R, f : D → R, and a set of queries is denoted by F = { f 1 (D), ..., f m (D)}. We use the symbol m to denote the number of queries in F. The maximal difference in the results of query f is defined as the sensitivity s, which determines how much perturbation is required for the privacy-preserving answer. To achieve this target, differential privacy provides a mechanism M, which is a randomized algorithm that accesses the database. A randomized output is denoted by a circumflex over the notation; for example, f̂(D) denotes the randomized answer of querying f on D.
Graph Notations
We model a social network as a simple undirected graph G = (V, E), where V is a set of nodes representing individuals and E is a set of edges representing relationships between individuals. Fig. 1 shows an example of a social network graph: the nodes are represented by circles and are connected to each other by edges represented by lines. The degree of a node refers to the number of its neighbourhoods. Formally, we define these notions as follows. Neighbourhood: N(v) = {u ∈ V : (u, v) ∈ E}. Degree: deg(v) = |N(v)|.
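As a concrete illustration of these notations, the following minimal Python sketch (using the networkx library, an assumption, since the paper does not name its tooling) builds the kind of simple undirected graph described above and evaluates neighbourhood and degree queries.

```python
import networkx as nx

# A simple undirected social graph in the sense of G = (V, E):
# nodes are individuals, edges are relationships (as in Fig. 1).
G = nx.Graph()
G.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")])

neighbourhood = set(G.neighbors("c"))   # N(c) = {"a", "b", "d"}
degree = G.degree("c")                  # deg(c) = |N(c)| = 3
print(neighbourhood, degree)
```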
Differential Privacy
The target of differential privacy is to mask the difference in the answer of query f between the neighboring datasets [7]. In ε-differential privacy, the parameter ε is defined as the privacy budget [7], which controls the privacy guarantee level of mechanism M. A smaller ε represents stronger privacy. The formal definition of differential privacy is presented as follows:

Definition 1 (ε-Differential Privacy) A randomized algorithm M gives ε-differential privacy if, for any pair of neighboring datasets D and D*, and for every set of outcomes Ω, M satisfies: Pr[M(D) ∈ Ω] ≤ exp(ε) · Pr[M(D*) ∈ Ω].

Sensitivity is a parameter determining how much perturbation is required in the mechanism with a given privacy level.

Definition 2 (Sensitivity) [7] For a query f : D → R, the sensitivity of f is defined as s = max ||f(D) − f(D*)||_1, where the maximum is taken over all pairs of neighboring datasets D, D*.

The Laplace mechanism adds Laplace noise to the true answer and is defined as follows:

Definition 3 (Laplace mechanism) [7] Given a function f : D → R over a dataset D, the mechanism M(D) = f(D) + Lap(s/ε) provides ε-differential privacy.
In graph data, we use G to represent D.
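To make Definitions 1-3 concrete, here is a minimal Python sketch of the Laplace mechanism applied to a degree query under edge differential privacy (sensitivity 1, as the paper notes in Section 5). The helper name and the use of numpy/networkx are illustrative assumptions, not part of the paper.

```python
import numpy as np
import networkx as nx

def laplace_mechanism(true_answer, sensitivity, epsilon):
    # Definition 3: release f(G) + Lap(s / epsilon).
    return true_answer + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

G = nx.gnp_random_graph(100, 0.05, seed=1)

# Degree query on node 0: adding or deleting one edge changes the
# answer by at most 1, so its sensitivity under edge privacy is s = 1.
noisy_degree = laplace_mechanism(G.degree(0), sensitivity=1, epsilon=1.0)
print(noisy_degree)
```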
Node Differential Privacy
Node differential privacy ensures the privacy of a query over two neighbouring graphs, where two neighbouring graphs can differ in up to all edges connected to one node. Hay et al. [9] first proposed the notion of node differential privacy and pointed out the difficulties of achieving it, even though it can provide a strong privacy guarantee. Hay et al. [10] showed that query results were highly inaccurate for analysing graphs due to the large noise.
Recently, a few works [5,11] have contributed to reducing sensitivity and returning accurate answers under node differential privacy. Although this is good progress, these algorithms are still hard to apply in the real world, and the most prevalent algorithms focus on edge differential privacy.
Edge Differential Privacy
Edge differential privacy means that adding or deleting a single edge between two nodes in the graph makes a negligible difference to the result of the query. The first differentially private computation over graph datasets with edge differential privacy appeared in paper [16], in which Nissim et al. counted the number of triangles in the graph. They provided the concept of smooth sensitivity to calibrate the noise to a more local variant of sensitivity.
A work presented in [15] shared differentially private graph topology based on the Stochastic Kronecker graph generation model by perturbing model parameters. However, the Stochastic Kronecker generation model cannot capture the properties of graphs accurately due to its simple generation process.
Papers [17,18] published graph datasets using a dK-graph model. They applied the dK-series as the query function and added controllable noise based on a sensitivity parameter. Wang et al. [18] proved that the private dK-graph model can more precisely capture most graph properties and achieve better utility preservation. In order to reduce the noise added to the dK-series, Sala et al. [17] provided an algorithm partitioning the dK-series data into clusters with similar degree, which significantly reduced the sensitivity for each sub-series. However, it used local sensitivity, which can reveal information and therefore cannot achieve strict privacy preservation [16].
A different approach was proposed in paper [19], inferring the network's structure via connection probabilities. The authors encoded the structural information of the social network by the connection probabilities between nodes instead of the presence or absence of edges, which reduced the impact of a single edge. Another work, paper [2], provided a reasonable hypothesis about the structure of the dataset to restrict the sensitivity of the query. However, those methods would generate a large dense matrix, which is computationally infeasible for large social networks.
The most similar work to ours is from Chen et al. [4], which shares the same target as this paper: releasing a synthetic graph to publish a large set of queries. However, they focused on correlated queries on sparse graphs, and when dealing with large numbers of queries, the performance is not optimal.
Overview of Graph Update
The release method is an iteration-based algorithm, which is a prevalent release scenario in many applications [8]. Our proposed method is called the Graph Update method, as the key idea is to update a synthetic graph until all queries have been answered. Consider a social network graph G and a set of queries F = { f 1 , ..., f m }. Our goal is to release a set of query results F and a synthetic graph G to the public. Our general idea is to define an initial graph G 0 and update it to G m−1 over m rounds according to the m queries in F. The release answers F and the synthetic graph G are generated during the iteration. During the process, four different types of query answer are involved: • True answer a t : this is the real answer that a graph returns for a query. We cannot release it directly as it would raise privacy concerns. The true answer is normally used as the baseline to measure the utility loss of a privacy-preserving algorithm. In this paper, we use a t = f (G) to represent the true answer for a single query f , and A t = F(G) = {a t 1 , ..., a tm } to represent an answer set for a query set F.
• Noise answer a n : when we add Laplace noise to a true answer, the result is the noise answer. The traditional Laplace method releases the noise answer directly; however, as mentioned in Section 1, this introduces a large amount of noise into the released result. We use a n = f̂(G) = f (G) + Lap(s/ε) to represent a single query answer and A n = F̂(G) = {a n1 , ..., a nm } to represent an answer set.
• Synthetic answer a s : this is the answer generated by a synthetic graph G. We use a s = f ( G) to represent a single query and A s = F( G) = {a s1 , ..., a sm } to represent an answer set.
• Release answer a r : this is the answer finally released after the iteration. In Graph Update method, the release answer set will consist of noise answers and synthetic answers. We apply a r = f and A r = F = {a r1 , ..., a rm } to represent the single answer of a query and the answer set, respectively.
These four different query answers control the graph update process. The overview of the method is presented in Figure 2. On the left side of the figure, the query set F is evaluated on G to obtain a true answer set A t . Laplace noise is then added to A t to get a set of noise answers A n = {a n1 , ..., a nm }. Each noise answer a ni helps to update the initial G 0 and produce a release answer a ri . The method eventually outputs A r = {a r1 , ..., a rm } and G m as the final results. Compared with the traditional Laplace method, the proposed Graph Update method adds less noise. As some queries are answered by the synthetic graph, those query answers do not consume any privacy budget. Moreover, the synthetic graph can be applied to predict new queries without any privacy budget. Consequently, the Graph Update method can outperform the traditional Laplace method.
Graph Update Method
The Graph Update method works in three steps: • initialize the synthetic graph: as we only preserve edge privacy, we assume that the number and the labels of nodes are fixed. The synthetic graph is initialized as a fully connected graph with fixed nodes.
• update the synthetic graph: the initial graph is updated according to the result of each query in F, until all queries in F have been used.
• release query answers and the synthetic graph: two types of answers, noise answers and synthetic answers, have the potential to be released. The synthetic graph is also released to the public.
Algorithm 1 is a detailed description of the Graph Update method. In Step 1, the privacy budget ε is divided by m and assigned to each query in the set.

Step 2 initializes the graph G 0 as a fully connected one. Then, for each query f i in the query set F, the algorithm computes the true answer f i (G) at Step 3. After that, the noise answer and the synthetic answer of f i are computed at Steps 4 and 5, respectively. Step 6 measures the distance between the noise answer and the synthetic answer. If the distance is larger than a threshold T, Step 7 releases the noisy answer and the synthetic graph is updated by the Update Function in Step 8. Otherwise, Step 9 releases the synthetic answer, which means the current synthetic graph is already applicable for answering the query, so in Step 10 it is carried to the next round. This process is iterated until all queries in F have been processed. Finally, as the number of edges should be an integer, the degrees are rounded in Step 11, and the algorithm generates A r and G as the output in Step 12.
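The following Python sketch renders Algorithm 1 under illustrative assumptions: degree-style numeric queries with sensitivity 1, a networkx graph, and hypothetical helper names (fully_connected_copy, update_function). The branch placement follows the privacy analysis below, where only noisy releases consume budget; this is a schematic reading of the algorithm, not the authors' code.

```python
import numpy as np
import networkx as nx

def fully_connected_copy(G):
    # Step 2: the synthetic graph starts fully connected on the same node set.
    return nx.complete_graph(list(G.nodes))

def graph_update(G, queries, epsilon, T, update_function):
    """Schematic rendering of Algorithm 1 (Graph Update).
    queries: list of functions f(graph) -> number, each with sensitivity 1."""
    eps_q = epsilon / len(queries)   # Step 1: split the total budget over m queries
    G_syn = fully_connected_copy(G)
    released = []
    for f in queries:
        a_t = f(G)                                          # Step 3: true answer
        a_n = a_t + np.random.laplace(scale=1.0 / eps_q)    # Step 4: noise answer
        a_s = f(G_syn)                                      # Step 5: synthetic answer
        d = a_n - a_s                                       # Step 6: distance
        if abs(d) > T:
            released.append(a_n)                  # Step 7: release noisy answer...
            G_syn = update_function(G_syn, f, d)  # Step 8: ...and update the graph
        else:
            released.append(a_s)   # Step 9: synthetic answer, no budget consumed
    # Steps 11-12: degree rounding is a no-op here because this sketch
    # keeps unweighted integer edges throughout.
    return released, G_syn
```

The update_function argument corresponds to Algorithm 2; a sketch of it appears after the Update Function subsection below.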
The parameter T is a threshold controlling the distance between A n and A s . A larger T means fewer updates of the graph, and most of the answers in A r are synthetic answers. This leads to less privacy budget consumption; however, when the synthetic graph is far from the original graph, the performance may not be optimal. A smaller T means the algorithm performs more updates of the graph, and most of the answers in A r are noise answers.
More privacy budget will be consumed in this configuration. Consequently, the choice of T has an impact in different scenarios; we will confirm the value of T in the experiments in Section 5.

Algorithm 2 Update Function
Require: G, f, d, θ (0 < θ < 1)
Update Function
Step 8 in Algorithm 1 involves an Update Function, which updates the synthetic graph G to a graph G' according to query answers. Specifically, the Update Function is controlled by the distance d between the a n and a s of f . If a n is smaller than a s , the synthetic graph has more edges than the original graph among the related nodes, and the Update Function has to delete some edges between the related nodes. Otherwise, the Update Function adds some edges to the synthetic graph. These related nodes are defined in Definition 4:
Definition 4 (Related Node)
For a query f and a graph G, the related nodes V f are all nodes that respond to the query f ; we use the set D(V f ) to denote the degrees of those nodes.
The number of edges for a node should be an integer. However, to adjust the degrees of the related nodes, we assign a weight θ (0 ≤ θ ≤ 1) to each edge. After the updating, these weights are rounded to represent node edges. Algorithm 2 illustrates the detail of the Update Function. In the first step, the function identifies the related nodes. If d > 0, which means the synthetic graph has fewer edges than the original one, the function increases the weights by θ in Step 2. If d ≤ 0, which means the synthetic graph has too many edges, the function diminishes the edge weights by θ in Step 3.
Step 4 merges the updated edges into the graph, and Step 5 outputs G'.
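Under the same assumptions, a minimal Python sketch of Algorithm 2 follows. The weight-θ bookkeeping and the related_nodes helper are interpretations of the text above (the original pseudocode did not survive extraction), so treat this as one plausible reading rather than the authors' implementation.

```python
def related_nodes(G_syn, f):
    # Hypothetical helper: the nodes whose edges the query f touches.
    # For a degree query on node v this would simply be {v}.
    return getattr(f, "related_nodes", set())

def update_function(G_syn, f, d, theta=0.1):
    """Schematic Algorithm 2: nudge edge weights around the related nodes
    by theta, in the direction indicated by the distance d = a_n - a_s."""
    for u in related_nodes(G_syn, f):                 # Step 1
        for v in list(G_syn.neighbors(u)):
            w = G_syn[u][v].get("weight", 1.0)
            if d > 0:
                # Step 2: synthetic graph has too few edges; add weight.
                w = min(1.0, w + theta)
            else:
                # Step 3: synthetic graph has too many edges; remove weight.
                w = max(0.0, w - theta)
            G_syn[u][v]["weight"] = w
    return G_syn   # Steps 4-5: return the merged, updated graph
```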
Privacy Analysis
This section presents a comparison of the privacy guarantees of the traditional Laplace method and Graph Update. The sequential composition [14] of the privacy budget is applied, as shown in Lemma 1. Sequential composition accumulates the privacy budget of each step when a series of private steps is performed sequentially on a dataset.

Lemma 1 (Sequential Composition): Suppose a method M = {M 1 , ..., M m } has m steps, and each step M i provides ε-differential privacy; then the sequence M provides (m · ε)-differential privacy.
For the traditional Laplace method, when answering F with m queries, ε is divided into m pieces and assigned to each query f i ∈ F. Specifically, we have ε' = ε/m, and for each query the noise answer is a ni = f i (G) + Lap(s/ε'). According to sequential composition, the Laplace method preserves (ε' · m)-differential privacy, which is equal to ε-differential privacy.
In the Graph Update method, the release answer set A r is a combination of noise answers A n and synthetic answers A s . Only A n consumes privacy budget, while A s does not. In Algorithm 1, even though Step 4 adds Laplace noise to the true answer, the noisy result is not released directly. Only when the algorithm proceeds to Step 7, in which a n is released, does it consume privacy budget. Suppose j (0 ≤ j ≤ m) queries in F are released as synthetic answers; then the algorithm preserves ((m − j) · ε')-differential privacy. As (m − j) · ε' ≤ m · ε', the Graph Update method preserves stricter privacy than the traditional Laplace method.
Utility Analysis
We apply the Mean Absolute Error (MAE) as the utility measure of a query set on a graph. The MAE r of the release answers A r is defined as

MAE r = (1/m) Σ (i = 1..m) |a ri − a ti |. (6)

Similarly, MAE n of the noise answers and MAE s of the synthetic answers are defined as

MAE n = (1/m) Σ (i = 1..m) |a ni − a ti |, (7)

MAE s = (1/m) Σ (i = 1..m) |a si − a ti |. (8)
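The MAE computations in Eqs. 6-8 reduce to a mean of absolute differences; a minimal sketch:

```python
import numpy as np

def mae(answers, true_answers):
    # Eqs. 6-8: mean absolute deviation of an answer set from the true answers.
    return float(np.mean(np.abs(np.asarray(answers) - np.asarray(true_answers))))

# MAE_r, MAE_n and MAE_s plug in the released, noisy and synthetic
# answer sets respectively, each compared against the true answers A_t.
```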
It is obvious that for the true answers A t , the MAE is zero. MAE n represents the performance of the traditional Laplace method, and a lower MAE implies better performance. The target of the Graph Update method is to achieve a lower MAE r within a fixed privacy budget. We use the simulated Figure 3 to illustrate the relationship between the MAE values and the size m of the query set.
In Figure 3, the x-axis is the size of the query set and the y-axis is the value of MAE. For the noise answers A n , MAE n rises with increasing m. We use a smooth line to represent MAE n in this simulated figure; in a real case, the line fluctuates as the noise is drawn from the Laplace distribution. MAE s decreases at the beginning as m increases; after reaching its lowest point, MAE s begins to rise with m. This is because, as the graph is updated, the synthetic graph becomes more and more accurate, so MAE s keeps decreasing. However, as the iteration procedure is controlled by the noise answers, the synthetic graph cannot equal the original graph no matter how large m is. On the contrary, with increasing m, more noise is introduced into the iteration and the synthetic graph drifts away from the original graph.
As A r is the combination of A n and A s , the MAE r of the release answers is reflected by the synthetic answer MAE s and the noise answer MAE n . Figure 3 shows that MAE s falls below MAE n when the query size reaches m 1 . We will use experiments to confirm the optimal MAE in Section 5. As random noise is introduced into the method, the points m 1 and m 2 can hardly be determined exactly; in real cases, they are ranges rather than exact points. In the Graph Update method, the parameter T is used to adjust the range.
EXPERIMENT AND ANALYSIS
This section evaluates the performance of the proposed Graph Update method by answering the following questions:

• How does the parameter T impact the performance of Graph Update? Graph Update contains an essential parameter T that controls the released outputs. In the first part of the experiment, we test the impact of T in terms of Mean Absolute Error (MAE).
• What is the performance of Graph Update compared with the traditional Laplace method and other related methods?
The proposed Graph Update method aims to effectively answer a large set of queries. We investigate the performance of Graph Update on a set of queries and compare it with the traditional Laplace method and a Correlated method proposed by Chen et al. [4]. In addition, the performance is also measured under different privacy budgets.
Datasets and Configuration
The experiment involves four datasets collected from the Stanford Network Analysis Platform (SNAP) [13]. In the experiment, we consider the degree query on nodes, which is similar to the count query on relational datasets. To preserve edge privacy, the degree query has a sensitivity of 1, which means deleting an edge has a maximum impact of 1 on the query result. The performance of the results is measured by the Mean Absolute Error (MAE, Eq. 6).
Evaluation of Parameters
In Graph Update, T is a threshold that controls the released results and has a direct impact on the performance of the query results. To achieve a comprehensive investigation, we examine the impact of T on utility. The parameter T is varied from 0.02 to 1 with a step of 0.02, with the size of the query set equal to 10 and the privacy budget fixed at 1. Fig. 4 shows that, at the beginning, MAE drops quickly as T increases. But when T reaches a threshold, MAE attains its minimum and then keeps increasing. For example, as shown in Fig. 4a, MAE keeps decreasing until T = 0.1100, with MAE = 50.37 at its lowest point; after this, as T increases, MAE keeps rising. This trend can be observed in the other datasets. In Fig. 4b, MAE reaches its minimum when T = 0.2100 and remains stable until T ≥ 0.7100, after which MAE keeps increasing. Fig. 4c and 4d show the same trend. This pattern shows the impact of T on performance. At the beginning, when T is relatively small, increasing T decreases the number of update rounds, which means privacy budget is saved and less noise is added to the query answers; thus MAE keeps decreasing. However, when T reaches a threshold, the decreasing number of update rounds leads to an inaccurate synthetic graph. Consequently, we choose a suitable T for each dataset to achieve a minimal MAE. According to the results shown in Fig. 4, we chose T = 0.3100 for the ego-Facebook dataset, T = 0.3600 for the Wiki-Vote dataset, T = 0.2600 for the p2p-Gnutella08 dataset and T = 0.4100 for the ca-GrQc dataset.
The parameter θ is another important parameter affecting Graph Update. To evaluate its impact, we use 100 queries and vary θ from 0.1 to 1. Fig. 5 shows that, in all datasets, MAE decreases as θ increases at the beginning; however, after MAE reaches its lowest value, it keeps increasing as θ grows. This trend means that when θ is too small, the graph cannot be fully updated within 100 queries, so MAE keeps decreasing as θ increases; on this scale, a larger θ helps to update the graph within the limited number of queries. But this decrease in MAE does not last: when θ is large enough, MAE rises with increasing θ. During this process, we can choose a suitable θ that minimizes MAE.
The ego-Facebook dataset in Fig. 5a shows that when θ reaches 0.0800, the minimum MAE is 70.100, meaning that for this dataset a proper θ could be 0.0800 when answering 100 queries. When θ reaches 0.1300, MAE increases sharply. From Fig. 5c and Fig. 5d, we can observe that θ can be 0.1-0.3 and 0.1-0.25 for those two datasets, respectively.
Performance Evaluation on Diverse Size of Query Sets
The performance of Graph Update is examined through comparison with the state-of-the-art Laplace method [6] and Correlated method [4]. We set the size of the query sets from 1 to 200, in which each query is independent of the others. Parameters T and θ are set to the optimal values for each dataset, and ε is fixed at 1 for all methods. From Figure 6, we can assess the performance of Graph Update against the other methods. First, we observe that as the size of the query set increases, the MAEs of all methods increase approximately linearly. This is because the queries are independent of each other and the privacy budget is assigned equally to each query; with a linear increase in the number of queries, the noise added to each query answer grows linearly. Second, Figure 6 shows that Graph Update has a lower MAE than the other two methods, especially when the query set is large. As shown in Figure 6a, when the size of the query set is 200, the MAE of Graph Update is 99.8500, while the Laplace method has an MAE of 210.0020 and the Correlated method has an MAE of 135.2078; the MAE of Graph Update is thus 52.45% lower than that of the Laplace method and 26.15% lower than that of the Correlated method. This trend can be observed in Figures 6b, 6c and 6d. Graph Update performs better because part of the query answers does not consume any privacy budget, and noise is only added during the update procedure. Other methods, including the Laplace method, consume privacy budget when answering every query. The result shows the effectiveness of Graph Update in answering a large set of queries. Third, it is worth mentioning that when the size of the query set is limited, the proposed Graph Update does not necessarily outperform the Correlated method. Figure 6a shows that when the size is less than 20, the MAEs of Graph Update and the Correlated method are intermingled. This is because when the query set is limited, the synthetic graph cannot be fully updated and may differ largely from the original graph; therefore, the performance does not necessarily outperform the other methods significantly. This result shows that Graph Update is more suitable for scenarios requiring a large number of queries to be answered.
Performance Evaluation on Diverse Privacy Budgets
In addition, we test the performance of Graph Update with the privacy budget ε varying from 0.1 to 1 in steps of 0.1, using a query set of 100 queries.
It is observed that as ε increases, the MAE evaluation becomes better, which means that the lower the privacy preservation level, the better the utility. In Fig. 7a, the MAE of Graph Update is 1035.40 when ε = 0.1. Even though this preserves a strict privacy guarantee, the query answers are inaccurate and cannot be used in the real world. When ε = 0.7, the MAE drops to 144.0774, retaining acceptable utility in the result. The same trend can be observed on the other datasets. For example, when ε = 0.7, the MAE is 141.7209 in Fig. 7b and 153.0225 in Fig. 7c. Both show great improvement compared with ε = 0.1. These results confirm that utility is enhanced as the privacy budget increases.
We observe that the MAE decreases faster when ε ascends from 0.1 to 0.4 than when ε ascends from 0.4 to 1. This indicates that a larger utility cost is needed to achieve a higher privacy level (ε = 0.1). We also observe that Graph Update and the other methods perform stably when ε ≥ 0.7. This indicates that Graph Update is capable of retaining utility for data release while satisfying a suitable privacy preservation requirement. The evaluation shows that the Graph Update method retains higher accuracy than the other methods when answering large sets of queries, and its performance is significantly enhanced as the privacy budget increases. We can select a suitable privacy budget to achieve a better trade-off.
CONCLUSIONS
Nowadays, the privacy problem has aroused public attention [3,12,20], especially for online social network data, which contain massive amounts of personal information. How to release social network data is a hot topic that attracts much attention. However, existing methods cannot provide accurate results when releasing large numbers of queries due to the huge noise added to query results. This paper proposed an iteration method that transfers the query release problem to an iteration-based update process, providing a practical solution for publishing a sequence of queries with high accuracy. We evaluated our method on numerous graphs. Through extensive experiments on real datasets, we have shown that our method is effective and outperforms the Laplace method and the correlated method. In the future, we will consider more complicated queries, such as cut queries and triangle queries, which can allow researchers to extract more information from the dataset while still guaranteeing users' privacy. | 2018-08-16T13:26:44.512Z | 2018-03-01T00:00:00.000 | {
"year": 2018,
"sha1": "e1a75647341796e80c866320b52a0d7f8543399f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.32604/csse.2018.33.061",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "619fa7c2085ffcba6d700b87cb3b34e821c0ddbe",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
219904310 | pes2o/s2orc | v3-fos-license | General practitioners’ and psychiatrists’ attitudes towards antidepressant withdrawal
Background There has been a recent rise in antidepressant prescriptions. After the episode for which it was prescribed, the patient should ideally be supported in withdrawing the medication. There is increasing evidence for withdrawal symptoms (sometimes called discontinuation symptoms) occurring on ceasing treatment, sometimes having severe or prolonged effects. Aims To identify and compare current knowledge, attitudes and practices of general practitioners (GPs) and psychiatrists in Cornwall, UK, concerning antidepressant withdrawal symptoms. Method Questions about withdrawal symptoms and management were asked of GPs and psychiatrists in a multiple-choice cross-sectional study co-designed with a lived experience expert. Results Psychiatrists thought that withdrawal symptoms were more severe than GPs did (P = 0.003); 53% (22/42) of GPs and 69% (18/26) of psychiatrists thought that withdrawal symptoms typically last between 1 and 4 weeks, although there was a wide range of answers given; 35% (9/26) of psychiatrists but no GPs identified a pharmacist as someone they may use to help manage antidepressant withdrawal. About three-quarters of respondents claimed they usually or always informed patients of potential withdrawal symptoms when they started a patient on antidepressants, but patient surveys say only 1% are warned. Conclusions Psychiatrists and GPs need to effectively warn patients of potential withdrawal effects. Community pharmacists might be useful in supporting GP-managed antidepressant withdrawal. The wide variation in responses to most questions posed to participants reflects the variation in results of research on the topic. This highlights a need for more reproducible studies to be carried out on antidepressant withdrawal, which could inform future guidelines.
Antidepressant use has increased in recent years, with the number of prescriptions in England doubling in the past decade from the number in the mid-2000s. 1 In 2018 the number of antidepressant prescriptions in England was 70.9 million. 2 While antidepressants are generally considered beneficial to many of the people taking them, there are risks of withdrawal symptoms when patients later discontinue treatment. These are also known as discontinuation symptoms. A range of problems varying in intensity and duration have been reported, including increased anxiety, flu-like symptoms, insomnia, nausea, imbalance, sensory disturbances, hyperarousal, 'brain zaps' and depersonalisation. 1,3,4 The UK's National Institute for Health and Care Excellence (NICE) guidelines have been recently updated and state that 'whilst the withdrawal symptoms which arise when stopping or reducing antidepressants can be mild and self-limiting, there is substantial variation in people's experience, with symptoms lasting much longer (sometimes months or more) and being more severe for some patients'. 5 A recent all-party parliamentary group concluded that 'It is incorrect to view antidepressant withdrawal as largely mild, self-limiting and of short duration'. 6 This evidence suggests that withdrawal symptoms could be worse than those identified by NICE. The Royal College of Psychiatrists says that the potential benefits and harms of antidepressants, including withdrawal, should be discussed with the patient. 7 The exact pathophysiology of withdrawal symptoms is unclear, but previous studies have hypothesised that they are a response to long-term physiological adaption of cerebral neural systems, or that they could be caused by a rapid decrease in serotonin availability when the treatment ends abruptly. 3
Issues with antidepressant withdrawal symptoms
There are various confounders surrounding the issue of withdrawal symptom reporting. For example, withdrawal symptoms can be mistaken for relapse, prompting re-starting of the antidepressant or changing to an alternative one, extending antidepressant use. 1 In terms of patient perspective, fewer than 2% of antidepressant users recall being told by the prescriber about any withdrawal effects, or potential difficulties coming off the drugs. 1 Patients who reported lack of knowledge of withdrawal symptoms also experienced greater adverse effects during withdrawal. 3 Another poorly researched concept is the 'nocebo' effect: the expectation of feeling worse on discontinuation. 1 Withdrawal effects can vary depending on the drug. For example, they may occur less frequently and less severely with longer-acting agents such as fluoxetine, 3,8 giving rise to the option of switching to fluoxetine before stopping antidepressants completely, to potentially reduce withdrawal effects. 8 Generally, selective serotonin reuptake inhibitors (SSRIs) with a shorter half-life seem to have worse withdrawal effects. 3 For example, paroxetine has been found to have a higher incidence of antidepressant withdrawal when compared with fluoxetine and sertraline. 8,9 It is not known exactly why this is the case, but withdrawal severity is believed to depend on the elimination half-life of the drug and the patient's rate of metabolism. 3
Alleviating antidepressant withdrawal symptoms
Approaches have been suggested to reduce withdrawal symptoms, but there is conflicting evidence. One example is using cognitivebehavioural therapy (CBT) during withdrawal to promote understanding that any symptoms are temporary and due to withdrawal, rather than indicating an inability to cope without the medication. 3 Patients who experienced worse adverse symptoms when commencing antidepressant treatment were more likely to suffer withdrawal symptoms; therefore, identifying these people could allow closer monitoring during withdrawal. 10 Overall, literature about antidepressant withdrawal seems to differ from the NICE guidelines particularly in that there could be more advice about alleviating symptoms. There is little research exploring what professionals do in practice to inform and manage withdrawal symptoms, which in turn can influence patient outcomes.
To our knowledge, no study has systematically examined and compared how different groups of prescribers perceive withdrawal effects of antidepressants. Knowledge and understanding of general practitioners' (GPs') and psychiatrists' attitudes towards withdrawal symptoms may allow identification of gaps in guidelines, policy or training.
Aims
This study aimed to identify current knowledge, attitudes and practices of GPs and psychiatrists in Cornwall, UK (population: 538 000) concerning antidepressant withdrawal symptoms.
Method
We completed a cross-sectional study of the opinions and perceptions of GPs and psychiatrists. The two groups were asked to complete a very similar questionnaire, with the GP survey consisting of nine questions and the psychiatrists' survey having 14 questions (Supplementary information 1 and 2, available at https://doi.org/ 10.1192/bjo.2020.48). The questionnaires contained questions assessing perceptions about, and approach to, antidepressant withdrawal. Both surveys consisted of a mix of questions with predetermined answers, questions requiring the answer to be entered, and one question that allowed for free-text comments. The questionnaires were constructed by the authors on the basis of a review of the literature and were co-designed with a lived experience expert. Some limited demographic details were collected from the psychiatrists, although for both groups the survey was anonymous. The method of delivery of the questionnaire differed for the two groups. An edited summary of the questions is included in the Appendix.
GPs
Across Cornwall, locality-based prescribing meetings are held in the North, Central and West GP localities of NHS Kernow, the clinical commissioning group (CCG) for the county, four times a year. These 12 meetings a year, organised by the CCG's Medicines Optimisation Team, are intended to have a focus on clinical prescribing and medicines optimisation. A GP prescribing lead from each primary care practice of the county is invited to attend these meetings and disseminate the learning to other GPs within their own practice. The questionnaire was handed out to GPs attending the winter 2019 meetings.
Psychiatrists
For the psychiatrists, an introductory email and electronic link to the survey were emailed out to all 60 practitioners employed by Cornwall Partnership NHS Foundation Trust purposively selected through established contacts. A reminder email was sent 1 week later. Ten of these were psychiatric trainees (core training years CT1-3) and 50 were in senior posts (registrars in specialty training years ST4-6, consultants and associate specialists). For simplicity, this paper will refer to this group as 'psychiatrists' and the group from GP meetings as 'GPs'.
Analysis of data was performed using Microsoft Excel (365 version, 2004). Descriptive statistics were used to describe and summarise the data highlighting the main elements of the study. Mann-Whitney U-tests were also used to evaluate whether there was any difference in responses between the two groups. IBM SPSS Statistics 26 for Windows was used to carry out these statistical tests.
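For illustration, the group comparison reported here could be reproduced with the equivalent scipy call (the study used SPSS; the data below are hypothetical, not the study's responses):

```python
from scipy.stats import mannwhitneyu

# Hypothetical severity ratings on the 1-10 scale used in the survey.
gp_scores = [3, 4, 4, 5, 2, 4, 3, 5, 6, 4]
psychiatrist_scores = [5, 6, 4, 7, 5, 8, 5, 6, 4, 5]

u_stat, p_value = mannwhitneyu(gp_scores, psychiatrist_scores,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```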
Ethics and participation consent
No ethical permission was required as this was a study to evaluate knowledge and attitudes as part of a service evaluation. Furthermore, it involved a group of medical practitioners where consent was implicit by participation. All participants were advised at the start of the study that participation was voluntary and their replies, i.e. data, would be anonymised and analysed. We also used the NHS Research Authority tool (http://www.hradecisiontools.org.uk/research/index.html), which helped confirm that no ethics approval was needed for this project (Supplementary information 3).
Results
There are 60 GP surgeries (primary care centres) within NHS Kernow CCG. The three meetings in winter 2019 were attended by a total of 53 GPs (88% attendance), with completed questionnaires returned by 42 (79%) of the attendees. No other GP characteristics were recorded. Of the 60 doctors and associate specialists working for psychiatry services in Cornwall Partnership NHS Foundation Trust, 26 responded to the electronic survey (43% response rate): consultants (n = 17); associate specialists (n = 2); speciality doctor (n = 1); ST4-6 trainee (n = 1); CT1-3 trainees (n = 4); 1 respondent did not state their level of experience so it is unknown. A range of specialties were represented: general adult psychiatry (n = 12); child and adolescent psychiatry (n = 7); complex care and dementia psychiatry (n = 4); psychiatry of intellectual disability (n = 2); 1 respondent did not give their specialty.
For the purposes of this paper, we will focus on the answers to the questions concerning duration of withdrawal symptoms, severity of withdrawal symptoms, their frequency of occurrence, what might prompt stopping antidepressants, what proportion of patients approached the respondents about stopping their antidepressants, how respondents would withdraw them, who they might work with and how often they discuss potential withdrawal symptoms before commencing treatment.
When asked how long they believe withdrawal symptoms typically last for, 53% (22/42) of the GPs and 69% (18/26) of the psychiatrists perceived the duration to be between 1 and 4 weeks ( Table 1). There was no statistical difference between the two groups (P = 0.979). As regards the severity of withdrawal symptoms, on average (on a scale of 1 to 10, where 1 is no negative effect at all and 10 is lifethreatening) the mean value for the GPs' responses was 3.8 (s.d. = 1.3), although the range was 1-6. The mean value for psychiatrists was 5 (s.d. = 1.7) and the range was 1-8. On the e-questionnaire completed by the psychiatrists, they were only able to select whole-number answers. On the paper survey that was completed by the GPs, some chose to give ranges. For the purposes of analysis, the midpoint of the range they gave was used. The results for this question can be seen in Fig. 1. There is a statistical difference between the two groups for this question (P = 0.003), with the psychiatrists perceiving more severe withdrawal symptoms than the GPs.
The proportion of people believed to be affected by withdrawal symptoms when stopping antidepressants is shown in Table 2, with just under half of our GP respondents (45%) and just under onethird of the psychiatrists (31%) perceiving that 21-40% of patients are affected when withdrawing antidepressants. There is no statistical difference between the two groups (P = 0.606).
When asked what might prompt them to discuss stopping antidepressants with a patient (such as any particular indications or time periods), five main themes emerged for each group, as shown in Table 3.
All but four of the GPs provided free-text answers to a question about the three main concerns that might prompt them to discuss with a patient whether to stop antidepressants (such as any particular indications or time periods). The main headings are as follows: duration of treatment, for example after 6 months or after 1 year, was mentioned by 26 GPs; 9 GPs said that the patient asking to end treatment might act as a prompt; treatment side-effects and a routine medication review were each mentioned by 8 GPs; and noting an improvement in the patient's symptoms was cited by 7 GPs.
All the psychiatrists provided free-text answers to this question. The main headings are as follows: side-effects (21 responses); lack of efficacy (13 responses); duration of treatment (11 responses); prompted by the patient asking to end treatment (10 responses); noting an improvement in the patient's symptoms (8 responses).
When the GPs were asked to estimate what proportion of their patients who are on antidepressants approach them about stopping their antidepressant medication, the mean result was 20%, although the range was 2-100%. The mean result from psychiatrists was 23% (range 0-100%), from the 24 out of 26 respondents who gave a percentage.
The question on how they would typically go about withdrawing a patient from antidepressants elicited the responses in Table 4 (respondents could tick all that applied).
The psychiatrist who responded 'other' stated that they would 'Check the Maudsley guidance and follow that'.
When asked who they might consult/work with to support a patient through withdrawal and staying off medication (this could include professionals and non-professionals), the most common answers from the GPs were: social prescriber, mentioned by 12 GPs; no other support needed (or available), mentioned by 10; and consultant psychiatrist or other mental health counsellor, each mentioned by 7. The main responses from the psychiatrists were: patient/carer/family, mentioned by 13; GP, by 10; pharmacist or care coordinator, each mentioned by 9; and other counsellor/ mental health practitioner, mentioned by 5.
As regards how often there is a discussion of potential withdrawal symptoms with the patient before commencing antidepressant treatment (Table 5), there was no statistical difference between the two groups (P = 0.438).
The final question allowed for free-text comments on antidepressant withdrawal, to which nine GPs and ten psychiatrists responded. Three GPs perceived that severe withdrawal was not a problem in their experience (these GPs scored the severity of withdrawal symptoms as 3, 4 and 5). Two other GPs noted that patients often stop taking antidepressants when they feel better, often without consulting their doctor.
Discussion
This small study looked to understand a number of issues related to the withdrawal of antidepressants against a background of controversy and fierce debate on the benefits and harms of this class of medicine. 11 It is a study that involved a full ecosystem of mental health management, i.e. primary and secondary care across a single CCG and geographical area (Cornwall) that covers about 1% of the UK population. We found some differences and similarities in views both within the two groups of healthcare professionals and also across the two groups.

Limitations

The relatively small sample (42 GP responses and 26 psychiatry responses) means that there is a risk of type 2 error, and genuine differences might be revealed by a larger sample. To address this limitation, we hope to extend this study to the rest of the South West of the UK; the current study has been a useful pilot. We recognise the limitations of this small study undertaken in just one geographical region and, for the GPs, with a self-selected group of healthcare professionals. Cornwall is a predominantly rural and White (99.5%) county. It is also one of the most deprived areas not only of the UK but also of the European Union. Thus, it is possible that healthcare professionals working in different healthcare systems may hold different views on this subject. Further studies covering wider areas of the UK would show whether these results are reproducible.
We also used a survey that could have introduced biases such as recall bias and answering tendencies. Triangulation with objective assessments of patient management such as case-note audits would have added to our characterisation of current practice.
Our questions were about withdrawal of antidepressants for any indication, and respondents may actually have different views on the severity or absence of withdrawal symptoms depending on the condition being treated.
Main findings
A systematic review reported that, overall, seven out of ten studies found that a large proportion of participants reported experiencing antidepressant withdrawal symptoms for more than 2 weeks. 1 This is somewhat contrary to our findings. The majority of respondents from both groups in our study thought that withdrawal symptoms typically last between 1 and 4 weeks: 53% (22/42) of GPs and 69% (18/26) of psychiatrists. However, a wide range of answers was given. For example, the GPs perceived that withdrawal symptoms could typically last anywhere from a matter of days (6 respondents (15%) suggested less than a week) to 4-6 months (2 respondents (5%)), which was similar to the perceptions of some of the psychiatrists. Only 14 (33%) of our GPs and 12 (46%) of our psychiatrists considered that the proportion of people affected by withdrawal symptoms is greater than 40%.
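The sampling uncertainty around such small-sample proportions can be made explicit with confidence intervals; a minimal sketch (Wilson method, statsmodels) for the 22/42 and 18/26 figures quoted above:

```python
from statsmodels.stats.proportion import proportion_confint

for label, k, n in [("GPs", 22, 42), ("Psychiatrists", 18, 26)]:
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{label}: {k}/{n} = {k/n:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```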
Incidence and severity of withdrawal symptoms
As well as the heated debate about the merits of antidepressants, it is argued that another important discrepancy between the scientific literature and prevailing beliefs held by leading psychiatrists concerns withdrawal symptoms on discontinuation of antidepressant medication. 12 According to several studies, severe and persistent withdrawal reactions affect up to 50% of antidepressant users. Others quote that more than 50% of people who attempted to stop antidepressants experienced withdrawal effects and nearly 50% of those experiencing withdrawal effects described them as severe. 1 Notably, in our study the psychiatrists considered withdrawal effects to be more severe than the GPs. It is possible that this is due to differences in patient populations encountered by the two groups, with psychiatrists perhaps more likely than GPs to encounter patients who have been on antidepressants for longer periods of time, which could make withdrawal symptoms worse. Also, relationships have been shown between mental disorders and medical comorbidities, 13 so it is possible that those with more severe mental disorders or more complex needs (who are therefore more likely to need psychiatrist input) could show more severe withdrawal symptoms owing to an interplay with other medical needs.
Prescribing practices
In an analysis of 52 recorded primary care consultations for depression, anxiety and stress, patients resisted treatment because of doubts about its efficacy based on previous experiences, fears about dependency and/or side-effects, and concerns about attending group therapy. 14 Historically, it was known that many patients on longer-term courses of antidepressants were not being appropriately reviewed. 15 A study of Scottish GPs during 2014-2015 noted that the lack of proactive medication reviews (e.g. patients only presenting in crisis) contributed to further antidepressant prescribing growth over time. 16 Twenty-six (62%) of our GPs and 11 (42%) of our psychiatrists indicated that duration of treatment would make them consider withdrawal of treatment.
Patient support during withdrawal
It is interesting that 12 GPs (29%) identified a social prescriber as someone with whom they would consult/work to support a patient through withdrawal and staying off medication, yet the evidence base for social prescribing is very limited. It is recognised that there is a need to consider support for health professionals in the management of antidepressant medication and discussions of discontinuation in particular. 17 Nine (35%) psychiatrists said they would involve pharmacists and have access to them, but GPs did not mention seeking help from pharmacists. This raises the question of whether it would be valuable to encourage involvement of community pharmacists in antidepressant withdrawal that is managed by GPs. A recent study showed the potential support that community pharmacists could give to patients on antidepressants, such as by monitoring adherence and efficacy of treatment. 18
Patient advice on potential withdrawal symptoms
Approximately three-quarters of respondents claimed that they always or usually provide information to patients about potential antidepressant withdrawal symptoms before commencing them. This is in sharp contrast to an online survey of a self-selected convenience sample of patients, mainly from Europe, North America and Australasia, in which less than 1% replied that they had been told anything about withdrawal effects. 19
Table 4. Methods used for withdrawing a patient from antidepressants, as reported by general practitioners (n = 42) and psychiatrists (n = 26).
Implications for the patient
Our co-author, K.A., a lived experience expert who helped design the study and questionnaires, shares her perspective in response to the results of this study: 'This project highlights that there still remain gaps between perceptions of clinicians to those of patients. It is my personal experience and those who I represent that withdrawing from antidepressants would be much longer lasting and have more serious withdrawal effects than what has been reported clinically here. Personally, and I expect for many, it is a scary project to undertake. There is a lack of resources and support available to support person-centred withdrawal. I don't recall ever being warned by anyone treating me about withdrawal effects beyond the generic message of don't stop suddenly. The study highlights that many of the concerns I have outlined have not been objectively examined.'
Implications for practice
One of the key findings of this study is that roughly 75% of psychiatrists and GPs say they always or usually warn patients about withdrawal side-effects, whereas patient surveys suggest that fewer than 1% are warned. This could be because clinical practice is rapidly changing as awareness grows, which would be positive, or because there is a communication gap and the message is not clear enough. The mismatch between the findings of our study and previous patient surveys could also mean that some patients who are told about withdrawal symptoms simply do not recall this. Both of these possible negative explanations suggest a need to encourage more GPs and psychiatrists to inform patients of potential withdrawal effects and to do this in a way that patients will remember.
Implications for policy
The results highlight that a proportion of psychiatrists involve pharmacists when withdrawing a patient from antidepressants, but GPs do not. Encouraging community pharmacists to support GPs in this process may therefore be a useful source of support.
Implications for research
Antidepressant withdrawal is a serious public health concern that warrants more research and adequate appraisal by academic psychiatry. 12 It is interesting to note such high variation in responses from all participants. The diversity of responses reflects the wide variation in results from research studies. This shows a need for more reproducible studies into the patient experience of antidepressant withdrawal. We suggest that future studies exploring this topic should carefully consider their methodology to make it as representative as possible. This would give a more precise impression of the proportion of patients who are affected by antidepressant withdrawal, how long the symptoms typically last and so on. This could also include, for example, identifying patient groups who are more at risk of severe withdrawal symptoms. This could inform future guidelines to make them more specific, so that GPs and psychiatrists are able to provide more consistent evidence-based advice and management to patients, which could improve patient outcomes.
Data availability
The data that support the findings of this study are available from the corresponding author on reasonable request.
ENTOMOLOGIA HELLENICA
Insects have been reported to be associated with a broad variety of microorganisms, affecting the host biology in many different ways. Among them, Wolbachia, an obligatory intracellular and maternally inherited symbiont, has recently attracted a lot of attention. Beside insects, Wolbachia are found in association with a wide variety of other invertebrate species, including mites, scorpions, spiders, crustaceans and filarial nematodes. Several surveys have indicated that Wolbachia may be a symbiont of up to 70% of all insect species, rendering Wolbachia the most ubiquitous intracellular symbiotic organism on Earth. Wolbachia-host interactions range from many forms of reproductive parasitism to mutualistic symbioses. Different Wolbachia strains have been found to induce a number of reproductive alterations such as feminization, parthenogenesis, male killing or cytoplasmic incompatibility. Despite their common occurrence and major effects on host biology, speciation and ecological diversity, little is known about the molecular mechanisms that mediate Wolbachia-host interactions. Recent studies focus on the potential of Wolbachia-based methods for the biological control of insect pests and disease vectors of agricultural, environmental and medical importance.
Introduction on Insect Symbiosis
Several types of insect-microbe associations are present in nature, many of which are accountable to a greater or lesser extent for the evolutionary success of insects. Microbes are ubiquitous both inside and outside the insect bodies, inside or outside the insect cells, and interact with their host in a broad variety of relationships that range from mutualism, which is beneficial to the host, to parasitism, where the symbiont has a negative impact on the host's biology (Ishikawa 2003). The most intimate association is intracellular symbiosis, with the symbiont being vertically transmitted among generations. Intracellular symbionts of insects are divided into two groups. The first one, the primary symbionts, covers symbiotic microbes that usually supply hosts with nutrients and are harboured by the host bacteriocyte, a special cell differentiated for this purpose (Buchner 1965, Ishikawa 1989). Secondary symbionts, also known as "guest microbes", are not restricted to a particular cell type, but are present in many cell types of the host insect. Unlike primary symbionts, guest microbes can colonize naive hosts through horizontal transmission among host individuals and species. These associations are typically facultative from the perspective of the host and can be deleterious to the host (parasitism), beneficial only to the symbiont (commensalism) or beneficial to both parties (mutualism). During the last three decades, a novel type of symbiosis has been described for bacteria like Wolbachia, Cardinium, Rickettsia, Spiroplasma and Arsenophonus, which manipulate the host reproduction system to their advantage (reproductive parasitism). Wolbachia, the best-studied symbiont of this group, will be the focus of this review.
Introduction on Wolbachia
Collaborative studies between Marshall Hertig, an entomologist, and Samuel Wolbach, a pathologist, on the presence and identification of microorganisms in arthropods resulted in the discovery of Wolbachia in the gonads of the mosquito Culex pipiens in 1924 (Hertig and Wolbach 1924); however, the complete description of this symbiotic association was published in 1936 (Hertig 1936). For decades, Wolbachia was known only from mosquitoes; the development of PCR-based screening methods clearly indicated that Wolbachia is widespread in nature (O'Neill et al. 1992). It has been demonstrated that Wolbachia infects up to 70% of insect species and a large number of other arthropods (including spiders, scorpions, mites, springtails, terrestrial isopods) as well as filarial nematodes (Werren et al. 1995, Bandi et al. 1998, Jayaprakash and Hoy 2000, Werren and Windsor 2000, Hilgenboecker et al. 2008, Werren et al. 2008). These studies place Wolbachia among the most common intracellular bacteria known, with estimates of several million infected species (Werren et al. 1995, Jayaprakash and Hoy 2000, Werren and Windsor 2000, Hilgenboecker et al. 2008, Werren et al. 2008).
Molecular phylogenetic analysis based on the 16S rRNA gene indicated that Wolbachia belongs to the α-Proteobacteria, being evolutionarily related to other intracellular bacterial species of the genera Anaplasma, Ehrlichia and Rickettsia (Breeuwer et al. 1992, O'Neill et al. 1992, Rousset et al. 1992). A significant amount of Wolbachia genomic information is available, since the genomes of four Wolbachia strains (wMel, wRi, wPip and wBm) have been completed (Wu et al. 2004, Foster et al. 2005, Klasson et al. 2008, Klasson et al. 2009). The available genomic information allowed the development of two multilocus sequence typing (MLST) systems which can be used for the genotyping of any given Wolbachia strain (Baldo et al. 2006, Paraskevopoulos et al. 2006); they have also facilitated the classification of Wolbachia strains into 10 major phylogenetic clades which have been named 'supergroups' (Werren et al. 1995, Bandi et al. 1998, Zhou et al. 1998, Lo et al. 2002, 2007, Rowley et al. 2004, Bordenstein and Rosengaus 2005, Ros et al. 2008, Bordenstein et al. 2009).
Several studies have shown that Wolbachia is mainly localized in the reproductive tissues of arthropods and is responsible for the induction of a number of reproductive alterations including feminization, thelytokous parthenogenesis, male killing and cytoplasmic incompatibility (CI) (Werren 1997, Bourtzis and O'Neill 1998, Bourtzis and Braig 1999, Stouthamer et al. 1999, Werren et al. 2008). The widespread distribution of Wolbachia as well as its manipulation of the host's reproductive system places this symbiont among the most promising targets for disease/pest control. The aim of this review is to present the Wolbachia-induced reproductive manipulations with an emphasis on how this symbiont could be used for the control of insect pests and disease vectors.
Wolbachia-induced Phenotypes
Wolbachia is maternally inherited and has evolved a number of strategies to ensure transmission by manipulating the host reproductive system (Werren et al. 2008). These strategies include: a) feminization, the conversion of genetic males into females; b) parthenogenesis, the production of diploid offspring in the absence of sexual reproduction; c) male killing, the killing of infected males to the benefit of infected female siblings; and d) cytoplasmic incompatibility (CI), the inability of infected males to successfully fertilize eggs from either uninfected females or from females infected with different Wolbachia types. Each of these phenotypes increases the frequency of infected females in the host population, and therefore they represent bacterial adaptations to increase transmission of the microorganisms. Such parasite effects on hosts are commonly referred to as "reproductive parasitism" (Bandi et al. 2001).
Feminization is the most obviously beneficial strategy for a maternally inherited bacterium such as Wolbachia. The conversion of genetic male offspring into females doubles the potential Wolbachia transmission to the following generation. However, Wolbachia-induced feminization is the most infrequently described of the Wolbachia-induced phenotypes, reported most commonly in several species of terrestrial isopods (Bouchon et al. 1998, Rigaud et al. 1999a, Michel-Salzat et al. 2001). In these isopod hosts, Wolbachia within genetic males inhibits the development of the androgenic gland and the production of the androgenic hormone (Azzouna et al. 2004). These "feminized" males may however suffer a fitness disadvantage compared to genetic females, with males preferring to mate with genetic females (Moreau et al. 2001). A feminizing Wolbachia infection with complete penetrance would eliminate phenotypic males and lead to the extinction of both the host population and the symbiont. Such events, although difficult to observe, may occur in nature. On the other hand, populations that do persist take advantage of imperfect transmission of feminizing Wolbachia strains (Rigaud et al. 1999b) or constrain the ability of Wolbachia to spread by exploiting the local scarcity of males (Moreau and Rigaud 2003). Recent studies suggest that Wolbachia can also induce feminization in insect species, as reported for the leafhopper Zyginidia pullula (Hemiptera: Cicadellidae) and the butterfly Eurema hecabe (Lepidoptera: Pieridae) (Negri et al. 2006, Narita et al. 2007).
Parthenogenesis, the production of female offspring in the absence of sperm fertilization, offers an obvious advantage to a maternally inherited microorganism. If a 100% occurrence is assumed, parthenogenesis, like feminization, doubles the potential transmission of Wolbachia to the offspring. Interestingly, all currently documented cases of Wolbachia-induced parthenogenesis are found only within haplodiploid species belonging to Thysanoptera (Arakaki et al. 2001), Acari (Weeks and Breeuwer 2001) and Hymenoptera (Stouthamer et al. 1993, Zchori-Fein et al. 1995). Haplodiploidy describes the development of (diploid) females from fertilized eggs, while (haploid) males develop from unfertilized eggs. In this particular sex determination system, parthenogenesis may occur either by complete suppression of meiosis (apomixis) or by restoration of diploidy upon meiosis (automixis). Wolbachia-induced parthenogenesis has been found to be apomictic in mites (Weeks and Breeuwer 2001) and automictic in wasps (Zchori-Fein et al. 1995).
The killing of genetic males by Wolbachia has been described in four different arthropod orders, namely Diptera (Hurst et al. 2000, Dyer and Jaenike 2004), Coleoptera (Fialho and Stevens 2000; Majerus et al. 2000), Lepidoptera (Jiggins et al. 2000) and Arachnida (Zeh et al. 2005). Male killing may be advantageous under limited conditions, where resource reallocation from dead males to female siblings increases the fitness of infected females (Hurst 1991, Hurst et al. 2003). In all cases detected, Wolbachia-induced male killing meets the above criterion. Another predicted benefit would be the resulting avoidance of inbreeding (Werren 1987).
Despite the fact that cytoplasmic incompatibility (CI) is the most commonly described reproductive abnormality induced by Wolbachia, the underlying mechanism still remains under investigation. CI has been described in many different arthropod orders: Diptera (Yen and Barr 1973), Coleoptera (Wade and Stevens 1985), Acari (Breeuwer and Jacobs 1996), Isopoda (Moret et al. 2001), Lepidoptera (Brower 1976), Hymenoptera (Reed and Werren 1995), Homoptera (Hoshizaki and Shimada 1995) and Orthoptera (Kamoda et al. 2000). As shown in Figure 1, CI can be unidirectional or bidirectional, depending on the number of Wolbachia strains involved in the phenotype (Breeuwer and Werren 1990, O'Neill and Karr 1990, Bourtzis et al. 1998, Bourtzis et al. 2003). Unidirectional CI describes the embryonic lethality observed when a Wolbachia-infected male mates with an uninfected female. All the other possible crosses are fully compatible, favoring the relative fitness of infected females and the spread of Wolbachia. Bidirectional CI occurs between populations infected with different Wolbachia strains, when an infected male mates with a female lacking the same Wolbachia strain. The second type of incompatibility reproductively isolates two populations and may contribute to speciation (Werren 1998, Bordenstein 2003, Telschow et al. 2005).
The types of incompatible crosses lead to the assumption that there are at least two distinct functions involved in CI, the "modification" and the "rescue" function (Werren 1997, Bourtzis et al. 1998, Merçot and Poinsot 1998). When the female lacks the "rescue" function, the "modification" function of the male results in embryonic lethality. Although the exact mechanism remains unclear, the incompatible phenotype is associated with an asynchrony in the development of the male and female pronuclei, probably due to impaired histone deposition in the male pronucleus (Lassy and Karr 1996, Tram and Sullivan 2002, Landmann et al. 2009).
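A minimal sketch of this modification/rescue logic, assuming a cross succeeds only when the female carries every strain present in the male (strain names are arbitrary labels):

```python
def cross_compatible(male_strains: set, female_strains: set) -> bool:
    """A cross succeeds when the female's 'rescue' covers the male's 'modification'."""
    return male_strains <= female_strains  # subset test

assert cross_compatible(set(), {"wA"})       # uninfected male: always compatible
assert not cross_compatible({"wA"}, set())   # unidirectional CI
assert not cross_compatible({"wA"}, {"wB"})  # bidirectional CI (both directions fail)
assert not cross_compatible({"wB"}, {"wA"})
```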
In addition to the above-mentioned reproductive abnormalities, Wolbachia can positively or negatively influence other aspects of host fitness. In Aedes albopictus (Diptera: Culicidae), fitness benefits resulting from Wolbachia infection affect both fecundity and longevity (Dobson et al. 2002). Both singly and doubly infected females produce more eggs and live longer than uninfected females; no effect on males has been observed. It should be noted that similar observations were recently reported for Drosophila simulans (Diptera: Drosophilidae) (Weeks et al. 2007). Negative effects of Wolbachia on host longevity have been well documented for the wMelpop strain (Min and Benzer 1997). Flies bearing wMelpop suffer a significant reduction in longevity, most likely due to overproliferation of the symbiont in the neuronal tissue (Min and Benzer 1997, McGraw et al. 2002, McMeniman et al. 2009).
FIG. 1. Schematic representation of unidirectional (A) and bidirectional (B) cytoplasmic incompatibility. Insects bearing incompatible Wolbachia strains are marked with red or black.
Although Wolbachia successfully evades the host immune system and does not induce the normal antibacterial response (Bourtzis et al. 2000), Wolbachia infection has been shown to be a key player in host immunity. In at least one host-parasitoid system, the presence of Wolbachia decreases fitness in both the host and the parasitoid (Fytrou et al. 2006). D. simulans infected with Wolbachia is less effective in killing the eggs laid by the parasitoid Leptopilina heterotoma (Hymenoptera: Eucoilidae). Similarly, Wolbachia infection of L. heterotoma makes the parasitoid more vulnerable to the host defenses. The exact nature of these interactions is currently unknown. On the other hand, recent reports suggest that Wolbachia infections provide virus protection in insect hosts (Hedges et al. 2008, Teixeira et al. 2008).
Practical Applications
The widespread distribution of Wolbachia as well as its manipulation of the host's reproductive system render it a key player in pest control management (Table 1). Wolbachia's potential as a novel, environmentally friendly biocontrol agent has already attracted a lot of attention (Beard et al. 1993, Sinkins et al. 1997, Bourtzis and Braig 1999, Sinkins and O'Neill 2000, Aksoy et al. 2001). Several strategies have been proposed, most of which take advantage of the induction of cytoplasmic incompatibility (Bourtzis 2008).
Despite the global distribution of Wolbachia, many important agricultural pests (e.g. Bactrocera oleae) and disease vectors (Aedes aegypti, Anopheles gambiae) are not naturally Wolbachia-infected. However, many studies have shown that Wolbachia can be transferred to and established in a naive host, resulting in the expression of the expected reproductive phenotype (Boyle et al. 1993, Braig et al. 1994, Poinsot et al. 1998, Sasaki and Ishikawa 2000, McGraw et al. 2001, Zabalou et al. 2004a, b, Xi et al. 2005). Based on these observations, Wolbachia may serve as an important tool for the "Incompatible Insect Technique", the use of a symbiont-associated reproductive incompatibility for the control of insect pests and disease vectors (Bourtzis and Robinson 2006).
A successful example of stable transinfection of a Wolbachia-free agricultural pest has been reported for Ceratitis capitata (Diptera: Tephritidae) (Zabalou et al. 2004b). Wolbachia strains from the host Rhagoletis cerasi (Diptera: Tephritidae) have been used to stably infect the Mediterranean fruit fly through embryonic injection. Crosses between uninfected females and Wolbachia-infected males resulted in 100% egg mortality, while crosses between fly lines bearing different Wolbachia strains were 100% bidirectionally incompatible. The major advantage of the Wolbachia-based Incompatible Insect Technique over the Sterile Insect Technique lies in the fact that the insects do not have to be irradiated before release. However, the necessity of employing an effective sexing strain of the insect pest, so that only infected males are released, still remains (Bourtzis and Robinson 2006). Zabalou et al. (2009) described a Wolbachia-infected line of the VIENNA 8 genetic sexing strain of the medfly that carried the selectable marker temperature-sensitive lethal (tsl). The transferred Wolbachia induced high levels of CI even after the temperature treatment required for male-only production.
Genetic manipulation that reduces the fitness of a pest population would provide a useful tool to complement current control strategies. Drive systems are an important component of population replacement strategies that provide mechanisms for the autonomous spread of desired genotypes/transgenes into the targeted population (Dobson 2003). Besides autonomous transposons, primary candidates for drive strategies are bacterial symbionts used as expression vehicles (Curtis and Sinkins 1998, Turelli and Hoffmann 1999). The reproductive advantage afforded by CI to Wolbachia-infected females promotes the spread of the maternally inherited Wolbachia infection. Thus, desired genotypes/transgenes linked to a Wolbachia infection would be expected to spread into a targeted population following the seeding of that population with proper Wolbachia-infected females. Xi et al. (2005) demonstrated the ability of wAlbB to spread into an A. aegypti population after seeding of an uninfected population with infected females, reaching infection fixation within seven generations in laboratory cage tests.
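The invasion dynamics sketched in this paragraph can be illustrated with a toy recursion in the spirit of classical CI models (e.g. Caspari-Watson); it assumes perfect maternal transmission, no fitness cost and random mating, with parameter values chosen for illustration only:

```python
def next_freq(p: float, s_h: float = 0.95) -> float:
    """One generation of CI dynamics; s_h = fraction of incompatible embryos dying."""
    infected = p                            # offspring of infected mothers all survive
    uninfected = (1 - p) * (1 - s_h * p)    # uninfected mothers lose s_h * p of embryos
    return infected / (infected + uninfected)

p = 0.10                                    # seed the population at 10% infection
for gen in range(1, 21):
    p = next_freq(p)
print(f"frequency after 20 generations: {p:.3f}")  # near fixation under these assumptions
```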
Age is a critical determinant of the ability of most insect vectors to transmit a range of human pathogens. This is due to the fact that most pathogens require a period of extrinsic incubation in the insect host before pathogen transmission can occur. This developmental period for the pathogen often comprises a significant proportion of the expected lifespan of the vector. As such, only a small proportion of the population that is oldest contributes to pathogen transmission (Cook et al. 2008). Given this, strategies that target vector age would be expected to obtain the most significant reductions in the capacity of a vector population to transmit disease. The identification of insect symbionts that shorten the host lifespan would offer new tools for the control of vector-borne diseases (Sinkins and O'Neill 2000). McMeniman et al. (2009) reported the successful transfer of wMelpop, a life-shortening strain of Wolbachia, into the major mosquito vector of dengue, Aedes aegypti (Diptera: Culicidae). The association halved host life span under laboratory conditions and the symbiont induced complete cytoplasmic incompatibility, which should facilitate its invasion into natural field populations.
Concluding Remarks
During the last decades, insect symbiosis has gained a lot of attention as a widespread phenomenon affecting host biology in many ways. Among the bacteria related to insects in a positive or negative way, Wolbachia is doubtless the most ubiquitous. Causing a broad range of reproductive phenotypes in its hosts, Wolbachia is a key player with biological, ecological and evolutionary significance. Due to its unique properties, Wolbachia offers the potential for the development of novel and environmentally friendly biotechnological strategies for the control of insect pests and disease vectors.
TABLE 1. Synopsis of Wolbachia-based applications.
The Use of Electronic Nose in the Quality Evaluation and Adulteration Identification of Beijing-You Chicken
The objective of this study was to reveal the secrets of the unique meat characteristics of Beijing-you chicken (BJY) and to compare its quality and flavor with those of Luhua chicken (LH) and Arbor Acres broilers (AA) at their typical market ages. The results showed that the meat of BJY was richer in essential amino acids, arachidonic acid, inosine monophosphate (IMP), and guanosine monophosphate (GMP). The total fatty acid and unsaturated fatty acid contents of BJY chicken and LH chicken were lower than those of AA broilers, whereas the ratios of unsaturated fatty acids/saturated fatty acids (2.31) and polyunsaturated fatty acids/monounsaturated fatty acids (1.52) of BJY chicken were the highest. The electronic nose and SPME-GC/MS analyses confirmed the significant differences among these three chickens, and the variety and relative content of aldehydes might contribute to the richer flavor of BJY chicken. The meat characteristics of BJY were fully investigated and showed that BJY chicken might be favored among these three chicken breeds, with the best flavor properties and the highest nutritional value. This study also provides an alternative way to identify BJY chicken among other chickens.
Introduction
Beijing-you (BJY) chicken, as one of 27 rare breeds in China, is increasingly favored by Chinese customers for its superior meat and egg qualities [1]. In 2020, BJY chicken was awarded the "Agro-product Geographical Indications" by the Ministry of Agriculture of the P.R.C., due to its distinctive appearance, strong viability, and stable genetic performance [2]. As a result, the price of BJY chicken, approximately RMB 70/500 g, is much higher than that of other commercially available chickens. However, very few researchers have focused on investigating the quality and flavor characteristics of BJY chicken or on adulteration identification technology to differentiate BJY chicken from other low-value chickens, which could help BJY chicken remain competitive in the market.
Consumer acceptance of chicken meat relies on its quality, such as visual appearance, smell, tenderness, and juiciness [3]. Nowadays, nutrition and sensory quality are noted as key factors in the consumer perception of chicken meat [4,5]. Among them, the flavor, including taste and odor, is one of the most important characteristics [6]. Taste is the sensation that the tongue receives when it contacts with soluble substances, such as free amino acids and nucleotides, while the odor is sensed through the olfactory organs [7]. Gas chromatography-mass spectroscopy (GC/MS) is the most commonly technique to analyze odor profiles of meat products, which could provide an accurate approach for the IMF content was measured as described by Ju et al. [20], with slight changes. A minced meat sample (5 g) of each chicken was mixed with 50 mL of petroleum ether to ultrasonically extract the IMF for 45 min. Extracted IMF was filtered, dried with anhydrous Na 2 SO 4 , and concentrated by rotary evaporator in a 70 • C water bath. The above steps were repeated three times to obtain IMF. The results were expressed as the weight percentage of wet muscle tissue. The nitrogen (N) content was assayed using the Kjeldahl method, which was used to calculate CP by multiplying N × 6. 25. The results were expressed as the weight percentage of wet muscle tissue.
Determination of Nucleotide Compound Contents
Nucleotide content was estimated as described by Jung et al. [21], with slight changes. A minced meat sample (5 g) of each chicken was mixed with 20 mL of 5% (volume/volume) perchloric acid to extract nucleic acids. Extracted nucleic acids were centrifuged at 9200× g for 10 min. The supernatant was then adjusted to pH 6.4 with 1 mol/L KOH, placed in a volumetric flask, adjusted to a volume of 25 mL with distilled water, and filtered through a 0.22 µm membrane filter. Adenosine triphosphate (ATP) and its related compounds were measured by HPLC (Shimadzu, Kyoto, Japan) equipped with an SPD-10A (V) detector and a VP-CDS C18 column (4.6 mm id × 250 mm, 5 µm). The sample (10 µL) was injected at a flow rate of 0.7 mL/min, and the peak was detected at 254 nm. The amounts of ATP, adenosine diphosphate (ADP), adenosine monophosphate (AMP), IMP, inosine (HxR), hypoxanthine (Hx), and GMP were determined and calculated based on the corresponding standards. All standard reagents were purchased from Sigma (Merck, Darmstadt, Germany). The results were expressed as milligrams of nucleotides per 100 g of wet muscle tissue.
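Quantification against external standards of this kind typically reduces to a linear calibration of peak area versus concentration; the sketch below shows the idea with invented numbers (it is not the authors' actual calibration):

```python
import numpy as np

# Invented standard series: concentration (mg/100 g) vs. peak area at 254 nm
std_conc = np.array([5.0, 25.0, 50.0, 100.0, 200.0])
std_area = np.array([1.08e4, 5.40e4, 1.08e5, 2.15e5, 4.30e5])

slope, intercept = np.polyfit(std_area, std_conc, 1)  # fit area -> concentration

def quantify(peak_area: float) -> float:
    return slope * peak_area + intercept

print(f"{quantify(4.6e5):.1f} mg/100 g")  # hypothetical sample peak
```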
Determination of Amino Acid Contents
Amino acid content was estimated based on the previous methods reported by Li et al. [22]. Breast muscle samples (2 g) were freeze-dried and ground for extraction, then amino acids were determined in triplicate by an Amino Acid Analyzer (Sykam, Munich, Germany). The results were expressed as milligrams of amino acids per 100 g of wet muscle tissue.
Determination of Fatty Acids
Fatty acid content was determined by gas chromatography, as reported by Gecgel [23]. Breast muscle samples were freeze dried and ground and then analyzed using an HP6890 gas chromatography system (Hewlett-Packard, Palo Alto, CA, USA). The results were expressed as milligrams of fatty acids per 100 g of wet muscle tissue.
Determination of VOCs
The VOCs were determined by an automated injector using the method introduced by Li et al. [24], with some modifications. Meat samples weighing 1 g were placed into 20 mL headspace vials prior to being pre-heated at 60 °C for 20 min for system equilibration. A PDMS/DVB fiber (65 µm) was inserted and exposed to the headspace of the vial. After 30 min, the fiber was withdrawn and inserted into the injection port of a GC (Shimadzu, Kyoto, Japan) injector at 200 °C for 2 min for desorption.
In a GC-MS system equipped with an MS detector (Shimadzu, Kyoto, Japan), VOCs were separated by a capillary DB-WAX column (30 m × 0.25 mm × 0.25 µm). The temperature of the GC oven was first kept at 40 °C for 3 min, increased at 5 °C/min to 120 °C, and then increased at 10 °C/min to 200 °C and held for 13 min. The injections were performed in splitless mode, and the carrier gas was helium with a flow rate of 1 mL/min. The collection of MS data was acquired at a full scan range from 35 to 500 m/z. The transfer line and MS source remained at 250 °C and 200 °C, respectively. The VOCs were identified by matching mass spectra or retention times with those in the National Institute of Standards and Technology (NIST) 11 spectral database and were quantified by the area normalization method.
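The area normalization mentioned here expresses each compound as its peak area divided by the summed area of all identified peaks; a minimal sketch with placeholder areas:

```python
peak_areas = {"hexadecanal": 3.2e5, "tetradecane": 1.1e5, "1-octen-3-ol": 0.7e5}  # invented

total = sum(peak_areas.values())
relative = {name: 100.0 * area / total for name, area in peak_areas.items()}
for name, pct in sorted(relative.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {pct:.2f}%")
```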
Electronic Nose Evaluation
The volatile compounds of chicken breast meat were analyzed using an E-Nose 10001 system (developed by the College of Information and Electrical Engineering, China Agricultural University). The E-Nose 10001 electronic nose system mainly consists of the following parts: data acquisition part, data conditioning part, interface circuit part, and computer host. The hardware part includes a gas sensor array, a signal conditioning circuit board, an A/D conversion interface, and a computer, as shown in Figure 1. Previous studies have proven that, after sensor array optimization and feature optimization, E-Nose 10001 can distinguish pork from different manufacturers, as well as different parts and storage conditions. Compared with the PEN3 electronic nose of Airsense, the results of E-Nose 10001 are more accurate.
The electronic nose was equipped with 16 different metal oxide sensors: TGS824, TGS822, TGS825, TGS880, TGS812, TGS831, TGS813, TGS830, TGS822TF, TGS2600, TGS2620, TGS2611, TGS2602, TGS2620, TGS2610, TGS2201. Before the measurements were taken, the headspace gases were injected at a flux speed of 3 L/min for 60 s. Then, 5 g of minced chicken breast samples (3 birds per breed, respectively) were placed in a vial at a temperature of 40 °C. The gases in the headspace of the sample were pumped into a gas sensor chamber at the same speed. The electronic nose measurement interval was 0.05 s. Electronic nose real-time responses to chicken breast samples were recorded with 5 replicates.
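Each sampled response curve is later condensed into scalar features (the Electronic Nose Analysis section below lists nine per sensor, giving 144 values over 16 sensors). The exact feature definitions are not spelled out in the text, so the sketch below computes a few plausible ones (mean, integral, maximum derivative, range) on a synthetic curve:

```python
import numpy as np

def sensor_features(response: np.ndarray, dt: float = 0.05) -> dict:
    """A few scalar summaries of one sensor's response sampled every dt seconds."""
    return {
        "mean": float(response.mean()),
        "integral": float(response.sum() * dt),                  # simple Riemann sum
        "max_derivative": float(np.max(np.diff(response) / dt)),
        "range": float(response.max() - response.min()),
    }

t = np.arange(0, 60, 0.05)
fake_response = 1 - np.exp(-t / 8.0)  # synthetic rise curve, not real sensor data
print(sensor_features(fake_response))
```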
Reagent Section
All reagents and solvents used are listed in Table 1.
Statistical Analysis
Mean and standard deviations were calculated and subjected to analysis of variance. Duncan's test was used to test for differences between means, and the significance was defined at p < 0.05 using SPSS 18.0 software (Chicago, IL, USA). The discriminant results of the electronic nose sensors for different chicken breast meat were based on canonical discriminant analysis (CDA).
Contents of Muscle IMF and CP
The IMF and CP contents of different breeds of chicken breast are presented in Figure 2. The CP contents of BJY and AA broilers were slightly higher than that of LH chicken, and there were no significant differences between BJY and AA broilers in CP content. Comparison of breeds revealed that breast IMF content was highest (p < 0.05) in BJY chicken (0.41%), whereas IMF content was intermediate in LH chicken (0.28%) and lowest in AA broilers (0.23%). IMF content has a close relationship with good flavor, juiciness, and improved tenderness of meat [25]. Similar results were also found in previous studies, where native chicken breeds had higher IMF contents than imported commercial broilers [26,27]. Both genes and environment can influence the IMF content of meat [28]. Ranran et al. [29] revealed the embryonic development-related proteome and metabolome signatures in the breast muscle and intramuscular fat of fast-growing (BJY) and slow-growing chickens.
Contents of Nucleotide Compound
From Table 2, the IMP contents of the breast meat from BJY and LH, which were 459.77 mg/100 g and 413.49 mg/100 g, respectively, were significantly higher than that of AA broilers (247.25 mg/100 g). Some other studies have shown similar results; for example, Jung et al. [20] and Tang et al. [30] found that slow-growing chicken breeds in Korea and China had higher contents of IMP than fast-growing commercial broilers. The differences in IMP content among different breeds may be explained by the effects of genotype, age, or their interaction. In addition, there was a genetic effect on IMP content in chicken meat among indigenous breeds [31]. Li et al. [22] found that the content of IMP in Wenchang chicken, another indigenous chicken from China, was highly related to their genotype. It has been widely accepted that IMP is the most important nucleotide-based flavor precursor and can produce a synergistic effect conjugated with monosodium glutamate [32]. GMP is another important nucleotide that provides pleasant flavor for meat and can also be used as a flavor enhancer [33]. The content of GMP in BJY (5.87 mg/100 g) and LH (6.15 mg/100 g) was significantly higher than that in AA (4.19 mg/100 g). The differences in GMP content may be explained by the effects of genotype, feed, age, and feeding condition [34][35][36][37][38]. Over time after slaughter, IMP can degrade to HxR and Hx. It was reported that the accumulation of HxR and Hx leads to a decrease in freshness [39]. Therefore, the relatively high IMP and GMP contents and lower contents of Hx and HxR in the BJY breast meat may produce a better flavor compared with LH chicken and AA broilers.
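The freshness logic behind the Hx/HxR remark is often summarized with the K-value index, K = (HxR + Hx) / (ATP + ADP + AMP + IMP + HxR + Hx) × 100, where lower K means fresher meat; the study does not compute K, and apart from the IMP figures the numbers below are invented placeholders:

```python
def k_value(atp, adp, amp, imp, hxr, hx):
    """K-value freshness index (%): share of degraded nucleotides."""
    return 100.0 * (hxr + hx) / (atp + adp + amp + imp + hxr + hx)

# IMP values from Table 2; all other inputs are illustrative
print(f"BJY-like profile: K = {k_value(1.2, 2.5, 8.0, 459.77, 25.0, 9.0):.1f}%")
print(f"AA-like profile:  K = {k_value(1.0, 2.0, 7.0, 247.25, 40.0, 18.0):.1f}%")
```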
Contents of Amino Acids
Free amino acids are of great importance for eating quality due to their specific tastes and their role as flavor and flavor-precursor substances in chicken meat [40]. The amino acid profiles of breast meat from BJY, LH, and AA broilers are depicted in Table 3. It is clear that the predominant amino acids in the essential fraction were leucine and lysine in all chicken breeds. In the nonessential fraction, glutamic acid was the richest amino acid. Similar results were also reported in previous studies. Chen et al. [5] found that glutamic acid in the nonessential fraction and lysine and leucine in the essential fraction were also major amino acids in 817 crossed chickens (a commercial Chinese crossed chicken), AA broilers, and Hyline Brown (commercial spent hens). The same results were also found in some other meats such as eland, cattle [41], and goose [42].
In this study, the different chickens differed significantly (p < 0.05) in their amino acid contents, with the exception of glutamic acid, glycine, alanine, valine, leucine, arginine, and proline in the breast. Regarding total essential amino acids, significant differences were found among all three groups. AA broilers contained relatively higher total essential amino acids (315.20 mg/100 g), followed by BJY and LH (296.02 mg/100 g and 263.33 mg/100 g, respectively). However, the essential amino acid contents of BJY chicken showed significantly higher values than LH and AA broilers in breast meat. Glutamic acid is an important flavor compound of meat and an important contributor to the fresh taste of meat [43]. BJY chicken had the highest content of glutamic acid (82.02 mg/100 g) among these three chicken breeds. The content of glutamic acid in LH chicken was slightly higher than that in AA broilers, but there was no significant difference. Wattanachant et al. [44] also confirmed that the breast meat of Thai native chickens had higher glutamic acid compared with broiler chickens. Therefore, the results indicate that the chicken breed considerably affects the amino acid composition. The high content of essential amino acids in the breast muscles of BJY chicken might suggest that BJY chicken has more nutritional value to humans than LH and AA chicken.
Contents of Fatty Acids
The fatty acid composition of meat was affected by many factors, such as age and genotype [21,29]. Furthermore, dietary manipulation can alter the fatty composition and fatty acid contents [45]. The different fatty acid compositions of muscles most likely affect lipid stability and flavor. Table 4 summarizes the fatty acid profiles in the breast muscle of these three breeds. The major components measured in the chicken meat from BJY, LH, and AA were linoleic acid (C 18:2), oleic acid (C 18:1), and palmitic acid (C 16:0), which accounted for approximately 70% of total fatty acids; this is consistent with the results reported by previous studies [5]. Regarding the total saturated acids (SFA), LH chickens showed significantly lower values (32.21 mg/100 g) in comparison to BJY and AA broilers, while there were no significant differences between BJY and AA broilers. However, the contents of unsaturated fatty acids (UFA) in AA broilers were significantly higher than those in BJY and LH chickens. Regarding UFA/SFA and PUFA/monounsaturated fatty acids (MUFA), the ratios were both significantly higher in BJY chickens than in LH chickens and AA broilers, which suggests that the composition of fatty acids in BJY chickens breast meat were better than that of same-age LH chickens and fast-growing AA broilers. The arachidonic acid content (C 20:4) in BJY chickens was more than twice than that of AA broilers, which were 14.55 mg/100 g and 6.25 mg/100 g, respectively. It was 10.87 mg/100 g C 20:4 in LH chickens, which was higher than that in AA broilers but lower than that in BJY chickens. Arachidonic acid can directly participate in intracellular signaling transduction and affect other signaling pathways to control cellular biological activity, which is a very important intracellular second messenger [46]. In addition, it was reported that, when the arachidonic acid composition was increased by supplementation with an acid enriched oil diet, the flavor intensity, total taste intensity, umami, and aftertaste of broiler muscle also increased [47]. Jeon et al. [48] found that the breast meat of Korean indigenous chickens had higher arachidonic acid contents than that of broilers. Zhao et al. [3] reported that the breast meat from BJY chickens contained significantly higher amounts of arachidonic acid than that of commercial fast-growing AA broilers at their market age. These results suggest that BJY chickens have better flavor properties and more nutritional value to humans than that of LH chickens and AA broilers. Table 5 shows the comparison of VOCs relevant contents of different types of chicken breast; 20, 25, and 19 VOCs were detected in the BJY, AA and LH, respectively. VOCs chromatograms of BJY, LH, and AA were showed in Figures S1-S3, respectively. There were no complicated heterocyclic compounds such as pyrroles and pyrazines in this study, which might be related to the lower maturation temperature (60 • C). Among the VOCs, 5 compounds were detected in all three types of chicken, including1, 3-bis (1,1-dimethylethyl)benzene, heneicosane, tetradecane, 2,4-bis (1,1-dimethylethyl)-enol, and hexadecanal. The volatile flavor substances together affect the final sensory quality of chicken meat. Jiang [49] found that the volatile compounds in Avain broilers, fast-da Yellow chicken, and BJY also contained tetradecane and palmaldehyde. It was predicted that tetradecane and hexadecanal were common volatile substances in chicken meat. 
Table 6 shows the quantity and relative content of VOCs in different types of chicken meat. Hydrocarbons, alcohols, and aldehydes were the major VOCs in all three chicken samples, and their contribution to chicken flavor varies with the substance threshold [50]. Hydrocarbons were mainly derived from the cleavage of fatty acid alkoxy radicals, and the differences in their contents might be caused by differences in their precursor fatty acids. The relative contents and types of hydrocarbon compounds in the three muscles were quite different, which is consistent with the previous results on fatty acid contents. The aroma threshold of hydrocarbons is relatively high, and it is generally believed that they contribute little directly to the flavor of meat [51]. Alcohols are mainly derived from the oxidation and degradation of lipids and have pleasant fruity and floral odors [52]. Among them, 1-octen-3-ol has a mushroom-like smell and is the product of arachidonic acid oxidation by lipoxygenase [24]; it was detected in both BJY and LH. In this study, the quantity and relative contents of alcohols in BJY and LH chicken were higher than in AA chicken. Aldehydes are aliphatic compounds produced by lipid oxidation and thermal degradation [53] and are usually considered the major flavor contributors to meat products due to their low thresholds [54]. Previous studies have shown that aldehydes such as nonanal and decanal are characteristic aroma substances of chicken [55]. Nonanal is mainly formed by the oxidation of linoleic acid [56]. The fat content of BJY was relatively higher, so the variety and relative content of the aldehydes detected were also higher than in LH and AA, contributing to a richer flavor of the chicken. In addition, saturated linear aldehydes with high molecular weight may produce pungent odors. The content of pentadecanal and hexadecanal in AA was relatively high. Li et al. found that tetradecanal might be one of the sources of the unpleasant earthy smell in fish [24].
Electronic Nose Analysis
The increase in meat production and the need for rapid detection have contributed to the development of simple, fast, accurate, and inexpensive methods to evaluate the classification or quality of meat [48]. Figure 3 shows how the electronic nose simulates the olfaction.
The multivariate recognition algorithm to process the multi-sensor array signals is based on the linear discriminant analysis method [57]. Because the process is simple and economical, sample preparation is minimal, and reading and interpretation of the measurements are clear, the electronic nose has become a viable alternative to conventional analysis [11,58] and has been applied to evaluate the shelf-life of livestock products [59] and to analyze meat quality [60].
In the present study, there were nine eigenvalues of the 16 sensors of the electronic nose, including means, integral value, differential value, range, quadratic coefficient, primary coefficient, halfwidth, primary coefficient of the logarithmic regression function, and constant term of the logarithmic regression function, which were used to analyze these three breeds of chicken [61]. They could be accurately identified with CDA applied to the acquired data in order to evaluate the overall flavor characteristics among the three chicken breeds. As shown in Figure 4, the CDA based on the 144 parameters derived the first and second canonical variables (CAN1 and CAN2, respectively). CAN1 explained 93.4% of the variability and was able to differentiate BJY and LH chickens from AA broilers. Furthermore, CAN2 was able to separate BJY from LH chickens. The results indicate that the distance between the cores of BJY chickens and LH chickens was relatively close, but they could still be completely separated. A possible reason is that these two breeds of chicken were fed in the same environment and at the same age.
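The canonical discriminant step can be sketched with scikit-learn's LinearDiscriminantAnalysis, a standard way to obtain canonical variates; the study's own implementation is not specified, and the feature matrix below is random stand-in data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(45, 144))            # 15 measurements x 3 breeds, 144 e-nose features
y = np.repeat(["BJY", "LH", "AA"], 15)
X[y == "BJY"] += 0.5                      # inject a little class structure for the demo

lda = LinearDiscriminantAnalysis(n_components=2)  # two canonical variables (CAN1, CAN2)
scores = lda.fit_transform(X, y)
print(scores.shape)                       # (45, 2)
print(lda.explained_variance_ratio_)      # variance carried by CAN1 and CAN2
```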
The above studies have shown that electronic nose technology can be used to successfully distinguish the volatile compounds among these three breeds of chickens. This method exhibits satisfactory results using appropriate pattern recognition techniques for data analysis, which provides a gratifying analytical method for Beijing-you chicken identification. With the advantages of high sensitivity and excellent selectivity, the electronic nose has broad application prospects for the detection of meat adulteration in the future.
Conclusions
This study demonstrated several differences among BJY chickens, LH chickens, and AA broilers in terms of nutritional and sensory properties. At their typical market ages, 240-day-old BJY was preferable to 240-day-old LH and 42-day-old AA broilers due to its higher protein content, higher IMP and GMP content, and lower Hx and HxR content. Noticeably, BJY chicken showed an especially high arachidonic acid (14.55 mg/100 g) and essential amino acids content (127.84 mg/100 g). These characteristics might contribute to better flavor properties and higher nutritional value to humans, which would meet the preference of those consumers in the current market. The relative contents and varieties of VOCs, detected by SPME-GC/MS, were quite different between the three chicken breeds, which resulted in differences in the flavors. The variety and relative content of aldehydes might contribute to a richer flavor of the chicken. Furthermore, the electronic nose results also confirmed that there were significant differences between the breast meat of these three breeds of chicken by canonical discriminant analysis. The results revealed the meat characteristics of BJY chicken and perhaps provide a possible adulteration identification method, which can help consumers choose premium chicken meat in the market.
Supplementary Materials: The following are available online at www.mdpi.com/xxx/s1, Figure S1: VOCs chromatograms of BJY breast, Figure S2: VOCs chromatograms of LH breast, Figure S3: VOCs chromatograms of AA breast. | 2022-03-10T16:09:21.632Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "bfd8719d4862ecd44833e5ef7629055638921f8f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/11/6/782/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "065eef1f2a6c449883d56b6c3e67b2af61c62063",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
30958469 | pes2o/s2orc | v3-fos-license | Design of Stable Nanocrystalline Alloys
Metal Manipulation Reducing the grain size below 100 nanometers can vastly improve the properties of a metal. However, these nanocrystalline metals are not thermally stable; at elevated temperatures the grains will grow and merge. Alloying with a second metal can slow this grain growth, an approach that has shown some success on a trial-and-error basis. Chookajorn et al. (p. 951; see the Perspective by Weertman) now provide a theoretical framework to create stability maps that identify potential alloys with the greatest thermal stability. For tungsten, counterintuitively, the theory suggests that atoms with the largest size differential or lowest solubility are not the best alloying choice. Indeed, an alloy of tungsten and titanium was processed more easily than pure nanocrystalline tungsten and also showed better stability at high temperatures. Designed nanostructured alloys, amenable to large-scale production, show high-temperature thermal stability. Nanostructured metals are generally unstable; their grains grow rapidly even at low temperatures, rendering them difficult to process and often unsuitable for use. Alloying has been found to improve stability, but only in a few empirically discovered systems. We have developed a theoretical framework with which stable nanostructured alloys can be designed. A nanostructure stability map based on a thermodynamic model is applied to design stable nanostructured tungsten alloys. We identify a candidate alloy, W-Ti, and demonstrate substantially enhanced stability for the high-temperature, long-duration conditions amenable to powder-route production of bulk nanostructured tungsten. This nanostructured alloy adopts a heterogeneous chemical distribution that is anticipated by the present theoretical framework but unexpected on the basis of conventional bulk thermodynamics.
Over the past few years, attention has shifted to the problem of stabilizing nanocrystalline structures by alloying. Whereas one approach to the problem is a classical kinetic strategy, that is, including alloying elements to slow grain growth, there is increasing interest in the notion of genuine thermodynamic stabilization of grain boundaries (12)(13)(14)(15)(16)(17)(18). In analogy to microemulsions, in which the addition of surfactant is used to stabilize interfacial area (19), the concept of thermodynamic nanostructure stabilization is to add to a polycrystal an alloying element (solute) selected for its preference to occupy grain boundary sites vis-à-vis those in the crystal interiors, to relieve the energy penalty of the interfaces. The grain boundary energy, γ, is lowered from that of the pure material, γ₀, through such segregation, which in a simplified and linearized view can be written as

γ = γ₀ − Γ(ΔH_seg + kT ln X)    (1)

where the specific solute excess at the boundary, Γ, lowers the enthalpy by ΔH_seg, the enthalpy of segregation, and raises the entropy via kT ln X, with kT the thermal energy and X the composition.
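To make Eq. (1) concrete, the following numeric sketch evaluates it over composition; every parameter value here (boundary energy, solute excess, segregation enthalpy) is an assumed, order-of-magnitude placeholder, not data from the paper.

```python
import numpy as np

k = 8.617e-5            # Boltzmann constant, eV/K
T = 1373.0              # K (~1100 C, the annealing temperature used later)
gamma0 = 6.2e18         # pure-metal GB energy, eV/m^2 (~1 J/m^2), assumed
Gamma = 1.0e19          # solute excess at the boundary, atoms/m^2, assumed
dH_seg = 0.3            # enthalpy of segregation, eV/atom, assumed

X = np.linspace(1e-4, 0.2, 200)              # global solute fraction
gamma = gamma0 - Gamma * (dH_seg + k * T * np.log(X))

# ln X < 0, so the entropy term opposes the enthalpic reduction at dilute
# compositions; gamma falls as segregation strength or solute content rises.
print(f"gamma at X = 0.20: {gamma[-1]:.2e} eV/m^2")
```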
A few systems, both simulated and experimental, have provided evidence that this thermodynamic approach can suppress grain growth and stabilize nanostructured polycrystals (16,(20)(21)(22)(23)(24)(25)(26)(27)(28). Some authors have provided guidelines for estimating the grain boundary segregation strength given a base element by identifying preferred features of the solute, for example, atomic size mismatch with the solvent (16) or low bulk solid solubility (23,24), both of which are presumed to correlate with a higher tendency for solute rejection into grain boundaries. By and large, these approaches amount to semi-empirical preferences for alloy systems that might exhibit grain boundary segregation and do not generally speak to true thermodynamic stability of nanostructures, which requires consideration of the relative stability of nanostructured phases to competing bulk phases. For example, existing approaches often suggest systems that experience other problematic structural instabilities beyond just normal grain growth, that is, abnormal grain growth or phase decomposition (21)(22)(23)(24). We advance a thermodynamic model, with which nanostructure stability maps can be generated and used as a design tool. One newly predicted system, namely W with a minority addition of Ti, is evaluated and demonstrates stability at a length scale of around 20 nm over long durations at elevated temperatures.
To assess the efficacy of solutes in stabilizing nanostructures, we describe the mixing free energy of a nanostructured binary system with separate energetic interactions in grain and intergranular regions. The two regions are not treated as separate phases, per se, but are geometrically connected to one another such that a reduction in grain size, d, causes an increase in the grain boundary volume fraction, f_gb, which follows the scaling f_gb = 1 − [(d − t)/d]³, where t is the mean grain boundary thickness and d ≥ t. The model, presented in a preliminary form by Trelewicz and Schuh (29), reduces to a classical regular solution model in the limit of infinite grain size and also reproduces a grain boundary energy expression like that of Eq. (1) in the proper limit. The model thus essentially provides the form of Gibbs free energy surfaces for mixing, ΔG_mix, as a function of both f_gb and X; schematically,

ΔG_mix = (1 − f_gb) ΔG_mix,c + f_gb ΔG_mix,gb + (transitional bond terms)    (2)

where the subscripts denote the two regions, crystal (c) and grain boundary (gb), and the superscripts (in the full expression of (29)) denote the two alloy components, A (solvent) and B (solute). The symbol ΔG_mix denotes the Gibbs free energy for a regular solution model, written for the subscripted regions: the first two terms in the equation thus amount to a weighted average of two regular solutions, one for the crystals and one for the grain boundary regions. The additional terms are associated with the geometrical way in which those two regions interact. The bond energies are collected in the usual way into an interaction parameter, ω, of which there are two (crystal and grain boundary); additional terms include the coordination number, z; transition bond fraction, ν; and atomic volume, Ω. The most important point for this model is that it describes a free energy surface in composition-grain size space. We are interested in finding alloys where there are global minima in such a surface, that is, where there is a thermodynamically preferred grain size dictated by the grain boundary segregation state at a given composition.
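A toy numerical sketch of this idea follows. It couples two regular-solution-like regions through the grain-boundary volume fraction f_gb(d) = 1 − ((d − t)/d)³ and, at each grain size, minimizes over how the fixed overall solute content partitions between crystal and boundary. The boundary-energy penalty, segregation benefit, and interaction parameters are placeholders, and the transitional bond terms of Eq. (2) are omitted, so only the qualitative picture (a free-energy minimum at a finite grain size) should be taken from it.

```python
import numpy as np

k, T = 8.617e-5, 1373.0      # eV/K, K
t = 1.0e-9                    # GB thickness (m), assumed
z, w_c = 12, 0.06             # coordination number, crystal interaction (eV), assumed
E_gb, dH_seg = 0.10, 0.25     # pure-GB penalty and segregation benefit (eV), assumed

def entropy(X):
    X = np.clip(X, 1e-12, 1 - 1e-12)
    return X * np.log(X) + (1 - X) * np.log(1 - X)

def g_crystal(X):
    # regular solution with a positive heat of mixing (miscibility gap)
    return z * w_c * X * (1 - X) + k * T * entropy(X)

def g_boundary(X):
    # pure-boundary energy penalty, relieved by segregated solute
    return E_gb - dH_seg * X + k * T * entropy(X)

def g_nano(d, X):
    f = 1.0 - ((d - t) / d) ** 3            # GB volume fraction
    Xgb = np.linspace(0.0, 1.0, 801)
    Xc = (X - f * Xgb) / (1.0 - f)          # solute conservation
    ok = (Xc >= 0.0) & (Xc <= 1.0)
    G = (1 - f) * g_crystal(Xc[ok]) + f * g_boundary(Xgb[ok])
    return G.min()

d = np.linspace(1.5e-9, 50e-9, 300)
G = np.array([g_nano(di, X=0.20) for di in d])
print(f"preferred grain size: {d[G.argmin()]*1e9:.1f} nm (toy parameters)")
```

The interior minimum appears because shrinking the grains adds boundary sites faster than the fixed solute budget can fill them, so beyond some point the pure-boundary penalty dominates.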
For the purposes of developing a design approach, it is useful to focus on two key thermodynamic parameters, which together contain all of the most relevant physics of the problem: the enthalpy of mixing in the crystalline state, ΔH_mix, to represent the grain interior, and the dilute-limit enthalpy of segregation, ΔH_seg, to capture the thermodynamics of the grain boundary environment, which incorporates chemical interactions, elastic mismatch, and the mismatch in interfacial energies. These two parameters form the axes of a nanostructure stability map, as shown in Fig. 1A. By fixing other quantities (most notably, temperature), we can iterate the values of these two parameters over physically plausible ranges and calculate the shape of the free energy surface given by Eq. (2). We identify the global minima that correspond to nanocrystalline grain sizes with a particular grain boundary segregation profile. These minima are then compared to the energies of other possible bulk states. Combinations of parameters that have stable nanocrystalline states are marked by green points; those without, red x's. The conditions under which a nanocrystalline system would be stable or not are thus demarcated by the green and red regions of Fig. 1A.
Examples of how nanocrystalline states are evaluated for stability with respect to bulk structures are shown in Fig. 1, B and C, for conditions in which the nanocrystalline structure is stable and unstable, respectively. In these panels, the blue curves represent local cuts of the free energy surface of Eq. (2) and the lowest-energy state available for particular nanocrystalline grain sizes and segregation states. There are many such states across a range of composition, as reflected by the multiple distinct curves shown in the panels. The black curves are the bulk regular solution; the systems shown exhibit a miscibility gap denoted by the common-tangent dashed lines because they have positive ΔH_mix. The difference between Fig. 1B and 1C lies in the position of the nanostructure free energy curves with respect to the bulk phase separation common-tangent line. In Fig. 1B, the nanostructured states are stable, that is, have lower Gibbs free energy than the competing bulk phase-separated state. In Fig. 1C, there are nanostructured states that could exist, but these are less stable than bulk phase separation. The difference between the two systems in Fig. 1, B and C is fundamental, and we specifically seek to identify alloys that fall into the first category rather than the second.
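The bulk reference state in these comparisons is the common-tangent construction, i.e., the lower convex envelope of the regular-solution curve. The sketch below computes that envelope numerically and tests whether an assumed nanostructured minimum (for example, a value from a run of the grid search above) lies below it at a given composition; all parameter values are placeholders.

```python
import numpy as np

k, T, z, w_c = 8.617e-5, 1373.0, 12, 0.06    # same assumed values as above

X = np.linspace(1e-4, 1 - 1e-4, 2001)
G_bulk = z * w_c * X * (1 - X) + k * T * (X * np.log(X) + (1 - X) * np.log(1 - X))

def lower_envelope(x, g):
    """Lower convex hull of (x, g): the free energy of the phase-separated
    bulk system, i.e., the common-tangent construction."""
    hull_x, hull_g = [x[0]], [g[0]]
    for xi, gi in zip(x[1:], g[1:]):
        # pop points that would make the envelope non-convex
        while len(hull_x) >= 2 and (
            (gi - hull_g[-2]) * (hull_x[-1] - hull_x[-2])
            <= (hull_g[-1] - hull_g[-2]) * (xi - hull_x[-2])
        ):
            hull_x.pop(); hull_g.pop()
        hull_x.append(xi); hull_g.append(gi)
    return np.interp(x, hull_x, hull_g)

G_sep = lower_envelope(X, G_bulk)    # bulk phase-separated free energy
G_nano_min = -0.043                  # assumed nanostructured minimum, eV/atom
i = np.searchsorted(X, 0.20)
print("nanocrystalline stable" if G_nano_min < G_sep[i] else "bulk stable")
```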
To demonstrate the utility of the map in Fig. 1A, we consider one problem in nanostructured alloy design, namely the development of nanocrystalline tungsten, which is of interest for its anticipated high strength and unique capacity for shear localization (30,31), but which has proven challenging to produce in bulk form because of its extremely high melting (and therefore processing) temperature. The finest grain sizes reported in a bulk tungsten material are about 40 nm, and this required a complex processing route involving multiple severe deformation steps (32). In principle, tungsten could be made in bulk form through a powder route, but the requirement of high-temperature sintering is usually a debilitating roadblock to such routes because of nanostructure coarsening; tungsten can generally only be sintered at temperatures above about 1050°C even with sintering aids (33, 34).
We use Eq. (2) and Fig. 1A to develop a stable nanostructured W alloy by first placing particular alloying elements for W on the map. This requires estimates of both ΔH_mix and ΔH_seg for each possible binary system. We have calculated these quantities and populated Fig. 1A with all of the alloys (35) that exhibit positive ΔH_mix with W and for which reliable data are available (values listed in table S1, with more details on the calculations provided in the supplementary materials). There is, of course, some uncertainty in all of these estimates, generally of the magnitude illustrated for one of the represented alloys, W-Ti, on the map. Fig. 1A identifies a variety of candidate stable nanostructured materials that lie in the green region. Closer consideration reveals some unexpected predictions. For example, the likelihood of nanostructure stability has often been estimated by solute segregation strength based on atomic size mismatch and/or solubility. However, in the present system, the solute with the highest size difference listed in table S1 (Sr, ~55% mismatch with W) and those with the lowest solubility (Ag and Cu, essentially zero solubility) lie decidedly in the red bulk-stable region in Fig. 1A. Also counterintuitively, the element with the lowest value of ΔH_seg in the set, Ti, is actually one of the most suitable candidates, being safely within the nanocrystalline-stable region. In fact, Ti would be counterindicated by conventional approaches, which would seek low bulk solubility [Ti has extremely high solubility in W of 48 atomic % at 1100°C (36)] and high atomic size mismatch (Ti has a low-to-moderate mismatch with W of only ~6%).
On the basis of these results, we produced a W-20 atomic % Ti alloy with an average grain size around 20 nm (Fig. 2B) by high-energy ball milling (35); the output of this process is microscale powders, where each particle comprises many nanocrystalline grains. As a control, we also produced unalloyed nanocrystalline W with about the same grain size through the same process. The powders were then equilibrated at 1100°C in an argon atmosphere for one week; at this temperature the dominant diffusion pathway is intergranular, and the mean diffusion distance is several micrometers, which is thousands of times greater than the grain size. Pre- and post-annealing structures were characterized to explore the stability of the nanostructure, and Fig. 2 shows the most important results of such characterizations.
After one week at 1100°C, the unalloyed nanocrystalline W exhibits the typical instability of such materials, with grain coarsening to the micrometer scale, as shown in Fig. 2C. On the other hand, the W-20 atomic % Ti alloy retains a uniform nanostructure with a nominally unchanged average grain size of about 20 nm. This stability can be seen visually in Fig. 2D and quantitatively in the grain size distributions of Fig. 2A. With Ti present, the system adopts a complex alloy configuration where Ti and W are heterogeneously distributed on the nanoscale as a polycrystalline body-centered cubic (BCC) structure, with no signatures of any amorphous content. This heterogeneous distribution is illustrated by the chemical arrangement in the equilibrated alloy in Fig. 3, with Fig. 3A showing the atomic contrast between W and Ti and Fig. 3B showing a local chemical map based on energy dispersive spectroscopy (35). A compositional line scan in Fig. 3C reveals the magnitude of the Ti composition ranging from near 0 atomic % to about 50 atomic %.
The nanoscale chemical distribution seen in Fig. 3 is not expected for a bulk equilibrium alloy, where Ti is soluble to 48 at.% in W at the equilibration temperature, and a homogeneous chemical distribution should be observed. This solute distribution is a consequence of the nanostructure: the high volume fraction of grain boundaries creates different chemical configurations, and a lower energy state results from the heterogeneous solute distribution. In a nanoscale structure, a heterogeneous solute distribution is explicitly expected from Eq. (2).
From a technological standpoint, the results in Figs. 2 and 3 suggest that nanocrystalline tungsten can, in principle, be made sufficiently stable to survive a typical consolidation thermal cycle. Given the exceptionally high strength of nanocrystalline BCC metals (37) and the unusual secondary properties (such as shear localization) that emerge at these grain sizes, the present results may speak to a new family of engineering tungsten alloys. At the same time, our experimental work on W-Ti is simply an example of a single alloy design exercise; the above approach may be applied again to a number of different base metals.

Fig. 1. Nanostructure stability map based on Eq. (2). (A) For each combination of parameters, the free energy of nanocrystalline structures is compared to that of the bulk regular solution (for details of the comparison, see Figs. S1 and S2). An example case for the nanocrystalline stable region is presented in (B), for a specific alloy of W-Sc. The free energy of the nanostructured phases is below that of the regular solution common tangent (dashed line). In (C), a bulk stable case where the nanostructured phases fall above the common tangent line is shown; the W-Ag system will then prefer to phase separate at bulk scales as dictated by the bulk regular solution thermodynamics. Particular binary tungsten alloys are placed on the map after calculating their enthalpies of mixing and segregation; for W-Ti, the typical ranges of uncertainty of these calculations are shown. (For details of this calculation, see tables S1 and S2.) | 2016-03-14T22:51:50.573Z | 2012-08-01T00:00:00.000 | {
"year": 2012,
"sha1": "190606c36d27f44a10f8245e6fced1d155d0b23e",
"oa_license": "CCBYNC",
"oa_url": "https://dspace.mit.edu/bitstream/1721.1/80308/1/DesignofStableNanocrystallineAlloys_chookajorn.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "1ff40a0d66357b288cb3727477afe8698be5f283",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
232265149 | pes2o/s2orc | v3-fos-license | Humanized Mice Exhibit Exacerbated Abscess Formation and Osteolysis During the Establishment of Implant-Associated Staphylococcus aureus Osteomyelitis
Staphylococcus aureus is the predominant pathogen causing osteomyelitis. Unfortunately, no immunotherapy exists to treat these very challenging and costly infections despite decades of research, and numerous vaccine failures in clinical trials. This lack of success can partially be attributed to an overreliance on murine models where the immune correlates of protection often diverge from that of humans. Moreover, S. aureus secretes numerous immunotoxins with unique tropism to human leukocytes, which compromises the targeting of immune cells in murine models. To study the response of human immune cells during chronic S. aureus bone infections, we engrafted non-obese diabetic (NOD)–scid IL2Rγnull (NSG) mice with human hematopoietic stem cells (huNSG) and analyzed protection in an established model of implant-associated osteomyelitis. The results showed that huNSG mice have increases in weight loss, osteolysis, bacterial dissemination to internal organs, and numbers of Staphylococcal abscess communities (SACs), during the establishment of implant-associated MRSA osteomyelitis compared to NSG controls (p < 0.05). Flow cytometry and immunohistochemistry demonstrated greater human T cell numbers in infected versus uninfected huNSG mice (p < 0.05), and that T-bet+ human T cells clustered around the SACs, suggesting S. aureus-mediated activation and proliferation of human T cells in the infected bone. Collectively, these proof-of-concept studies underscore the utility of huNSG mice for studying an aggressive form of S. aureus osteomyelitis, which is more akin to that seen in humans. We have also established an experimental system to investigate the contribution of specific human T cells in controlling S. aureus infection and dissemination.
INTRODUCTION
Bone infections, a debilitating complication of total joint replacement (TJR) arthroplasties and fracture fixation, have dramatically increased over the past decade in the United States alone (1)(2)(3). Staphylococcus aureus, a significant human pathogen, remains the leading cause of bone infections in TJR surgeries, causing 30-42% of fracture-related infections (FRI), and 10,000-20,000 peri-prosthetic joint infections (PJI) in patients each year in the US (4-7). Methicillin-resistant S. aureus (MRSA) and newly emerging strains with panresistance significantly complicate treatment leading to adverse clinical outcomes such as amputation and septic death (8,9).
There is an urgent need to control these deep bone infections utilizing non-antibiotic interventions. Unfortunately, no preventative S. aureus immunotherapies exist, despite almost 20 years of research to identify conceptually promising vaccine targets and significant money spent on clinical trials (10)(11)(12). Poor antigen selection and the ability of S. aureus to evade the human immune system might contribute to the failure of vaccines. Alternatively, the lack of relevant models that recapitulate human immune responses could explain the failure of these trials.
Murine models have greatly facilitated our understanding of S. aureus pathogenesis and identified critical virulence factors such as staphylococcal protein A, iron-scavenging proteins, fibrinogen-binding proteins, penicillin-binding proteins, hemolysins, and autolysins (13)(14)(15)(16)(17)(18)(19)(20)(21)(22)(23). However, the knowledge acquired using these murine models does not necessarily translate into these targets becoming useful vaccine candidates in humans. A prominent case in point is the murine preclinical data on an immunogenic vaccine candidate based on the iron-scavenging protein IsdB (V710), which demonstrated reduced infection lethality and protection against bacteremia in mice (24)(25)(26)(27). Unfortunately, a large phase IIb/III vaccination clinical trial based on these preclinical studies, involving ~8,000 patients, failed to provide any protection and elevated the risk of adverse outcomes, including death, among patients who encountered post-immunization S. aureus infections (28). Therefore, we are in dire need of small animal models that better mimic the human immune system. Moreover, S. aureus is a significant human pathogen with several virulence proteins and bicomponent toxins that have high degrees of tropism to receptors expressed on human leukocytes (29,30). Due to these human-specific toxins, this pathogen may not exhibit its typical phenotype in murine infections.
Non-obese diabetic (NOD)-scid IL2Rγnull (NSG) mice, reconstituted with a human CD34+ hematopoietic immune system (huNSG), have emerged as a powerful model system to investigate human disease (31)(32)(33). These mice evoke a human immune response to infection and have been utilized to study bacterial and viral pathogens such as Salmonella, Leishmania, HIV, and EBV (34)(35)(36)(37)(38)(39). The use of humanized mice to study S. aureus infections remains relatively limited (40)(41)(42), and until now, no studies have described S. aureus pathogenesis during osteomyelitis in humanized mice. To this end, we developed a transtibial implant-associated S. aureus osteomyelitis model in humanized NSG mice and examined whether S. aureus induces a human immune response in these mice during bone infection. Additionally, we assessed infection severity, the extent of bone osteolysis, and Staphylococcal abscess community (SAC) formation during the establishment of implant-associated MRSA osteomyelitis.
Ethics Statement
Animal studies were performed according to protocols approved by the ethical committee of the canton of Grisons in Switzerland. Animal surgical procedures were performed according to Swiss animal protection law and regulations in an Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC) International approved facility.
Murine Implant-Associated Osteomyelitis Model
Female C57BL/6J (stock 000664) and NSG (NOD.Cg-Prkdc scid Il2rg tm1Wjl/SzJ, stock 005557) mice were purchased from the Jackson Laboratories (Bar Harbor, ME, USA), housed five per cage in two-way housing on a 12-h light/dark cycle, and fed a maintenance diet and water ad libitum. Humanized NSG (huNSG) mice were generated by Jackson Labs by engrafting NSG mice with CD34+ human hematopoietic cells from three different donors using protocols described previously (31,32). Briefly, 3-week-old NSG mice were subjected to total body irradiation (100 cGy) and injected intravenously with lineage-negative human CD34+ hematopoietic stem cells (2 × 10^5 cells/mouse) isolated from cord blood. At 12 weeks post engraftment, mice were subjected to submandibular bleeding to isolate peripheral lymphocytes, and human immune cell reconstitution was assessed in huNSG mice by flow cytometry (markers: anti-human CD45, overall reconstitution; anti-human CD3, T cells; anti-human CD20, B cells; anti-human CD33, myeloid cells). Supplemental Table 1 describes the percentage of human CD45+ cells, human B cells, T cells, and myeloid cells engrafted in huNSG mice generated from all different donors. Transtibial implant-associated osteomyelitis with MRSA was induced in skeletally mature 20-24-week-old huNSG mice and age-matched C57BL/6J and NSG mice utilizing our well-validated protocols described previously (22,43,44). Briefly, mice were anesthetized with sevoflurane in a Plexiglass box (ca. 7% in O2, flow rate 0.6-1 L/min) and maintained with sevoflurane through a face mask (ca. 2-3% in O2, flow rate 0.6-1 L/min). Peri- and postoperative analgesia consisted of Tramal, which was added to the drinking water 24 h prior to surgery (25 mg/L) and maintained for two days after surgery to minimize skin wounds from injections and at the same time provide adequate analgesia. Before surgery, a flat stainless-steel surgical wire (cross-section, 0.2 mm by 0.5 mm), 4 mm long (MicroDyne Technologies, Plainville, CT, USA), bent at 1 mm to form an L-shape, was steam sterilized and inoculated with the clinical S. aureus USA300 LAC strain grown overnight. After anesthesia induction, the right leg was clipped, and the skin was aseptically prepared with chlorhexidine scrub (Hibiscrub, 4% chlorhexidine digluconate) and 70% ethanol. The implant localization was identified (2 to 3 mm under the tibial plateau in the proximal tibia) using the proximal patella as an anatomical landmark and the jaws of the Mayo-Hegar needle driver as the measure. A hole was pre-drilled in the proximal tibia using a percutaneous approach from the medial to lateral cortex using a 26-gauge needle. Subsequently, a S. aureus-infected pin (5.0 × 10^5 colony-forming units (CFU)/mL) was surgically implanted in the pre-drilled hole from the medial to the lateral cortex. Osteotomy and implant position were confirmed radiographically in the lateral plane immediately after surgery. At 14 days post-infection, mice were euthanized, and the infected leg containing the transtibial implant was excised for either CFU quantitation or high-resolution micro-computed tomography (µCT) imaging, followed by histology and transmission electron microscopy (TEM). Additionally, the internal organs (liver, spleen, kidneys, and heart) were harvested sterilely for CFU enumeration. Further, all mice were subjected to submandibular bleeding on days 0, 7, and 14 post-infection to collect serum for assessing anti-S. aureus antibodies.
Murine infection studies were performed four independent times and the results shown are pooled data from these experiments.
Bacteriology
Tibia, tibial implant, and the soft tissue abscesses surrounding the tibia were removed, weighed, and placed in 1mL of room temperature sterile PBS. The implant was sonicated for 2 min to dislodge attached bacteria, and organ tissues were homogenized (Omni TH, tissue homogenizer TH-02/TH21649, Kennesaw, GA, USA) in 1mL of PBS. Implant sonicate fluid and tissue homogenates were serially diluted, plated on blood agar (BA) plates, and incubated overnight at 37°C. To confirm S. aureus on the plates, random colonies from each plate/organ/tissue were picked, and StaphLatex agglutination test (Thermo Fisher Scientific, Waltham, MA, USA) was performed. Bacterial colonies were enumerated, and the generated CFU data were presented as CFUs per gram of tissue.
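The back-calculation from plate counts to CFU per gram is simple dilution arithmetic; a minimal sketch follows, in which the dilution, plated volume, and tissue mass used in the example are hypothetical.

```python
def cfu_per_gram(colonies, dilution_factor, plated_volume_ml,
                 homogenate_volume_ml, tissue_mass_g):
    """Back-calculate CFU per gram of tissue from a countable plate.

    colonies: colonies counted on the plate at the given dilution
    dilution_factor: e.g. 1e3 for a 10^-3 dilution
    plated_volume_ml: volume spread on the plate (e.g. 0.1 mL)
    homogenate_volume_ml: total homogenate volume (1 mL in this study)
    tissue_mass_g: wet mass of the homogenized tissue
    """
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * homogenate_volume_ml / tissue_mass_g

# Hypothetical example: 42 colonies from 0.1 mL of a 10^-3 dilution of a
# 1 mL homogenate made from 0.05 g of soft tissue.
print(f"{cfu_per_gram(42, 1e3, 0.1, 1.0, 0.05):.2e} CFU/g")  # 8.40e+06
```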
Micro-Computed Tomography (µCT)
The tibia was dissected from mice post-euthanasia and fixed for 72 hours in 4% neutral buffered formalin. Subsequently, specimens were rinsed in PBS and deionized water and prepared for µCT scans. High-resolution µCT scans of the mouse tibiae receiving MRSA-contaminated or sterile pins were acquired ex vivo at 10.5 µm voxel size with the VivaCT40 (Scanco Medical AG, Switzerland), using 100 ms integration time, an energy of 70 kV, and an intensity of 114 µA. Post-processing and analyses of the resultant DICOM files generated from the VivaCT40 were performed in Amira software (FEI Visualization Sciences Group; Burlington, MA, USA). Medial and lateral hole volume quantification was performed by manual segmentation of the void area followed by point-trap triangulation in Amira. Reactive bone volume was also computed using methods described previously by Mys et al. (45). Briefly, the bone was segmented using adaptive thresholding techniques and masks described previously (45). Then, the thickness of all bone structures was calculated in IPL software (Scanco Medical AG, Switzerland), and all bone structures thicker than 6 voxels (63.0 µm) were assigned to the cortex. The reactive bone was calculated by subtracting the quantified outer mask from the cortex. Thresholding was set at 10 voxels to clean the reactive bone masks. The reactive bone volume calculations were performed only on the distal side of the pin to minimize the influence of the pin's position on the results.
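As a crude, hedged stand-in for the thickness-based cortex/reactive-bone separation described above (the actual analysis used adaptive thresholding and IPL's thickness computation), the sketch below treats bone that survives a morphological opening with a ball of roughly 6-voxel diameter as cortex; the threshold value and the placeholder volume are assumptions.

```python
import numpy as np
from scipy import ndimage

def split_cortex_reactive(ct_volume, bone_threshold, min_thickness_voxels=6):
    """Separate a bone mask into thick (cortex) and thin (reactive) structures
    by morphological opening with a spherical structuring element."""
    bone = ct_volume > bone_threshold              # simple global threshold
    r = min_thickness_voxels // 2                  # ~6 voxels = 63 um diameter
    ball = np.zeros((2 * r + 1,) * 3, bool)
    zz, yy, xx = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball[zz**2 + yy**2 + xx**2 <= r**2] = True
    cortex = ndimage.binary_opening(bone, structure=ball)
    reactive = bone & ~cortex
    return cortex, reactive

# Hypothetical usage on a reconstructed scan (placeholder data):
vol = np.random.rand(64, 64, 64)
cortex, reactive = split_cortex_reactive(vol, bone_threshold=0.8)
voxel_mm3 = (10.5e-3) ** 3                         # 10.5 um voxels, in mm^3
print(reactive.sum() * voxel_mm3, "mm^3 of reactive bone (toy numbers)")
```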
Histology
Following µCT, each mouse tibia was rinsed with ddH2O and decalcified in 14% EDTA tetrasodium solution for 7 days, with radiographic monitoring of the decalcification progress. Following decalcification, samples were paraffin-embedded, cut into 5 µm transverse sections, and mounted on glass slides for histological staining. Slides were deparaffinized and stained with Hematoxylin & Eosin (H&E) and Brown and Brenn (Gram) staining as described previously (43,46). Digital images of the stained slides were created using a VS120 Virtual Slide Microscope (Olympus, Waltham, MA, USA). Numbers of SACs were manually enumerated and averaged across two or more histologic sections at least 50 µm apart from 6-7 mice in each experimental group. Quantitative analysis of SAC area within the tibiae of C57BL/6J WT, NSG, and huNSG animals was performed on Brown and Brenn (Gram)-stained slides using Visiopharm (v.2019.07; Hoersholm, Denmark) colorimetric histomorphometry utilizing a custom Analysis Protocol Package (APP). Manual regions-of-interest (ROIs) were drawn around the tibia and the SACs within the tibia on each image prior to batch processing for automated quantification of SAC area normalized to tibial area between the groups. For immunofluorescent staining, the 5 µm formalin-fixed paraffin sections were incubated at 60°C overnight for deparaffinization. Tissue sections were quickly transferred to xylene and gradually hydrated by transferring slides to absolute alcohol, 96% alcohol, 70% alcohol, and then water. Slides were immersed in an antigen retrieval solution, boiled for 30 minutes, and cooled down for 10 minutes at room temperature (RT). Slides were rinsed several times in water and transferred to PBS. Non-specific binding was blocked with 5% normal donkey serum in PBS containing 0.1% Tween 20 and 0.1% Triton X-100 for 30 minutes at RT in a humid chamber. Primary antibodies were added to the slides and incubated in a humid chamber at RT overnight. Slides were quickly washed in PBS, and fluorescently labeled secondary antibodies were incubated for 2 hours at RT in a humid chamber. Finally, slides were rinsed for 1 hour in PBS and mounted with Vectashield antifade mounting medium with DAPI (H-1200, Vector Laboratories, Burlingame, CA, USA). Pictures were taken with a Zeiss Axioplan 2 microscope and recorded with a Hamamatsu camera.
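The normalization step in the ROI-based histomorphometry above reduces to a ratio of mask areas; a minimal sketch with hypothetical ROI masks follows.

```python
import numpy as np

def sac_area_fraction(sac_mask, tibia_mask, pixel_area_um2=1.0):
    """SAC area normalized to tibial area, mirroring the Visiopharm
    quantification described above; masks are boolean arrays derived
    from the manually drawn ROIs (assumed input format)."""
    sac_area = sac_mask.sum() * pixel_area_um2
    tibia_area = tibia_mask.sum() * pixel_area_um2
    return sac_area / tibia_area

# Toy masks: 3% of the tibial ROI occupied by SACs.
tibia = np.ones((1000, 1000), bool)
sac = np.zeros_like(tibia)
sac[:300, :100] = True
print(f"SAC area fraction: {sac_area_fraction(sac, tibia):.3f}")  # 0.030
```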
Transmission Electron Microscopy (TEM)
Brown and Brenn staining was performed to identify SAC presence within the intramedullary canal of MRSA-infected huNSG mice. Once a SAC was identified, the paraffin block was oriented to match the 5 µm section of the Brown and Brenn slide, in order to excise the precise area from the paraffin block. Once the right area was excised, the block was deparaffinized, post-fixed sequentially in 2.5% glutaraldehyde (24 hours) and 1.0% osmium tetroxide (90 minutes), dehydrated in a graded series of ethanol to 100%, transitioned into propylene oxide, infiltrated with EPON/Araldite epoxy resin, and finally embedded block-face down into a BEEM capsule lid for 48 hours at 60°C. The block was sectioned at one micron and stained with toluidine blue to confirm the SAC location, then thin-sectioned at 70 nm using a diamond knife and an ultramicrotome. The thin sections were mounted onto formvar/carbon-coated nickel slot grids and examined using a Hitachi 7650 transmission electron microscope; images were captured using a Gatan Erlangshen 11-megapixel digital camera and DigitalMicrograph software.
Flow Cytometry
Immunophenotyping of spleens from huNSG mice was performed according to protocols described previously (47). Briefly, single-cell suspensions of splenocytes were prepared, and 0.5 × 10^6 cells/mouse were initially stained with fixable viability dye eFluor™ 780 (eBioscience™, Thermo Fisher Scientific) for 30 minutes at 4°C to exclude dead cells from the analysis. Following washing, the following fluorochrome-conjugated anti-human antibodies were used for phenotyping huNSG splenocytes: BV510 CD45 (clone 2D1), PerCP CD3 (clone UCHT1), PE-Dazzle 594 CD8a (clone HIT8a), FITC CD4 (clone OKT4), PE-Cy5 CD19 (clone SJ25C1), and PE CD56 (clone HCD56). Single-channel compensation controls for these antibodies were created using human peripheral blood mononuclear cells (PBMCs). All antibodies were purchased either from BioLegend or BD Biosciences (San Jose, CA, USA). After staining, the cells were fixed with 2% formaldehyde/PBS prior to running on a BD FACSAria™ III multicolor flow cytometer (BD Biosciences). Flow data were analyzed using FlowJo version 10.6 (BD Biosciences), and the gating strategies are outlined in Supplemental Figure S1.
Statistical Analyses
Unpaired Student's t-test was used for statistical comparison of the flow cytometry data. Two-way ANOVA with Sidak's post-hoc tests was performed to compare body weight change over time. One-way ANOVA with Tukey's post-hoc tests was utilized for comparing osteolysis area, number of SACs, SAC area, log-transformed CFUs, and the number of immune cells revealed by immunostaining. All analyses were conducted using GraphPad Prism (version 9.0), and p < 0.05 was considered significant.
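A sketch of the one-way ANOVA plus Tukey post-hoc workflow on log-transformed CFU data follows, using SciPy and statsmodels with simulated group values; the group means, spreads, and sample sizes are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Hypothetical log10-transformed CFU burdens per group
wt    = rng.normal(4.0, 0.5, 14)
nsg   = rng.normal(4.7, 0.5, 14)
hunsg = rng.normal(5.1, 0.5, 17)

# One-way ANOVA across the three strains ...
f, p = stats.f_oneway(wt, nsg, hunsg)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

# ... followed by Tukey's post-hoc pairwise comparisons
values = np.concatenate([wt, nsg, hunsg])
groups = ["WT"] * 14 + ["NSG"] * 14 + ["huNSG"] * 17
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```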
Humanized NSG Mice Elicit Human T Cell Responses During the Establishment of S. aureus Osteomyelitis
Because NSG mice allow the engraftment of human immune cells, we hypothesized that MRSA infection would elicit a human immune response in huNSG mice (Supplemental Table S1). To test this, huNSG mice received a sterile (sham) or MRSA-contaminated tibial implant, and the spleens were harvested for analyses on day 14 post-op. Immunophenotyping by flow cytometry revealed that S. aureus infection induced significant upregulation of human CD3+ T cells (p = 0.029) and their subsets, CD4+ T helper cells (p = 0.007) and CD8+ cytotoxic T cells (p = 0.019), in huNSG mice compared to the control group (Figure 1A). No such induction of human CD19+ B cells or CD56+ natural killer (NK) cells was observed in infected huNSG mice. Additionally, immunofluorescent histochemistry revealed distinctive B and T cell areas in the spleens of infected huNSG mice (Figure 1B), and immunostaining with the human cell proliferation marker PCNA revealed expanding human T and B cells in huNSG mice in response to S. aureus (Figure 1C). However, anti-S. aureus human antibody responses in huNSG serum, measured with our custom Luminex assay, were undetectable 14 days post-infection (data not shown), and serum cytokine levels analyzed over time revealed only modest induction of human cytokines, including IFN-γ, TNF-α, and IL-13 (Supplemental Figure S2). Nonetheless, our results indicate that S. aureus infection induces a human immune response in the spleen of huNSG mice.
Humanized NSG Mice Exhibit Exacerbated Susceptibility to S. aureus Osteomyelitis
Given the potential negative impact of S. aureus immunotoxins on human immune cells, we hypothesized that huNSG mice would develop a more severe MRSA infection due to the presence and induction of the human immune system. To test this, we examined implant-associated osteomyelitis in huNSG mice and their age-matched NSG and C57BL/6J WT counterparts. In general, huNSG mice appeared sicker, failed to recover their body weight after implant surgery, and exhibited significantly increased weight loss throughout the 14-day study period (Figure 2A, p < 0.05). High-resolution µCT analyses of the tibiae revealed that S. aureus-infected huNSG mice displayed significantly greater peri-implant osteolysis at the insertion site compared to age-matched NSG and C57BL/6J WT controls (Figures 2B, C, p < 0.05). Interestingly, no differences in reactive bone volume were observed between these groups, suggesting that the engrafted human cells do not affect osteoblast activity (Supplemental Figure S3). No difference in osteolysis was observed in animals that underwent sterile-implant surgery, suggesting that the observed bone phenotype is due to S. aureus infection.
To assess the effects of the engrafted human cells on bacterial load, ex vivo CFU quantification was performed on the implants, which revealed higher bacterial loads in huNSG (13-fold, p = 0.012) and NSG (4.2-fold, p = 0.025) mice compared to C57BL/6J WT (Figure 3A). Similarly, an 86.2- to 215.2-fold higher CFU burden on the tibia (p < 0.01) and a 79.7- to 310.9-fold higher CFU burden in the infected soft tissue (p < 0.01) surrounding the bone were observed in huNSG and NSG mice (Figure 3B). Interestingly, significantly increased MRSA dissemination from the implant to internal organs (kidney, liver, heart, and spleen) was observed in huNSG mice compared to the control groups (Figure 3C). 14/17 huNSG mice were S. aureus culture-positive in at least one organ, while only 6/14 and 3/14 mice in the NSG and WT groups, respectively, were culture-positive in at least one organ (Figure 3C). Remarkably, some huNSG mice were highly septic due to MRSA bone infection, while others showed no dissemination (Figure 3C).
Next, histopathology of the infected tibiae was performed to further assess the extent of bone osteolysis in huNSG mice. H&E staining of the infected tibia confirmed the extensive osteolysis observed in huNSG mice (Figure 4E) compared to C57BL/6 WT (Figure 4A) and NSG (Figure 4C) controls. In addition, Brown and Brenn staining of the infected tibia revealed extensive formation of Staphylococcal abscess communities (SACs) in huNSG mice (Figure 4F) compared to the control groups (Figures 4B, D). The number of SACs formed per infected tibia was significantly higher in huNSG mice than in the control groups (Figure 4I, p < 0.05). Histomorphometric quantification revealed a marked increase in SAC area in huNSG mice, suggesting heightened severity of MRSA bone infection in these animals (Figures 4G-J, p < 0.05). TEM interrogation of mature SACs in huNSG mice confirmed the formation of a fibrin-like pseudocapsule (51,52), which sequesters and protects S. aureus from host immune cells (Figure 5).
Induction of Human T Cell Response in huNSG Tibiae Due to S. aureus Osteomyelitis
We next investigated the repertoire and spatial distribution of human T and B cells proximal to SACs via multicolor immunofluorescent histochemistry (Figure 6). The tibia sections from infected huNSG mice revealed significant induction and trafficking of human T cells clustered adjacent to SACs (Figures 6B, E, H, J, p < 0.0001). Human B cells were observed in sham-treated huNSG mice, but only small numbers of these cells were induced and trafficked in response to S. aureus infection (Figure 6J). Expectedly, no human B or T cells were identified in non-engrafted NSG control mice (Figures 6A, D, G), though S. aureus induced production of mouse Ly6G+ neutrophils in the infected tibiae of huNSG and NSG animals (Figure 6K, p < 0.05). Interestingly, the levels of murine Ly6G+ neutrophils in these animals were similar to the levels observed in C57BL/6 WT animals in response to S. aureus (Figure 6K). In addition, the presence of mouse Ly6G+ neutrophils in huNSG mice suggests recovery of innate cells after the γ-irradiation-induced myeloablation performed in NSG mice before HSC engraftment. Subsequent immunofluorescent staining of infected huNSG tibiae revealed CD3+ T-bet+ Type 1 human T cells adjacent to the SACs (Figures 6L, M). In addition, examination of huNSG tibia sections using proliferating cell nuclear antigen (PCNA) revealed that both human T and B cells proliferate near the SACs and that the percentage of proliferating human T cells (CD3+PCNA+ cells) is significantly higher than that of B cells (CD20+PCNA+ cells) (Supplemental Figure S4). Collectively, these results suggest S. aureus-mediated activation and proliferation of type 1 human T cells.
DISCUSSION
Development of effective immunotherapies against S. aureus remains among the greatest priorities in orthopedics, as bone infections caused by this pathogen continue to be a significant public health problem (1). The failure of several anti-S. aureus vaccine trials can be attributed to overreliance on preclinical murine studies, where S. aureus does not fully display its typical phenotype (10,53). Humanized mice have emerged as an attractive small animal model to investigate human disease (54). In the current study, we assessed their utility for studying S. aureus pathogenesis during implant-associated osteomyelitis. In this proof-of-concept study involving S. aureus transtibial implant-associated osteomyelitis in huNSG mice, we observed that these mice displayed increased susceptibility to S. aureus, as evidenced by increased weight loss and extensive peri-implant osteolysis compared to C57BL/6 mice. Others have shown that huNSG mice also display increased susceptibility to S. aureus infection in peritoneum, skin, and lung infection models (40)(41)(42), though these were acute infection studies, unlike the one described here. Importantly, the authors noted that huNSG mice required 10-100-fold fewer bacteria to produce pathology analogous to that in non-humanized mice (41). In our model, it is conceivable that the more severe infection phenotype in huNSG mice could be the result of the higher bacterial inoculum that we routinely use for achieving reproducible implant-associated osteomyelitis in C57BL/6 mice (22,23,43,44,55). Nonetheless, this critical finding needs to be carefully examined in our humanized mouse model of implant-associated osteomyelitis.

FIGURE 2 | Humanized mice exhibit increased body weight loss and osteolysis during S. aureus implant-associated osteomyelitis. (A) HuNSG mice and age-matched C57BL/6 WT and NSG controls underwent transtibial implantation of MRSA (USA300 LAC)-contaminated stainless steel wire, and total body weight was assessed over the 2-week infection period. The % of baseline body weight on days 0, 3, 7, and 14 is presented for each group with the mean +/- SD (n = 14-17, *p < 0.05, two-way ANOVA). (B) Tibiae implanted with sterile and MRSA-contaminated wires were harvested on day 14 post-op and processed for µCT; representative 3D renderings are shown to illustrate the levels of reactive bone formation and osteolysis around the implants. Note the extensive osteolysis in the infected huNSG tibia. (C) The osteolysis areas on the lateral and medial sides of the tibiae were quantified, and the data for each are presented with the mean +/- SD for the group (n = 6, *p < 0.05, one-way ANOVA). Note that osteolysis is greater on the medial side in this model due to the directionality of wire implantation from the medial to the lateral side.
In vivo MRSA infection in huNSG mice revealed markedly higher CFUs on tibial bone and soft tissue in both huNSG and NSG than C57BL/6J WT mice. Increased MRSA dissemination from the implant to distal organs was also observed in huNSG compared to the control groups confirming their increased susceptibility to S. aureus. We found these observations remarkable as several groups have observed no such bacterial dissemination in wild type mouse models of S. aureus osteomyelitis (56,57). The increased tibial bacterial load in the bone and MRSA dissemination in humanized mice could be attributed to induction of human immune response due to S. aureus, and the presence of staphylococcal immunotoxins that exhibit high tropism to human leukocyte receptors (29,30). This idea is consistent with the exacerbated lung pathology reported by Prince et al. and the decreased severity in huNSG mice infected with an MRSA strain deficient in the human-specific PVL toxin (42). Another example is the increased susceptibility to MRSA bacteremia in a humanized C57BL/6J mouse containing human CD11b receptor due to strong tropism of immunotoxin LukAB for human CD11b (58). These studies, including ours, highlight the adaptation processes that this pathogen has evolved to survive in the human host.
Analysis of the MRSA-infected tibia in huNSG mice revealed increased bone osteolysis compared to C57BL/6J WT mice. Perhaps the presence of human immune cells in the bone marrow of huNSG, and the ability of S. aureus to target human leukocytes are causing increased osteoclastogenesis and infection-associated trabecular bone loss during MRSA osteomyelitis (59). Nonetheless, the increased dysregulation of bone homeostasis during osteomyelitis in huNSG mice warrants further investigation.
An important finding of the current study is the extensive MRSA-induced SAC formation in huNSG mice. Quantitative analyses of the SACs show that the number of SACs per bone area was significantly higher in huNSG mice, suggesting increased interaction between S. aureus and human leukocytes in the bone. The formation of a multilayered SAC structure during osteomyelitis is a host-induced mechanism of infection control, which is manipulated by S. aureus through the deployment of several virulence genes, including clumping factor A (ClfA), chemotaxis inhibitory protein of staphylococci (CHIPS), and staphylococcal complement inhibitor (SCIN) (51,52,(60)(61)(62). In clinical studies, the lack of humoral immunity against SCIN and CHIPS correlated with adverse clinical outcomes in patients with S. aureus osteomyelitis (49). Assessing the expression of these genes in a huNSG SAC or in a 3D in vitro model of a SAC (63) using bone marrow cells from these animals could shed light on the virulence mechanisms associated with increased abscess formation in humanized mice. T cells are essential for orchestrating anti-S. aureus adaptive immunity, and studies have demonstrated their dichotomous roles in protection vs. pathogenesis during infections (64)(65)(66)(67)(68)(69). Analyses of human tissue samples from patients with implant-related bacterial biofilm infections indicate the presence of CD4 and CD8 T cells (70,71), and these T cells were terminally differentiated effector cells (72,73). However, these observations were not S. aureus-specific, and the exact role of T cells in the context of chronic S. aureus osteomyelitis remains poorly understood. Immunohistopathology of infected huNSG tibiae revealed increased numbers of clustered human T cells adjacent to abscesses, suggesting S. aureus-mediated human T cell activation and proliferation.

FIGURE 6 | Evidence of human T cell immune responses against S. aureus in huNSG mice with implant-associated osteomyelitis. The histology sections (n = 3-5 per group) described in Figure 4 were stained with fluorescently labeled antibodies specific for anti-mouse Ly6G, anti-human CD3, anti-human CD20, anti-human T-bet, and anti-human RORγT. Light microscopy of the H&E-stained sections (A-C) and fluorescent microscopy of adjacent 5 µm sections (D-I) were performed on the SACs in tibiae from infected NSG mice, infected huNSG mice, and sham-control huNSG mice. Black squares in the H&E images show the area depicted in the 3x3 mosaic immunofluorescent micrographs. Yellow squares show higher-magnification images of the CD3+ T cells (red), CD20+ B cells (green), and Ly6G+ neutrophils (white) in the sections of the 3x3 mosaic immunofluorescent micrographs. The dotted yellow line separates the SAC border from the rest of the bone marrow. Note that mouse Ly6G+ neutrophils accumulated inside and in close proximity to SACs, and the absence of human lymphocytes in infected NSG mice (D, G). In contrast, large numbers of human T and B cells accumulate around the SACs in the infected huNSG mice (E, H), while human lymphocytes are scant in uninfected huNSG mice (F, I). Histomorphometry was performed on 5 randomly chosen fields at 200X magnification in each condition (J, K), and aggregated data are presented as the mean +/- SEM for each group (n = 3-5 mice; ND, not detected; NS, not significant; ****p < 0.0001, one-way ANOVA). (L, M) Evidence of Type 1 human T cell induction (CD3+ T-bet+, white arrows) adjacent to the SACs.
Other studies using an intraperitoneal infection model in huNSG mice showed increased human T-cell activation and apoptosis due to S. aureus, which led to increased bacterial counts and higher mortality rates in mice (40). Conceivably, exacerbated T cell activation could be due to increased expression of T cell-targeting superantigens and immunotoxins in huNSG mice.
The current study is limited by inherent deficiencies of the NSG mouse, including limited myeloid lineage development and insufficient functional T cell development (35,74). Indeed, S. aureus infection was more severe in a humanized NSG mouse variant that allowed enhanced human myeloid lineage reconstitution (42,75,76). Additionally, we cannot exclude the effects of the sublethal γ-irradiation-induced myeloablation performed in NSG mice before HSC engraftment. This could be examined in NSG mice engrafted with murine bone marrow cells. A pulmonary infection model utilizing NSG mice engrafted with cells of C57BL/6J mice reported MRSA levels comparable to those in NSG and C57BL/6J control groups (42), ruling out a detrimental impact of radiation on the control of bacterial infection in the lungs. However, radiation could have a different effect on bone immunity. Nevertheless, further studies are warranted to improve this mouse model to make it highly relevant to human musculoskeletal infections, in addition to validating its usefulness for studying fracture-related infections and prosthetic joint infections and for evaluating novel experimental immunotherapies.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by Ethical committee of the canton of Grisons in Switzerland.
AUTHOR CONTRIBUTIONS
GM: study conception, experimental design, data acquisition and analysis, funding acquisition, and drafting the manuscript. TM, EMS, JD, RR, and SZ: experimental design, data analysis, funding acquisition, and drafting the manuscript. AW, JR-M, MH, KD, KM, MK, and ETS: experimental design, data acquisition, and analysis. All authors contributed to the article and approved the submitted version.

Supplementary Figure 4 | Evidence of human immune cell proliferation in the huNSG tibia due to S. aureus implant-associated osteomyelitis. The histology sections described in Figure 4 were stained with fluorescently labeled antibodies specific for goat anti-proliferating cell nuclear antigen (PCNA), anti-human CD3, and anti-human CD20. Light microscopy of the H&E-stained sections (A, C) and fluorescent microscopy of adjacent 5 µm sections (B, C, E, F) were performed on the SACs in tibiae from infected huNSG mice and sham-control huNSG mice. White squares show higher-magnification images of the CD3+ T cells (red), CD20+ B cells (green), and PCNA+ cells (white) in the sections of the 3x3 mosaic immunofluorescent micrographs. Note that proliferating PCNA+ human T cells and B cells accumulate around the SACs only in the infected huNSG mice (E, F). Histomorphometry was performed on 5 randomly chosen fields at 200X magnification in each condition (G, H), and aggregated data are presented as the mean +/- SEM for each group (n = 3 mice/group, **p < 0.01, ***p < 0.001, one-way ANOVA). Note that the percentage of proliferating human T cells (CD3+PCNA+ cells) is significantly higher than that of proliferating B cells (CD20+PCNA+ cells). | 2021-03-18T13:19:43.109Z | 2021-03-18T00:00:00.000 | {
"year": 2021,
"sha1": "31cdf3d9e01c956b3456d717dc499a749cefb1c3",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2021.651515/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "31cdf3d9e01c956b3456d717dc499a749cefb1c3",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17836217 | pes2o/s2orc | v3-fos-license | Addition of an induction regimen of antiangiogenesis and antitumor immunity to standard chemotherapy improves survival in advanced malignancies
Studies have shown that cancer requires two conditions for tumor progression: cancer cell proliferation and an environment permissive to and conditioned by malignancy. Chemotherapy aims to control the number and proliferation of cancer cells, but it does not effectively control the two best-known conditions of the tumor-permissive environment: neoangiogenesis and tolerogenic immunity. Many malignant diseases exhibit poor outcomes after treatment with chemotherapy. Therefore, we investigated the potential benefits of adding an induction regimen of antiangiogenesis and antitumor immunity to chemotherapy in poor outcome disease. In a prospective, randomized trial, we included patients with advanced, unresectable pancreatic adenocarcinomas, non-small cell lung cancer, or prostate cancer. Two groups of each primary condition were compared: group 1 (G1), n = 30, was treated with the standard chemotherapy and used as a control, and group 2 (G2), n = 30, was treated with chemotherapy plus an induction regimen of antiangiogenesis and antitumor immunity. This induction regimen included a low dose of metronomic cyclophosphamide, a high dose of Cox-2 inhibitor, granulocyte colony-stimulating factor, a sulfhydryl (SH) donor, and a hemoderivative that contained autologous tumor antigens released from patient tumors into the blood. After treatment, the G2 group demonstrated significantly longer survival, lower blood level of neoangiogenesis and immune-tolerance mediators, and higher blood levels of antiangiogenesis and antitumor immunity mediators compared with the G1 group. Toxicity and quality of life were not significantly different between the groups. In conclusion, in several advanced malignancies of different primary localizations, an increase in survival was observed by adding an induction regimen of antiangiogenesis and antitumor immunity to standard chemotherapy.
Introduction
Classical chemotherapy, which exerts its antitumor activity by causing damage and inducing apoptosis in rapidly dividing cells, has been a cornerstone of standard cancer treatment for several decades. The rationale for using classical chemotherapy is to kill malignant cells in order to reduce tumor size. However, this method has not provided satisfactory benefits for patients with advanced cancers and poor prognoses in terms of survival. Often, these patients experience disease progression after a short period of remission, if any, despite treatment with classical chemotherapy. This progression requires not only residual cancer cells, but also a biological response permissive to and conditioned by the malignancy, according to several reports in the current literature [1,2]. In these reports, two broad components of the permissive biological response were identified: neoangiogenesis and tolerogenic immunity. Therefore, in addition to using treatments that kill cancer cells, targeting these additional components may also improve the antiprogressive efficacy of the treatments.
Recently, several clinical trials have attempted to control neoangiogenesis by incorporating antiangiogenic therapies into classical chemotherapy treatments for malignancies with poor prognoses. Improvements in progression-free survival have been shown in some cases. However, it is premature to draw conclusions about the overall survival benefits based on currently available evidence [3]. In order to optimize these results, it was suggested that agents that target both neoangiogenesis and tolerogenic immunity, and not neoangiogenesis alone, might provide a greater benefit as adjuvants of chemotherapy. Indeed, the relevance of the tolerogenic immune component in a permissive biological response of malignant progression has been highlighted in reports that identified tolerogenic immunity as an early, permanent, and common phenomenon of malignancies [4]. Researchers previously reported that some standard chemotherapies [5,6], drugs used in non-cancer conditions [7,8], and cancer vaccines [9] could switch angiogenesis and immune responses from a tumor progression/tolerance balance to an antiprogressive/antitumor balance when used at specific dosages within a particular regimen. Therefore, in this study, we tested the effect of combining a set of agents that have been reported to promote antiangiogenesis and switch tumor tolerogenic immunity to antitumor immunity with standard chemotherapy [2]. In order to determine the applicability of our approach in different cancers, we tested the regimen in three malignancies with recognized poor prognoses, high prevalence, and appropriate survival expectancy for this study, namely unresectable [10] locally advanced pancreatic cancer, non-small-cell lung cancer (NSCLC), and hormone-refractory metastatic prostate cancer.
Pancreatic cancer is a worldwide health problem, and surgery is currently the only potentially curative treatment. However, the number of newly diagnosed patients with surgically resectable pancreatic cancer is limited to 10-20 %. Locally advanced disease is observed in 15-20 % of patients, which is associated with a median survival time of 6-10 months. To date, chemotherapy only provides a marginal improvement in the overall survival for these patients. Similarly, lung cancer is the leading cause of cancer-related mortality for men and women worldwide. In the United States, 222,520 new cases of lung cancer were diagnosed in 2010, and 157,300 deaths resulted from the disease. Approximately 85 % of primary lung cancers are categorized as NSCLC, which includes the main histological subtypes of adenocarcinoma, squamous cell carcinoma, and large cell carcinoma. The majority of NSCLC patients present with advanced disease at diagnosis, and the survival rates are quite low. The overall survival for patients with unresectable NSCLC is generally 13-14 months after treatment. Lastly, prostate cancer is one of the most common solid tumors affecting men. It is the second most commonly diagnosed form of cancer and the sixth leading cause of cancer-related deaths among men worldwide. Once metastasized to distant organs, prostate cancer is incurable, leaving clinicians with palliative care as the only option for disease management. In their hormone-refractory stage, more than 84 % of prostate tumors metastasize, with a median patient survival of approximately 14 months. Therefore, in this study, we investigated the potential benefits of adding an induction regimen of antiangiogenesis and antitumor immunity to chemotherapy in poor outcome disease.
Study design
A prospective, randomized, phase 1/2 trial was designed primarily to assess safety, tolerance, and preliminary efficacy of the combination of standard chemotherapy with the aforementioned, previously published treatment that switches both angiogenesis and immunity conditioning. This assessment was performed in patients with poor prognoses and unresectable malignancies of the pancreas, lung, or prostate. The study protocol was approved by the institutional review board and conducted in accordance with the Declaration of Helsinki [11]. Written informed consent was obtained from all patients at the time of enrollment.
For each primary localization, patients were included and randomly distributed in one of two groups: G1 (n = 30), which received standard chemotherapy for the cancer condition, and G2 (n = 30), which received standard chemotherapy and the antiangiogenesis and antitumor immunity induction regimen. The study design included a follow-up of 2 years. The primary endpoint was overall survival. Secondary endpoints were toxicity and quality of life.
All of the patients of the three primary localizations were regrouped into two cohorts: 90G1 (n = 90), which included patients who had only received standard chemotherapy for each primary localization, and 90G2 (n = 90), which included patients treated with the same chemotherapy and the induction regimen of antiangiogenesis and antitumor immunity agents. Blood concentrations of known mediators of angiogenesis and immunity were measured, and the series of values in the 90G1 and 90G2 cohorts were statistically compared.
Patients
Inclusion criteria were as follows: patients 18-65 years of age who were diagnosed with unresectable, histologically confirmed pancreatic adenocarcinoma, NSCLC, or prostate cancer; who had a performance status of 0-2 according to the Eastern Cooperative Oncology Group [12]; and who were expected to survive for at least 4 months. Organic functions required for inclusion were absolute neutrophil count ≥1,500/µL, lymphocyte count ≥1,000/µL, platelet count ≥100,000/µL, hemoglobin ≥8 g/dL, serum creatinine <1.5-fold of the upper limit of normal (ULN) value, alkaline phosphatase <3-fold, and bilirubin <1.5-fold of the ULN value. The included locally advanced pancreatic cancer patients had M0 metastatic status with locally advanced tumor and had undergone choledochoenteric bypass before inclusion. The included NSCLC patients had M0 metastatic status and locally advanced tumor, without epidermal growth factor receptor (EGFR) mutations. The included hormone-refractory metastatic prostate cancer patients had M1-stage disease. Exclusion criteria included patients who exhibited comorbidity requiring treatment, who were pregnant, and/or who could not complete the treatment regimen and follow-up.
Induction regimen of antiangiogenesis and antitumor immunity
In order to induce a switch of conditioning from the malignancy-induced neoangiogenesis and tolerogenic immunity to antiangiogenesis and antitumor immunity (Fig. 1), patients received oral cyclophosphamide (Cytoxan) 50 mg q.d., the Cox-2 inhibitor celecoxib (Celebrex) 400 mg b.i.d., and the sulfhydryl (SH) donor N-acetylcysteine (oral NAC) 400 mg b.i.d.
After the switch of conditioning, specific antitumor immunity was induced through subcutaneous immunization performed every 4 weeks using a thermostable autologous plasma fraction obtained from drawn blood. This fraction has been shown to contain tumor antigens released spontaneously and because of chemotherapy-induced apoptosis [13].
Assessments
The following tests were performed on all patients prior to treatment (baseline) and 3 months after the start of treatment. The results were expressed as a percentage of baseline levels.
Delayed-type hypersensitivity (DTH) assay was performed by injecting an aliquot of the autologous hemoderivative used in the immunization into the volar surface of the forearms. An induration >5 mm after 48 h was considered a positive DTH response.
An IFN-ELISPot assay was used to assess for the presence of IFN-producing T-lymphocytes. Dendritic cells (DCs) were pulsed with autologous hemoderivative immunogen from patients and healthy donors as controls. Pulsed DCs were co-incubated with autologous T-cells for 40 h. The total number of T-cells per well was 5 × 10⁴. The number of IFN spots was measured automatically using ELISPot software (Carl Zeiss Vision). The frequency of tumor-reactive T-cells was calculated as follows: (number of spots in wells with immunogen-pulsed DCs − number of spots in control wells)/number of T-cells per well. Individuals were considered positive when the number of spots in the presence of DCs pulsed with immunogen was significantly higher than in control wells (p < 0.05).
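The frequency formula above is simple to script. Below is a minimal Python sketch; the function name and the example spot counts are ours for illustration (the paper only fixes the well size at 5 × 10⁴ T-cells):

```python
def tumor_reactive_frequency(spots_immunogen, spots_control, t_cells_per_well=5e4):
    """ELISPot frequency: (spots with immunogen-pulsed DCs - spots in control
    wells) divided by the number of T-cells plated per well."""
    return (spots_immunogen - spots_control) / t_cells_per_well

# Hypothetical counts: 120 spots with immunogen-pulsed DCs vs. 15 in control wells
freq = tumor_reactive_frequency(120, 15)
print(f"{freq:.2e} tumor-reactive T-cells per T-cell plated")  # 2.10e-03
```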
Vascular endothelial growth factor (VEGF) and angiostatin (AT) levels in blood samples were determined by ELISA using standard laboratory techniques.
Efficacy and safety
Survival was plotted in Kaplan-Meier curves, and the mean and standard deviation of time required to reach 50 % of survival were calculated. The difference between means in different treatment groups was analyzed using the log-rank test. Statistical significance was set at p = 0.05.
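As a rough illustration of this survival analysis, the sketch below uses the open-source lifelines package; the exponential survival times are synthetic stand-ins, not the trial data:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Synthetic survival times (months) for two 30-patient arms -- illustrative only,
# loosely centered on the reported pancreatic-cancer means (10.2 vs. 18.0 months).
t_g1 = rng.exponential(scale=10.2, size=30)   # G1: chemotherapy alone
t_g2 = rng.exponential(scale=18.0, size=30)   # G2: chemotherapy + induction regimen
e_g1 = np.ones_like(t_g1)                     # 1 = death observed (no censoring here)
e_g2 = np.ones_like(t_g2)

kmf = KaplanMeierFitter()
kmf.fit(t_g1, event_observed=e_g1, label="G1")
median_g1 = kmf.median_survival_time_         # time at which survival reaches 50%
kmf.fit(t_g2, event_observed=e_g2, label="G2")
median_g2 = kmf.median_survival_time_

result = logrank_test(t_g1, t_g2, event_observed_A=e_g1, event_observed_B=e_g2)
print(median_g1, median_g2, result.p_value)
```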
A safety evaluation included monitoring for hematological toxicity, nausea/vomiting, changes in liver function, changes in renal function, and CNS toxicity. Cardiac function was monitored by echocardiograms. Toxicities were graded according to Common Terminology Criteria for Adverse Events (CTCAE) v3.0 of the National Cancer Institute [14]. Quality of life was scored using the current core questionnaire of the EORTC QLQ-C30 [15].

Results

Figure 2 shows the Kaplan-Meier plot [16] estimates for survival of G1 and G2 patients for each cancer studied (pancreatic, NSCLC, and hormone-refractory prostate cancer). The addition of the tested regimen in G2, which is a recognized procedure for eliciting antiangiogenesis and antitumor immunity, improved the survival rate compared with G1 patients who were only treated with chemotherapy. The mean survival was significantly longer for G2 patients than for G1 patients for the three tumor types analyzed: 18.0 versus 10.2 months (log-rank, p = 0.036), 16.7 versus 12.1 months (log-rank, p = 0.042), and 20.4 versus 16.8 months (log-rank, p = 0.048) for pancreatic cancer, NSCLC, and prostate cancer, respectively.
To interpret these findings, we also confirmed the efficacy of this regimen in the frame of this study for switching neoangiogenesis and tolerogenic immunity to antiangiogenesis and antitumor immunity. For this purpose, we assessed the percent change from baseline of markers of neoangiogenesis (VEGF), antiangiogenesis (AT), immunity response (aDC), and tolerogenic immunity (T-Reg) after 3 months of treatment. Figure 3 shows the results in the cohorts 90G1 and 90G2 expressed as a percentage of baseline levels. VEGF levels were significantly higher in 90G1 patients compared with 90G2 patients (196.0 ± 21.3 vs. 98.5 ± 9.2, respectively; p = 0.014). Moreover, AT reached levels significantly higher in patients in 90G2 compared with patients in 90G1 (186.1 ± 15.9 vs. 55.1 ± 8.3, respectively; p = 0.010). In addition, the T-Reg levels were significantly lower in 90G2 patients compared with 90G1 patients (58.0 ± 8.4 vs. 214.8 ± 17.4, respectively; p = 0.009). The levels of aDC increased in 90G2 patients compared with 90G1 patients (196.4 ± 21.3 vs. 64.7 ± 7.2, respectively; p = 0.010). Furthermore, we aimed to confirm that these changes in the immunity response conditioning and the immunization with the autologous hemoderivative were sufficient to allow for the emergence of anti-autologous-tumor immunity. Using the autologous hemoderivative containing tumor antigens for the immune challenge, we performed a DTH test to assess for cell-mediated immune responses and an IFN-ELISPot assay to measure IFN-producing T-lymphocytes. We found that compared with baseline values, the percentage of positive DTH in 90G2 patients significantly increased after 3 months of therapy (204.8 ± 17.4), while the percentage decreased slightly in patients of the 90G1 group (82.6 ± 6.3; p = 0.001). In addition, the number of spots in the IFN-ELISPot assay, expressed as a percentage of baseline, was higher for the 90G2 patients compared with 90G1 patients after 3 months of therapy (144.6 ± 11.7 vs. 91.3 ± 1.2, respectively; p = 0.012).
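The paper reports group comparisons of percent-of-baseline marker levels; the exact test is not stated in this extract, so a Welch two-sample t-test stands in below, and all readings are invented:

```python
import numpy as np
from scipy import stats

def percent_of_baseline(baseline, month3):
    """Express each patient's 3-month marker level as a percentage of baseline."""
    return 100.0 * np.asarray(month3, dtype=float) / np.asarray(baseline, dtype=float)

# Invented VEGF readings (arbitrary units) for a few patients per cohort
vegf_g1 = percent_of_baseline([100, 120, 90], [195, 240, 170])   # rises under chemo alone
vegf_g2 = percent_of_baseline([110, 95, 105], [108, 92, 104])    # roughly flat with the regimen

t_stat, p_val = stats.ttest_ind(vegf_g1, vegf_g2, equal_var=False)  # Welch's t-test
print(vegf_g1.round(1), vegf_g2.round(1), round(p_val, 4))
```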
As shown in Fig. 4, no significant differences (p > 0.05) were observed in toxicities or quality of life profiles between the two cohorts during the 2-year follow-up period. The toxicities and quality of life profiles observed in the cohort receiving the induction regimen of antiangiogenesis and antitumor immunity were as expected and related to the chemotherapy regimen.
Discussion
[Figure 3 caption: Percentage of baseline (pretreatment) values (mean ± SD) at 3 months of follow-up in the 90G1 cohort treated with standard chemotherapy compared with the 90G2 cohort treated with the same chemotherapy and an induction regimen of antiangiogenic and antitumor immunity agents. Antiangiogenesis was monitored by measuring VEGF and angiostatin blood levels. Antitumor immunity conditioning was determined by assessing the number and presence of T-Regs and aDCs. Antitumor immunity was tested with DTH and IFN-ELISPot assays challenged with an autologous hemoderivative containing tumor antigens.]

It is now well accepted that carcinogenesis includes the conditioning of a patient's biological response, including neo-angiogenesis and tolerogenic immunity, for disease progression to occur. This study aimed to explore the rationale of a complementary therapeutic approach that targets angiogenesis and immunity. In this trial, the analysis of two 30-patient groups of three different primary cancer types showed that survival was significantly improved when an induction regimen of antiangiogenesis and antitumor immunity was added to chemotherapy compared with chemotherapy alone. This survival improvement was observed in patients with advanced pancreatic cancer, NSCLC, and hormone-refractory prostate cancer. Interestingly, the degree of improvement, though varied, was significant for all three of the primary diseases assessed in this study, indicating that this approach has a general benefit and suggesting that the pathogenic and therapeutic mechanisms involved are essential for malignancies.
The link between these effects on survival and the modulation of the biological response was shown by comparing a 90-patient cohort that was treated with only chemotherapy with a 90-patient cohort treated with chemotherapy plus the induction regimen of antiangiogenesis and antitumor immunity. Although separate clinical trials for each type of cancer would be beneficial for analysis purposes, we believe that this design was more effective for assessing the essential mechanism proposed for the development of malignancies. The comparability of the analyzed groups was possible due to enrollment of the same number of patients with the three tumor types in each group as well as the use of the same inclusion and exclusion criteria.
After 3 months of treatment, the assessment of angiogenesis and immunity mediators in blood showed a net increase in AT and a net decrease in VEGF levels, as well as a net increase in aDCs and a net decrease in T-Reg cells. These results are in agreement with a change in the conditioning of the biological response induced by malignancies, neo-angiogenesis, and tolerogenic immunity [17][18][19]. The conditioning becomes more antiangiogenic and less immune tolerogenic when chemotherapy is combined with an induction regimen of antiangiogenesis and antitumor immunity. This modulatory activity can be explained by the known properties of the agents included in this regimen [20]. Metronomic treatment with a low dose of cyclophosphamide has been shown to be not only antiangiogenic, due to its antiproliferative activity upon endothelial cells, but also antitolerogenic, by selectively depleting regulatory T-cells and restoring T and NK effector functions in immunity [21]. In addition, Cox-2 inhibitors interfere with VEGF expression in neoangiogenesis and also block indoleamine 2,3-dioxygenase (IDO) activity, which is required to generate the tolerogenic immunity of T-Regs [22]. Granulocyte colony-stimulating factor (G-CSF) increases the number of peripheral blood DCs and the expression of their activation markers [23], thereby improving antigen processing and presentation (i.e., classical non-tolerogenic immunity). Furthermore, sulfhydryl (SH) donors improve the generation of angiostatin from autoproteolysis of plasmin [24], allowing tumor infiltration by blood immune-competent cells [25]. Taken together, the properties of the drugs included in the induction regimen explain the tumor infiltration effects of the antiangiogenics and the non-tolerogenic immune-responder cell population. However, the generation of antitumor immunity requires not only immune-responder cells, but also a challenge of the immune system with tumor antigens. In the tested antiangiogenesis and antitumor immunity induction regimen, the tumor antigens were those released from tumors into the bloodstream by spontaneous [26][27][28] or chemotherapy-induced apoptosis [29,30] of previously stressed cells [31]. Indeed, it has been reported that some of the tumor antigens released by tumors, which circulate as protected complexes with heat-shock proteins, can induce vaccination against tumors [32][33][34][35] and can be recovered in a thermostable hemoderivative [13]. This hemoderivative was used as the tumor immunogen to challenge the conditioned immune-responder cells. The results of cell-mediated immune responses assessed by DTH and IFN-ELISPot assays indicated the efficiency of this immunogen in inducing antitumor immunity. Introducing autologous antitumor immunity in cancer treatments, as previously stated [36], adds autologous tumor specificity, exposes the current tumor antigen library to the immune system, and provides immune memory. Taken together, the results of this study are compatible with the rationale of combining tumor cell killing with an induction regimen of antiangiogenesis and antitumor immunity.
Conclusions
In advanced malignant diseases with poor prognoses that are treated with standard chemotherapy, the addition of an induction regimen of antiangiogenic and antitumor immunity agents that effectively switch the biological response from neoangiogenesis to antiangiogenesis and the immunity from permissive to antitumor immunity safely improved the survival of patients with three different tumor types in this study. Although these results are preliminary, they encourage further studies to confirm the clinical relevance of these findings.
"year": 2012,
"sha1": "8bd628c2d547087e9b1387f161a4e7901ec8ce6f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12032-012-0301-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8bd628c2d547087e9b1387f161a4e7901ec8ce6f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A GREY HYBRID MODEL TO SELECT THE OPTIMAL THIRD-PARTY LOGISTICS PROVIDER
Enterprises need to work with a proper third-party logistics provider to reduce costs and increase their logistics performance; so the third-party logistics (3PL) provider selection problem is a significant one for them. In this study, grey step-wise weight assessment ratio analysis (SWARA) and grey combinative distance-based assessment (CODAS) are proposed to address this problem. To the best of the authors’ knowledge, no other study uses grey SWARA and grey CODAS together to solve any problem. Therefore a new grey hybrid model incorporating grey SWARA and grey CODAS is proposed to identify the best 3PL provider.
INTRODUCTION
In an environment in which global competition is intense, businesses need to benefit from every positive opportunity to improve their performance. In this environment, strong competition and customer satisfaction push enterprises to work in close cooperation with external collaborators. Effective close partnerships with external partners enable companies to gain a competitive advantage. Outsourcing, which can result in greater profitability and competitiveness, is one of these enterprise activities [1]. Most global companies outsource their logistics activities. For example, according to Forrester Research, 54 per cent of Fortune 500 enterprises have outsourced their distribution services, 78 per cent of them have outsourced their transportation services, and 46 per cent have outsourced their manufacturing activities [2]. Thus it can be said that logistics outsourcing is important for companies. Additionally, logistics outsourcing has become an indispensable component of all enterprises because of the increased cost pressure on businesses and the globalisation of enterprise activities [3].
The execution of logistics activities by a good strategic partner will provide the following benefits: decreased costs, increased logistics performance, and a focus on their core business activities and on building virtual businesses [4]. The logistics activities of companies can be carried out by third-party logistics (3PL) providers instead of the enterprises themselves. Although 3PL providers function at locations in the supply chain between the producer and the end consumer, they are named 'third party' because they do not have their own products [5]. The services that an enterprise requires, such as freight consolidation and distribution, pro-logistics transportation, cross-docking, and storing and stock management, can be supplied by a 3PL provider [6]. Companies need to work with strategic 3PL providers to take advantage of such benefits.
Since a 3PL provider has a critical position and role in logistics tasks, working with a high-performance 3PL provider will allow logistical activities to be carried out properly. A range of quantitative and qualitative attributes, which are frequently in conflict with each other, may be involved in the procedure to select 3PL providers; so this selection is a multi-criteria decision-making problem that includes various types of vagueness [6]. Multi-criteria decision-making methods are used to solve these types of problem, which are affected by several attributes [7]. As human preference, perception, intuition, and judgement remain uncertain and hard to gauge, methods that use crisp numbers might not always be sufficient to handle an uncertainty problem. In order to address this issue, many approaches, such as fuzzy set theory (FST), rough theory (RT), and grey theory (GT), are proposed in the literature. According to the literature, FST has been used more than GT in selecting a 3PL provider. However, GT considers the circumstance of fuzziness, and that is the key advantage of GT over FST; and GT generates satisfactory results using limited, small, and incomplete data [8][9][10]. Therefore, in this study a grey hybrid multi-criteria decision-making method is preferred to handle the vagueness issue in selecting a 3PL provider. This study contributes to the literature by proposing a new grey hybrid model that incorporates grey step-wise weight assessment ratio analysis (SWARA) and grey combinative distance-based assessment (CODAS) to identify the best 3PL provider.
The structure of this article is as follows. Section 2 presents a detailed literature review related to 3PL provider selection and the CODAS method. The grey SWARA and CODAS methods are elucidated in Section 3. A case study in the textile industry related to the application of the grey hybrid model, and a comparison of the results of grey CODAS with those of the grey COPRAS (complex proportional assessment) [11], grey additive ratio assessment (ARAS) [12], and grey multi-attributive border approximation area comparison (MABAC) [13] multi-criteria decision-making (MCDM) methods, is given in Section 4. Section 5 presents the discussion, followed by a brief conclusion.
Literature review related to third-party logistics provider evaluation
A company needs to be able to increase its competitiveness by cooperating closely with partners in the competitive global environment. A 3PL provider that helps companies to take advantage in the competitive environment is a significant outsourcing partner for such companies. 3PL providers should have professional experience in the services of transportation, warehousing, and so forth, as they mostly focus their attention on these services [14]. The selection of the right 3PL provider is a crucial issue for firms, given the increasing significance of outsourcing logistics [15].
The studies in the literature related to the selection of a 3PL provider used MCDM, artificial intelligence, statistical methods, hybrid methods, and mathematical programming [16]. Among these approaches, integrated methods are useful to determine the most important assessment criteria and to choose the best 3PL provider [16]. For instance, Zhang, Shang and Li [17] suggested an integrated model using K-means clustering, TOPSIS, and an information granulation entropy approach to choose a 3PL provider. In their study, an information granulation entropy approach was used to identify the weights of the criteria, and TOPSIS was used to rank the 3PL providers. Their integrated model considered five main assessment criteria: enterprise culture, financial performance, client relationships, improvement and compatibility, and operational capabilities. Falsini et al. [18] proposed a hybrid method integrating the analytic hierarchy process (AHP), linear programming, and data envelopment analysis to assess and choose the best 3PL provider in Italy. They took into account seven main criteria: speed of service, environmental safeguards, equipment, costs, flexibility, operators' safety, and quality and reliability. They also validated their model in three sectors: perishable products, industry and defence, and consumer goods. Kabir [19] integrated the fuzzy analytic hierarchy process (FAHP) and fuzzy TOPSIS methods to assess and select a suitable 3PL provider. Wong [20] suggested a decision support system consisting of pre-emptive fuzzy integer goal programming and the fuzzy analytic network process (FANP) to identify the best 3PL provider in a global supply chain. In another study, Perçin and Min [21] proposed an integrated approach using zero-one goal programming, quality function deployment, and fuzzy linear regression to identify the best 3PL provider for a company in the automobile industry. Hsu et al. [22] suggested a hybrid model using the analytic network process (ANP), Decision Making Trial and Evaluation Laboratory (DEMATEL), and grey relation to identify the best outsourcing partner for a Taiwanese firm. Akman and Baynal [2] combined FAHP and fuzzy TOPSIS to determine the best 3PL provider among seven alternatives for a Turkish tyre company. Hwang and Shen [23] proposed a non-additive fuzzy integral to identify criteria weights and select the best 3PL provider. The six main criteria they considered were information technology, cost, service, performance, quality assurance, and intangibles. Sharma and Kumar [24] integrated quality function deployment and the Taguchi loss function to choose the optimal 3PL provider for an Indian ball-bearing manufacturing firm. Yayla et al. [25] developed a model consisting of FAHP and fuzzy TOPSIS to determine the most appropriate 3PL provider for a Turkish confectionery firm. The three criteria they considered were service quality, developing sustainable relationships, and continuous improvement. Sahu et al. [26] developed a model based on interval-valued fuzzy numbers to assess and choose 3PL providers for an automobile part manufacturing enterprise in India. Govindan et al. [6] proposed a grey DEMATEL model to develop the selection criteria for a 3PL provider for an automobile manufacturing firm in Iran. Keshavarz Ghorabaee et al. [3] developed an integrated model based on interval type-2 fuzzy sets, including weighted aggregated sum product assessment (WASPAS) and criteria importance through inter-criteria correlation (CRITIC), to assess 3PL providers.
Jung [27] proposed an FAHP method to solve the 3PL provider assessment problem taking into account social sustainability. Raut et al. [1] combined data envelopment analysis and ANP to assess and select 3PL providers. Ji et al. [28] developed a model based on single valued neutrosophic sets with the Bonferroni mean operator to identify the best 3PL provider. Karbassi Yazdi et al. [29] combined the Delphi method, the entropy method, and an area-based evaluation method for ranking to select 3PL providers for the Iranian automobile industry. Chen et al. [30] developed a model using extended regret theory and fuzzy axiomatic design to select the best logistics provider in an omni-channel environment. Singh et al. [31] integrated FAHP and fuzzy TOPSIS to determine the best 3PL provider for an Indian food manufacturing company. Sremac et al. [32] integrated rough SWARA, rough WASPAS, and the rough Dombi aggregator to determine the best 3PL provider for the Serbian chemical industry. Ecer [16] integrated evaluation based on distance from average solution (EDAS) and FAHP to choose the best 3PL provider for a Turkish marble company. Pamucar [15] combined the best-worst method, MABAC, and WASPAS based on interval rough numbers to assess 3PL providers.
Literature review related to CODAS
The CODAS method (developed by Keshavarz Ghorabaee [33]), which is a kind of MCDM method, has been used in the literature to rank alternatives by using two distance approaches (Euclidean and Taxicab). Many studies have used this technique and its types to address MCDM problems. Table 1 summarises the studies that have used CODAS and its types:

Keshavarz Ghorabaee [34] suggested fuzzy CODAS to assess and select the market segment for a shoe company.
Panchal et al. [35] integrated fuzzy AHP and fuzzy CODAS to select the best maintenance strategy for an Indian urea fertilizer business.
Bolturk and Kahraman [36] proposed interval-valued intuitionistic fuzzy CODAS to select the best wave energy facility location.
Bolturk [37] suggested Pythagorean fuzzy CODAS to solve the supplier selection problem.
Badi et al. [38] proposed CODAS to choose the best supplier for a Libyan steelmaking company.
Pamučar et al. [39] suggested a linguistic neutrosophic CODAS method to choose the optimal power-generation technology located in Libya.
Mathew and Sahu [40] proposed the CODAS, WASPAS, EDAS, and multi-objective optimisation on the basis of ratio analysis (MOORA) methods to solve material handling equipment selection problems.
Ren [41] integrated interval AHP and intuitionistic fuzzy CODAS to rank alternatives for energy storage technologies.
Dahooei et al. [42] suggested CODAS with interval-valued intuitionistic fuzzy sets to assess the business intelligence of enterprise systems.
Peng and Garg [43] combined weighted distance-based approximation, CODAS, and a similarity measure to solve the problem of mines' emergency decision-making in an interval-valued fuzzy soft decision environment.
Yeni and Özçelik [44] proposed interval-valued Atanassov intuitionistic fuzzy CODAS to solve a personnel selection problem for a company.
Karaşan et al. [45] suggested interval-valued neutrosophic CODAS to select the location for a wind energy plant in Turkey.
Laha and Biswas [46] combined the entropy and CODAS methods to analyse the performance of banks in India.
As can be seen from the table above, no study has used the grey SWARA and the grey CODAS (CODAS-G) methods together to solve any problem. This study will fill this gap in the literature. The next section describes the methods applied in this study.
MATHEMATICAL BACKGROUND
This section consists of three sub-sections: arithmetic operations for grey numbers, grey SWARA, and CODAS-G.
Arithmetic operations for grey numbers
Suppose that ⨂G₁ = [g̲₁; g̅₁] and ⨂G₂ = [g̲₂; g̅₂] denote two non-negative grey numbers, where g̲ and g̅ are the lower and upper bounds respectively, and k is a crisp and positive natural number. Arithmetic operations for these numbers are indicated as follows [8]:

⨂G₁ + ⨂G₂ = [g̲₁ + g̲₂; g̅₁ + g̅₂] (1)

⨂G₁ − ⨂G₂ = [g̲₁ − g̅₂; g̅₁ − g̲₂] (2)

⨂G₁ × ⨂G₂ = [min(g̲₁g̲₂, g̲₁g̅₂, g̅₁g̲₂, g̅₁g̅₂); max(g̲₁g̲₂, g̲₁g̅₂, g̅₁g̲₂, g̅₁g̅₂)] (3)

k × ⨂G₁ = [kg̲₁; kg̅₁] (4)

Division follows from equation 3 as ⨂G₁ ÷ ⨂G₂ = ⨂G₁ × [1/g̅₂; 1/g̲₂]. For Euclidean and Taxicab distances, equations 5 and 6 respectively are used:

d_E(⨂G₁, ⨂G₂) = √(((g̲₁ − g̲₂)² + (g̅₁ − g̅₂)²)/2) (5)

d_T(⨂G₁, ⨂G₂) = (|g̲₁ − g̲₂| + |g̅₁ − g̅₂|)/2 (6)
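To make these operations concrete, here is a minimal Python sketch of an interval grey number with the arithmetic of equations 1-6; the class name and the exact distance forms follow the reconstruction above, not the paper's original notation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Grey:
    """Interval grey number [lo; hi] implementing equations 1-6 above."""
    lo: float
    hi: float

    def __add__(self, other):                       # equation 1
        return Grey(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):                       # equation 2
        return Grey(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):                       # equations 3 and 4
        if isinstance(other, (int, float)):         # scalar k * grey number
            return Grey(other * self.lo, other * self.hi)
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Grey(min(p), max(p))

    __rmul__ = __mul__

    def __truediv__(self, other):                   # division via the reciprocal
        return self * Grey(1.0 / other.hi, 1.0 / other.lo)

def d_euclidean(a: Grey, b: Grey) -> float:         # equation 5
    return (((a.lo - b.lo) ** 2 + (a.hi - b.hi) ** 2) / 2.0) ** 0.5

def d_taxicab(a: Grey, b: Grey) -> float:           # equation 6
    return (abs(a.lo - b.lo) + abs(a.hi - b.hi)) / 2.0

g1, g2 = Grey(2.0, 3.0), Grey(1.0, 2.0)
print(g1 + g2, g1 - g2, g1 * g2, 2 * g1, d_euclidean(g1, g2), d_taxicab(g1, g2))
```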
Grey SWARA
In this study, the grey SWARA [47] method is used to determine the weights of the defined criteria. This method is an extension of traditional SWARA [48]. The steps of grey SWARA are explained as follows:

Step 1.1. The defined criteria are ordered by the decision-makers in descending order of expected importance.

Step 1.2. The relative importance of the jth criterion is identified by comparing it with the (j − 1)th criterion. This process continues until the last criterion. When decision-makers compare two criteria, they use the linguistic comparison terms shown in Table 2 [7,9] to compute ⨂s_j (the grey comparative importance of the average value).

Step 1.3. The grey coefficient (⨂k_j) is calculated by using equation 7:

⨂k_j = [1; 1] for j = 1; ⨂k_j = ⨂s_j + [1; 1] for j > 1 (7)

Step 1.4. The recalculated grey weight for each criterion, ⨂q_j, is computed with equation 8:

⨂q_j = [1; 1] for j = 1; ⨂q_j = ⨂q_(j−1) ÷ ⨂k_j for j > 1 (8)

Step 1.5. The grey weight for each criterion is computed with equation 9:

⨂w_j = ⨂q_j ÷ Σ_(k=1..n) ⨂q_k (9)

⨂w_j = [w̲_j; w̅_j] denotes the grey weight of the jth criterion. After all decision-makers have identified the grey weights, the arithmetic mean is used to consolidate these weights, which are transferred into CODAS-G.
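A compact Python sketch of steps 1.3-1.5, assuming the reconstructed equations 7-9; grey numbers are represented as (lower, upper) tuples, and the example comparison values are hypothetical:

```python
def grey_swara(s):
    """Grey SWARA weights from the comparison values `s` for criteria 2..n
    (equations 7-9 as reconstructed above). Grey numbers are (lo, hi) tuples."""
    k = [(1.0, 1.0)] + [(lo + 1.0, hi + 1.0) for lo, hi in s]         # equation 7
    q = [(1.0, 1.0)]
    for k_lo, k_hi in k[1:]:                                          # equation 8
        q_lo, q_hi = q[-1]
        q.append((q_lo / k_hi, q_hi / k_lo))      # grey division
    tot_lo = sum(lo for lo, _ in q)
    tot_hi = sum(hi for _, hi in q)
    return [(lo / tot_hi, hi / tot_lo) for lo, hi in q]               # equation 9

# Hypothetical Table 2 comparison values for four criteria ordered by importance
weights = grey_swara([(0.15, 0.25), (0.30, 0.40), (0.45, 0.55)])
print([(round(lo, 3), round(hi, 3)) for lo, hi in weights])
```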
CODAS-G
In this study, the grey extension of CODAS is proposed to select the 3PL provider. The steps of CODAS-G are explained as follows:

Step 2.1. Decision-makers assign the terms in Table 3 with respect to the performance of a 3PL provider, and these scores are aggregated by using the arithmetic mean to structure the grey decision matrix (⨂X) as follows:

⨂X = [⨂x_ij], i = 1, …, m; j = 1, …, n (10)

where ⨂x_ij = [x̲_ij; x̅_ij] is the grey performance value of the ith 3PL provider on the jth criterion.

Step 2.2. The grey normalised decision matrix is calculated by using equation 11:

⨂n_ij = [x̲_ij / max_i x̅_ij; x̅_ij / max_i x̅_ij] if j ∈ B; ⨂n_ij = [min_i x̲_ij / x̅_ij; min_i x̲_ij / x̲_ij] if j ∈ N (11)

where B and N indicate the sets of beneficial and non-beneficial criteria respectively. In equation 11, ⨂n_ij denotes the grey normalised value of ⨂x_ij.
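The exact normalisation used in the source is not recoverable from the extraction, so the sketch below implements the max/min form reconstructed in equation 11; grey numbers are (lower, upper) tuples, and the example column is invented:

```python
def normalise_column(col, beneficial=True):
    """Grey normalisation of one criterion column (equation 11 as reconstructed).
    `col` is a list of (lo, hi) grey performance values for all providers."""
    if beneficial:
        m = max(hi for _, hi in col)              # largest upper bound in the column
        return [(lo / m, hi / m) for lo, hi in col]
    m = min(lo for lo, _ in col)                  # smallest lower bound in the column
    return [(m / hi, m / lo) for lo, hi in col]

# Hypothetical cost column (non-beneficial) for four 3PL providers
print(normalise_column([(3, 5), (4, 6), (2, 4), (5, 7)], beneficial=False))
```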
Step 2.3. Grey weighted normalised values are computed by using equation 12:

⨂v_ij = ⨂w_j × ⨂n_ij (12)

Step 2.4. The grey negative-ideal solution is determined as follows:

⨂ns = [⨂ns_j], j = 1, …, n (13)

⨂ns_j = [min_i v̲_ij; min_i v̅_ij] (14)

Step 2.5. The ⨂E_i and ⨂T_i distances of 3PL providers from the grey negative-ideal solution are computed by using equations 15 and 16, with the subtraction, square, and absolute value evaluated by the grey arithmetic of Section 3.1:

⨂E_i = √(Σ_(j=1..n) (⨂v_ij − ⨂ns_j)²) (15)

⨂T_i = Σ_(j=1..n) |⨂v_ij − ⨂ns_j| (16)
Step 2.6. Equation 17 is used for the conversion of ⨂E_i into crisp E_i, and equation 18 is used for the conversion of ⨂T_i into crisp T_i:

E_i = (1 − λ)E̲_i + λE̅_i (17)

T_i = (1 − λ)T̲_i + λT̅_i (18)

In equations 17 and 18, the whitenisation coefficient λ is set as 0.5 for this study.
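Whitenisation is a one-liner; a sketch assuming the reconstructed equations 17-18 (the tuple representation and the example value are ours):

```python
def whiten(grey, lam=0.5):
    """Whitenisation of a grey number (lo, hi) into a crisp value (eqs. 17-18)."""
    lo, hi = grey
    return (1.0 - lam) * lo + lam * hi

print(whiten((0.31, 0.53)))   # hypothetical grey distance -> 0.42
```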
Step 2.7. The relative assessment matrix (RA) shown in equation 19 is established by using equation 20:

RA = [h_ik], i, k = 1, …, m (19)

h_ik = (E_i − E_k) + (ψ(E_i − E_k) × (T_i − T_k)) (20)

ψ, indicating the threshold function, can be shown as:

ψ(x) = 1 if |x| ≥ τ; ψ(x) = 0 if |x| < τ (21)

τ denotes the function's threshold parameter, and the decision-maker can set this value between 0.01 and 0.05. In this study, this value is set at 0.02.

Step 2.8. The final score (H_i) for each 3PL provider can be computed as:

H_i = Σ_(k=1..m) h_ik (22)

The 3PL provider that has the highest H_i is identified as the best 3PL provider.
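Steps 2.7-2.8 can be sketched in a few lines of Python; the distances below are invented, and the function name is ours:

```python
def codas_scores(E, T, tau=0.02):
    """Relative assessment and final CODAS scores (equations 19-22). E and T are
    the crisp Euclidean and Taxicab distances from the negative-ideal solution."""
    def psi(x):                                   # threshold function, equation 21
        return 1.0 if abs(x) >= tau else 0.0
    m = len(E)
    return [sum((E[i] - E[k]) + psi(E[i] - E[k]) * (T[i] - T[k]) for k in range(m))
            for i in range(m)]                    # H_i, equation 22

# Hypothetical distances for four 3PL providers
H = codas_scores(E=[0.42, 0.35, 0.18, 0.27], T=[0.96, 0.80, 0.41, 0.62])
best = max(range(len(H)), key=lambda i: H[i])     # highest H_i -> best provider
print(H, best)
```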
The next section illustrates the application of the proposed model.
APPLICATION
The grey integrated model was applied to a Turkish textile firm that manufactures fabric. The company wanted to cooperate with a 3PL provider to deliver its products to global markets. For the evaluation process, an expert team consisting of this firm's top management (five people) was formed and asked to decide on the criteria used in the literature. The expert team took a joint decision to use seven criteria in the selection process: cost (C), delivery (D), quality (Q), services (S), flexibility (FL), reputation (R), and financial position (FP).
First, the steps of grey SWARA were applied to derive grey weights. Table 4 illustrates the results of the grey SWARA for Expert 1. The grey weights of the criteria were also calculated for the other experts using the grey SWARA method. After this process, all the grey weights of the criteria were combined using the arithmetic mean. Table 5 presents the combined grey weights. After identifying the grey weights, the grey decision matrix (⨂X) was structured by using the arithmetic mean to aggregate the preferences of the decision-makers. Table 6 gives the grey decision matrix (⨂X). Equation 11 was used to determine the grey normalised decision matrix, shown in Table 7. The grey weights of the criteria were multiplied by the grey normalised values to obtain the grey weighted normalised values, using equation 12. Equation 14 was used to determine the grey negative-ideal solution. Table 8 presents the grey weighted normalised values and the grey negative-ideal solution. The ⨂E_i and ⨂T_i distances were computed by using equations 15 and 16 respectively. These grey values were converted into crisp E_i and T_i by using equations 17 and 18 respectively. Table 9 illustrates these values for each 3PL provider. In the final step, the relative assessment matrix (RA) was generated, and the final score for each 3PL provider was calculated. Table 10 presents this matrix and the final scores (H_i). According to Table 10, the 3PL providers were sequenced as follows: 3PL1 > 3PL2 > 3PL4 > 3PL3. The results of the grey CODAS were compared with the results from other grey methods (grey COPRAS, grey ARAS, and grey MABAC). Table 11 gives the comparison of the grey methods. As can be seen from Table 11, the rankings of the 3PL providers did not change. This indicates that the CODAS-G method achieved accurate results.
DISCUSSION
As can be seen in the literature review section of this paper, the AHP, fuzzy AHP, and ANP methods have been used many times in the literature. The grey SWARA method has a less complex structure than these other methods, and can obtain criterion weights with less data. In addition, the rough SWARA method has been used in some studies. Although most of the steps in the rough SWARA method (except for the first) are similar to the grey SWARA method, the first step of the rough SWARA method makes the method complicated. In particular, combining the values assigned by decision-makers makes the rough SWARA method more complex than the grey SWARA method. For all these reasons, the grey SWARA method was preferred in this study to find the weights of the criteria.
TOPSIS, EDAS, MABAC, WASPAS, and their fuzzy and rough versions were used in most of the studies in the literature. Since the grey CODAS method uses two different distance approaches (Euclidean and Taxicab), it can be said that it achieves more detailed and rigorous results than the other methods. In addition, the grey CODAS can reach a solution with a small and limited dataset. Here, less data means that the smallest and largest values of any criterion are sufficient for the grey CODAS method to start its analysis and to achieve results.
The proposed model can be easily used in circumstances with little, limited, or incomplete data and high uncertainty. The fact that the process steps of the grey SWARA method are fewer and are not complicated helps to reach the criterion weights quickly; and the grey CODAS method helps to maintain rigour and achieve accurate results, thanks to the two-distance approach it uses.
To test the validity of the proposed model for businesses, a short survey was conducted with five managers in the textile company where the model was applied. Two questions were asked in the questionnaire: (1) "What is the performance of the proposed model in reaching correct results? Please rate it on a scale of 1 (very bad) to 10 (very good)"; and (2) "Do you think the proposed model is feasible for businesses? Please rate it on a scale of 1 (definitely no) to 10 (definitely yes)". Table 12 presents the results of the questionnaire. As can be seen from Table 12, the average score given to the first question (on the performance of the proposed model) was 8.2, while the average score given to the second question (on the feasibility of the proposed model) was 8.4. Since both scores were high, the performance of the proposed model according to the managers was very high, and the proposed model was found to be feasible for this firm.
CONCLUSION
Firms need to cooperate with good 3PL providers to gain advantages from the relationships, such as reduced costs, improved logistics performance, the ability to concentrate on their core business activities, and so on. Therefore the selection of a 3PL provider has strategic importance for companies. Solving the 3PL provider selection problem requires multi-criteria decision-making methods, since this selection process contains both qualitative and quantitative criteria that may also include uncertain data. As multi-criteria decision-making methods with crisp numbers may not adequately handle uncertain data, many approaches, such as FST, GT, and RT, have been proposed in the literature. As GT considers the circumstance of fuzziness and generates satisfactory results even with small, limited, and incomplete data, this study used GT to solve the 3PL provider selection problem. In this study, grey SWARA and grey CODAS were applied to a Turkish textile company. According to the results of the proposed model, the 3PL providers were ordered as follows: 3PL1 > 3PL2 > 3PL4 > 3PL3. The results of the grey CODAS were also compared with those from other grey methods (grey COPRAS, grey ARAS, and grey MABAC). The comparison showed that the grey CODAS achieved the same results as the other grey methods, proving that the method's results were correct. In addition, in order to test the validity of the proposed model for businesses, a short survey was conducted with five managers in the textile company where the model was applied. They were asked two questions in a questionnaire, and the results showed that the performance of the proposed model was very high and that the proposed model was feasible for use in this firm.
This study contributes to the literature by proposing a new grey hybrid model using grey SWARA and grey CODAS to identify the best 3PL provider. Future research could use grey CODAS to solve different multi-criteria decision-making problems.
"year": 2021,
"sha1": "58e8ebf10bcd3179ffbf6172964b16493b2b12a2",
"oa_license": "CCBY",
"oa_url": "http://sajie.journals.ac.za/pub/article/download/2126/1070",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "57578b93488da80fbc85c245acef3cff736d3893",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Slag Behavior in Gasifiers. Part II: Constitutive Modeling
The viscosity of slag and the thermal conductivity of ash deposits are two of the most important constitutive parameters that need to be studied. The accurate formulation or representation of the (transport) properties of coal presents a special challenge to modeling efforts in computational fluid dynamics applications. Studies have indicated that slag viscosity must be within a certain range of temperatures for tapping and for the membrane wall to be accessible; for example, between 1,300 °C and 1,500 °C, the viscosity is approximately 25 Pa·s. As the operating temperature decreases, the slag cools and solid crystals begin to form. Since slag behaves as a non-linear fluid, we discuss the constitutive modeling of slag and the important parameters that must be studied. We propose a new constitutive model, where the stress tensor not only has a yield stress part, but also a viscous part with a shear rate dependency of the viscosity, along with temperature and concentration dependency, while allowing for the possibility of the normal stress effects. In Part I, we reviewed, identified, and discussed the key coal ash properties and the operating conditions impacting slag behavior.
Introduction
Deposition of ash in fluidized-bed combustion is primarily caused by the transfer of molten mineral matter from the burning char onto the bed surface. Two possible mechanisms have been proposed for this unwanted phenomenon: (1) partial melting or reactive liquid sintering, and (2) viscous flow sintering [1]. The first situation occurs with partial melt at 500-700 °C, which is normally lower than the standard operating temperatures of many fluidized beds. The second mechanism occurs at temperatures about or higher than 1,000 °C, creating a highly viscous and non-linear fluid. At such high temperatures, the standard methods of measuring viscosity do not always work. Heat transfer at the walls of a combustor depends on many parameters including ash deposition. This depends on the processes or parameters controlling the impact efficiency and the sticking efficiency [2,3]. The main problems with ash deposition are reduced heat transfer in the boiler and corrosion of the tubes. Common ways of dealing with these issues are soot blowing and wall blowing on a routine basis; however, unexpected or uncontrolled depositions can also complicate the situation, and there are always locations inaccessible to the use of such techniques. Wang and Harb [4] list eight important concepts which should be addressed in any formation of ash deposits modeling: (1) understanding the process of ash formation; (2) understanding the fluid dynamics and the equations governing the particle transport; (3) the process of particle impacts and sticking to the surfaces; (4) the location of deposit growth in the combustion chamber; (5) deposit properties; (6) heat transfer mechanisms through the deposit layers; (7) the effect of deposition on temperatures and heat fluxes, etc.; and (8) the structure of the deposit and how this affects the flow patterns in the combustion facility. Erickson et al. [5] proposed three distinct layers, which include: (1) an initial layer deposit formed by small ash particles, (2) a bulk layer formed from the partially molten ash and the non-deformable particles captured by the deposit surface, and (3) a slag layer flowing as a viscous fluid.
One of the main reasons for using an entrained-flow gasifier is that the highest temperature can be achieved in the entrained-flow slagging process [6]. Next to the viscosity of ash or slag, thermal conductivity is the most important physical/material parameter. The ash deposits are porous and they can be approximated as packed beds. For a detailed analysis and a discussion of the issues relevant to heat transfer in ash deposits, we refer the reader to the review article by Zbogar et al. [7]. Jak et al. [8] mention that the major components of coal slags produced in combustion or gasification processes are based on the FeO-Fe₂O₃-CaO-SiO₂-Al₂O₃ chemical composition. (Interestingly, the chemistry of many metallurgical smelting operations also depends on a similar chemical composition, but with seven components instead of five, namely PbO-ZnO-FeO-Fe₂O₃-CaO-SiO₂-Al₂O₃.) In their study, they used a modified quasi-chemical model for the molten slag phase, and the thermodynamic modeling was based on the FACT computer system [9]. As Jak et al. [8] mention: "The slag composition and operating temperature should be such that a small temperature decrease in the reactor or a small variation in slag chemistry does not lead to a large increase in the fraction of solids in the slag." This is an issue related to the control of the slag layer, where it is preferable for the slag to behave like a fluid so that it can be tapped from the reactor [10].
A common definition of "slagging" is the deposition of ash in the radiative section of a boiler (whereas "fouling" refers to the deposition of ash in the convective-pass region) [5].Slag and ash also occur in ignite-fired power plants whose main characteristics are a high water and ash content [11].
Vorres et al. [12] observed that in regions where there is a large amount of iron present in the coal ash, especially in the eastern parts of the United States, in the more oxidizing environment of a boiler, coal slags behave more like a highly polymerized fluid than in the less oxidizing environments such as the slagging gasifier or cyclone combustor [13]. As Jak et al. [14] observed, at a typical section of slag deposit on the walls of a reactor, the temperature at the water-cooled wall of a gasifier can be taken to be ~450 °C, while the temperature at the slag surface is ~1450 °C. Such a sharp gradient causes various types of responses, including the creation of different sub-layers. Studies have indicated that slag viscosity must be within a certain range of temperatures for tapping and for the membrane wall to be accessible; for example, between 1,300 °C and 1,500 °C, the viscosity is approximately 25 Pa•s [15].
In recent years, one of the approaches to improve the production efficiency of blast furnaces has been to include low slag volume by using high pulverized coal injection (PCI) operation. In many cases, as Kang et al. [16] observed, the fluidity of blast furnace slag is controlled by changing the slag chemistry. Kang et al. [16] indicate that not only is the viscosity of slag important in these operations, but also coke/slag interactions, which could significantly impact the liquid permeability of the blast furnace. There is a great deal of similarity in the processes involving steel production and those of slag in coal gasification or combustion processes. Important issues in thermal processing of steel ingots include thermal stress modeling and panel cracking [17][18][19].
The world demand for building supplies requires large amounts of raw materials. A new emerging area is the use of slag from industrial streams, such as blast furnace slag, in building materials [20].
There have also been studies on the freeze-thaw cycle in alkali-activated slag concrete [21]. Slag cement and other viscosity modifying admixtures (VMAs) have been used in recent years in self-consolidating concrete (SCC) with some success [22]. As fossil fuel use increases, the amount of waste materials and the environmental issues dealing with their disposal also increase. One of the promising approaches is the development of coal/waste co-firing technology with fuels such as biomass. Biomass constitutes an estimated 14% of the world energy use, which makes it the fourth largest energy source [23]. Biomass can comprise wood residues, agricultural residues (crops, foods, animals), municipal solid waste, etc. [24]. Additionally, energy crops, including short-rotation woody crops and herbaceous crops such as tall switch grass, are predicted to become the largest source of biomass in the future. In general, biomass fuels are converted to energy via thermal, biological, and physical processes. Bridgwater et al. [25] and Bridgwater [26] indicate that the three primary thermal processes for converting biomass to useful energy are combustion, gasification, and pyrolysis. Coal can be used as the primary fuel for high temperature environments such as the open-cycle magnetohydrodynamic (MHD) generators, where coal particles are burnt in the combustion chamber with temperatures and pressures in the order of 3,000 K and 10 atm, respectively. Under these high-temperature and high-pressure conditions, the dynamics of coal particles change significantly. As mentioned by Sondreal et al. [27], high temperatures are needed for most advanced combustion technologies, including those using co-firing biomass with fossil fuels, to improve the thermodynamic efficiencies, which in turn raises problems associated with high temperature such as corrosion and deposition by coal ash and slag. Slagging combustors have been used in more recent MHD applications [28] in conjunction with an advanced gasifier system, known as the Multistaged Enthalpy Extraction Technology (MEET) [29]. According to these researchers, improvements in coal gasification, cleanup, turbine generators, system integration, etc., improve efficiency in integrated gasification combined cycle (IGCC) plants.
One way of improving the efficiency in IGCC processes is to use a two-stage feeding, which involves a combustion stage and a reduction stage with a gasifier. Chen et al. [30] developed a numerical model for a two-phase type flow, using the K-ε turbulence model for the gas and a particle dispersion model, emphasizing that the movement of particles is the primary parameter determining the local fuel-O₂ stoichiometric ratio, which is important in controlling the overall carbon conversion. The accurate formulation or representations of the (transport) properties of coal (and biomass for co-firing cases) present a special challenge to modeling efforts in computational fluid dynamics (CFD) applications. For example, we do not possess a good knowledge of the specific heats of coals as a function of temperature at high heating rates [31]. As pointed out by Backreedy et al. [32], in most CFD studies related to coal combustion, the effects of gravity are assumed to be negligible. This is not a good assumption, since in many processes, especially co-firing with biomass, 10%-40% of the ash can fall into the bottom ash hopper. In recent years, many CFD codes have been developed. An important issue is the physical models which are embedded in these codes; most of these models are linear constitutive equations. Bjorkvall et al. [33] presented a multi-component model where the oxide activities, the determining or driving force of the chemical reactions, were obtained based on the experimental information corresponding to the binary subsystem. Some of the carbon associated with char particles is unburnt and is transferred to the molten ash slag. Montagnaro and Salatino [34] studied the coal particles in an entrained gasifier in the slagging regime. They developed a simple one-dimensional model of the gasifier by suggesting that the gasifier can be divided into three components, namely, a lean-dispersed phase, a wall ash layer, and a dense-dispersed phase. Ni et al. [6] used an Eulerian-Lagrangian approach to study the flow of the gas and particle phases.
Among the computational codes one can name the particle-size and composition distribution (PSCD) of the ash produced in combustion and other simplified transport models. However, as Ma et al. [35] observed, there does not seem to be a comprehensive and integrative approach in the codes to predict the deposit formation and growth in specific areas and its impact on the total heat transfer in the boiler. Ma et al. [35] present their results based on a computer code named AshPro SM, which can obtain information about slagging and fouling at specific localized points. One of the latest studies, by Koric and Thomas [36], considers two different elastic visco-plastic models for the behavior of steel solidification. To do their study, they used a model developed by Anand [37] in the commercially available code ANSYS and the model by Kozlowski et al. [38] using the commercially available ABAQUS code. In both classes of models the strain rates are given by experimentally obtained/fitted correlations that are functions of temperature, stresses, chemical composition, etc. To develop an accurate heat transfer model in any type of coal combustion or gasification process, the heat transfer and, to some extent, the rheological properties of ash and slag, especially in high-temperature environments, need to be understood and modeled properly. It has been recognized that the viscosity of slag and the thermal conductivity of ash deposits are two of the most important constitutive parameters that must be studied. As Rezaei et al. [39] state, the latter depends on the porosity, chemical composition, temperature of the deposit, etc. They observed that the thermal conductivity of ash increases with increasing temperature, but decreases with increasing porosity. Noticeably, the thermal conductivity of slags was found to be higher than that of the particulate structure in the porosity range 0.2-0.8.
In this paper, we first provide a brief review of the various approaches taken by different researchers in formulating or obtaining a slag viscosity model. In general, these models are based on experiments. Since slag behaves as a non-linear fluid, we discuss the constitutive modeling of slag and the important parameters that must be studied. Based on this brief review, a new constitutive model is proposed, where the stress tensor is not only represented by a yield stress part, but also has a viscous part which is capable of demonstrating shear rate dependency of the viscosity, along with temperature and concentration dependency, while allowing for the possibility of the normal stress effects.

In the next section of this paper, we present the basic governing equations for the flow of slag if it is considered as a non-homogeneous and non-linear single component material. Massoudi and Wang [40] discuss cases where slag may be modeled as a part of a multi-component or a two-component system (such as gasification), where generally a two-fluid (Eulerian-Eulerian) approach is used, or as a part of a (dilute) multi-component (Lagrangian-Eulerian) system.
Governing Equations of Motion and Heat Transfer
If slag is treated as a single component (phase) material, then, in the absence of any electro-magnetic effects, the governing equations of motion are the conservation of mass, linear momentum, convection-diffusion, and energy equations [41].

Conservation of mass:

∂ρ/∂t + div(ρu) = 0

where ρ is the density of the fluid, ∂/∂t is the partial derivative with respect to time, and u is the velocity vector. For an isochoric motion, we have

div u = 0

Conservation of linear momentum:

ρ du/dt = div T + ρb

where b is the body force vector, T is the Cauchy stress tensor, and d/dt is the total time derivative, given by d(·)/dt = ∂(·)/∂t + [grad(·)]u. The balance of moment of momentum reveals that, in the absence of couple stresses, the stress tensor is symmetric.

Conservation of concentration:

∂c/∂t + div(cu) = f

where c is the concentration and f is a constitutive parameter. This equation is also known as the convection-reaction-diffusion equation.

Conservation of energy:

ρ dε/dt = T : L − div q + ρr

where ε is the specific internal energy, L is the gradient of velocity, q is the heat flux vector, and r is the radiant heating. Thermodynamical considerations require the application of the second law of thermodynamics or the entropy inequality. The local form of the entropy inequality is given by (see Liu [42], p. 130):

ρη̇ + div(q/θ) − ρr/θ ≥ 0

where η is the specific entropy and θ the temperature. Even though we do not consider the effects of the Clausius-Duhem inequality in our problem, for a complete thermo-mechanical study of this problem, the Second Law of Thermodynamics must be considered [42][43][44][45]. To achieve "closure" for these equations, in general, we must provide constitutive relations for T, q, f, and r. In certain applications, some of these effects can be ignored. Nevertheless, the constitutive modeling of T and q remains a challenge in the problems or industrial applications related to thermofluid mechanics. In the next section, we discuss the various approaches taken by different researchers in formulating or obtaining a slag viscosity model. In general, these models are based on experiments.
Viscosity of Slags
In one of the earliest studies, Bills [46] reported that the addition of calcium fluoride leads to a lowering of the slag viscosity. In the blast furnace slag system, the impact of MgO on the viscosity of slag is of interest. Kim et al. [47] used a rotating spindle connected to a Brookfield digital viscometer to measure the viscosity of slag containing MgO at high concentrations of Al₂O₃, while Xu et al. [48] used an RTW-08 type testing instrument to measure the viscosity of CaO-Al₂O₃-MgO slag systems. It is interesting that fluxing compounds such as calcium oxide, which are added to coals primarily to reduce the slag viscosity, play a similar role as polymeric additives, which are added to many fluids to reduce the drag.
As the operating temperature decreases, the slag cools, and solid crystals begin to form. In such cases, the slag should be regarded as a non-Newtonian suspension, consisting of liquid silicate and crystals [49], and a better understanding of the rheological properties of the slag, such as its yield stress and shear-thinning, is critical in determining the optimum operating conditions. Groen et al. [50] observed that, in slags where titanium-rich feeds are used, the melting point is lowered by up to 27.5%, and, where calcium-rich feeds are used, there is an increase in the glass fluidity for CaO contents up to 30%, regardless of the amount of titanium present in the feed. They noticed that at the critical viscosity temperature, T_cv, the slag changes from a homogeneous fluid to a mixture composed of a fluid and a crystallizing phase (solid), where there is an increase in the (apparent) viscosity due to the presence of the crystals or the change in the melt composition. Kong et al. [51] observed that adding pulverized limestone with the effective ingredient of CaO improves slag flow properties. They also noticed that, at temperatures below the temperature of critical viscosity, referred to in their analysis as T_cv, the slag behavior is non-Newtonian. Stanmore and Budd [52] point out that ashes formed in the 0.1-10 MPa·s range, which are generally at temperatures above 1000 °C (assumed normally to behave as Newtonian fluids), are in fact very likely to be two- or multi-phase mixtures. They indicated that both the yield stress and the yield (Bingham) viscosity depend on temperature; they used a squeeze film rheometer to measure the viscosity. For a detailed analysis of the programmable Brookfield LVDV-II+ viscometer and how it can be used to measure the viscosities of slag with fluxes, we refer the reader to [53]. For the use of the rotating crucible viscometer to measure the viscosity of blast furnace type slags, we refer the reader to Saito et al. [54]. Pandey et al. [55] presented a novel technique, whereby they attempted to measure the properties of mould powder slags, such as viscosity, liquidus temperature, etc., using ultrasonics.
According to Seetharaman et al. [56], most slags are ionic in nature. Their viscosities are very sensitive to the size of the ions and the electrostatic interactions, and, as more and more basic oxides are added to pure silica, the silicate network breaks down and the viscosity begins to decrease gradually. In addition to discussing the traditional methods of measuring slag viscosity, Seetharaman et al. [56] mention two other related concepts: the surface dilatational viscosity and the two-phase viscosity. They suggest that the surface dilatational viscosity, although very difficult to measure at high temperatures, is an important quantity to consider, especially when foaming, the coalescence of bubbles, or the dispersion of droplets or solid particles is involved. For the two-phase viscosity, they suggest using the two well-known equations for viscosity based on the work of Einstein [57] and the later contributions by Taylor [58] and Batchelor [59].
In general, the feedstock for a given gasification process may include coal, heavy oil, coke, and wastes such as sewage sludge, biomass, or even scrap tires. The gasification of petroleum coke is receiving more attention because of its heating value and low cost, while presenting special challenges due to its composition: vanadium, nickel, and iron. Park and Oh [60] studied the viscosity of Korean anthracite slag, which contains a large portion of vanadium trioxide (V2O3). They observed that, in order to keep the slag flowing, the temperature had to be kept above 1,670 °C, which is 270 °C above the typical operating temperature for slurry-feed gasifiers. Based on their experimental results, they suggested two optimum ranges for gasification, namely glassy slag and crystalline slag.
To a certain extent, magnesia lowers the viscosity of slags while raising the liquidus temperature; however, as Ducret and Rankin [61] observed, above a certain concentration MgO increases the viscosity. They also refer to a study performed by Broadbent et al. [62], in which 13 reputable laboratories measured the viscosity of the same synthetic slag and obtained results that varied about the mean by as much as 50%. As pointed out by Sridhar [63], slags and fluxes are commonly used in many of the iron, steel, and copper making and refining industries, as well as in aluminum melting. In these applications, slags provide a protective layer for the molten metal surface against the atmosphere, while absorbing impurities during casting and creating some lubrication between the mold and the metal strand. The most important parameter in all these cases is the slag viscosity. Sridhar [63] provides an excellent summary of a few important models for the viscosity of slags.
Patterson and Hurst [64] presented data for slag viscosity as a function of temperature for different kinds of Australian coal. When necessary, they used limestone to lower the liquidus temperature and slag viscosity for optimum operation and slag tapping. Tonmukayakul and Nguyen [1] state that traditional methods for viscosity measurements and instrumentation may be satisfactory for coal ash slags obtained from bituminous (black) coal or those with high silica content, but are inaccurate for cases with lower temperatures, where the coal ash may be only partially molten and generally behaves as a non-Newtonian fluid. To overcome this problem, they suggested using a cone and plate rheometer, in which a volume of molten ash contained between a thin gap and a plate at a small cone angle is sheared. Although they do not provide any equations, other than to state that an Arrhenius-type equation satisfactorily represents the effects of temperature for both ash samples, this study is a valuable example of how careful measurements can provide a good deal of information for modeling purposes. For example, they observed that between 1,150 °C and 1,300 °C, the data of shear stress versus shear rate for the melt as a function of temperature indicate that the oxide melt behaves as a Newtonian fluid; however, in the temperature range from 850 °C to 1,200 °C, the coal ash was shown to have a non-linear response, thus exhibiting non-Newtonian characteristics. Specifically, and more importantly, they mention that this ash sample (the Loy Yang coal) should be modeled as a viscoplastic shear-thinning fluid with a yield stress. They conclude that the presence of a high yield stress for the slag, in the range of operating temperatures, confirms previous findings [1] that a high alkali sulphate ash is more likely to agglomerate in fluidized-bed combustion than a silica-rich coal. Song et al. [65] devised a high-temperature rheometer to study the rheological characteristics of slag, specifically the thixotropy and yield stress, at temperatures ranging from 500 °C to 1,550 °C. Although they did not provide any equations, their results, presented in graphical form, indicate that slags behave as thixotropic shear-thinning non-Newtonian fluids with a yield stress. Measuring yield stress is very difficult even under normal conditions, let alone at such high temperatures. Like other researchers, Song et al. [65] extrapolated the straight-line section of the shear-rate vs. shear-stress curve to obtain the value of the yield stress. They also observed that the shear-thinning became more distinct as the temperature decreased. In the next sub-section we look at some of the most well-known viscosity models for slags.
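As an illustration of the extrapolation procedure just described, the short sketch below fits the linear (high-shear-rate) section of a stress versus shear-rate curve and reads off the intercept as an apparent yield stress. The data values are invented for illustration only; they do not come from Song et al. [65].

```python
import numpy as np

# Synthetic (illustrative) shear-rate [1/s] and shear-stress [Pa] data,
# mimicking a Bingham-like response at a fixed temperature; the points
# are assumed to lie in the already-linear part of the curve.
shear_rate = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
shear_stress = np.array([62.0, 71.0, 92.0, 131.0, 212.0])

# Fit the straight-line section tau = tau_y + mu_p * gamma_dot and
# extrapolate to zero shear rate, as described in the text.
mu_p, tau_y = np.polyfit(shear_rate, shear_stress, 1)

print(f"Plastic (Bingham) viscosity ~ {mu_p:.2f} Pa.s")
print(f"Extrapolated yield stress   ~ {tau_y:.1f} Pa")
```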
A Brief Review of Various Viscosity Models
Watt and Fereday [66] presented one of the earliest and most comprehensive studies of the measurement of the viscosity of slags, using British coals and a rotating cylinder-type viscometer. Melted ash was poured into a crucible at a temperature between 1,700 °C and 1,800 °C and maintained at this temperature until the measured viscosity had remained constant for an hour. Using an Arrhenius-type equation, they related the viscosities of slags over the entire temperature range to their compositions, expressed in terms of the percent by weight of SiO2, Al2O3, MgO, CaO, and the iron oxides; their correlation is usually written as
$$\log \eta = \frac{10^{7}\, m}{(T-150)^{2}} + c$$
where η is the viscosity in poise, T is the temperature in degrees C, and m and c are given in terms of the compositions:
$$m = 0.00835\,\mathrm{SiO_2} + 0.00601\,\mathrm{Al_2O_3} - 0.109$$
$$c = 0.0415\,\mathrm{SiO_2} + 0.0192\,\mathrm{Al_2O_3} + 0.0276\,\mathrm{Equiv\,Fe_2O_3} + 0.0160\,\mathrm{CaO} - 3.92$$
with the compositions normalized so that SiO2 + Al2O3 + Equiv Fe2O3 + CaO + MgO = 100 (wt%). They also compared their results with the so-called S2 formula (Hoy et al. [67]), in which the viscosity is given by
$$\log \eta = 4.468\left(\frac{S}{100}\right)^{2} + 1.265\left(\frac{10^{4}}{T}\right) - 7.44$$
where S is the silica ratio and T is the temperature given in degrees K. It was shown that the new correlation provides a better comparison with the data. Kato and Minowa [68] measured the viscosity of slag composed of CaO-SiO2-Al2O3; the effects of various other additions were also considered. They used a balanced platinum sphere viscometer and suggested an expression for the viscosity in terms of K, a constant related to the apparatus; W, the weight necessary to raise the sphere through the slag; t1, the time necessary to raise the sphere 10 mm in the slag; and t2, the time required to raise the sphere 10 mm in air. The temperature dependence of the viscosity was expressed by an Arrhenius-type equation, also known as Andrade's equation:
$$\eta = A_{\eta} \exp\!\left(\frac{E_{\eta}}{RT}\right)$$
where A_η is a frequency factor, E_η is the activation energy, R is the gas constant, and T is the absolute temperature. It was observed that the value of E_η increases with the Al2O3 or SiO2 content, and that the slag is then more viscous, perhaps due to network formation. Overall, they found that the viscosity coefficient and the activation energy of this molten slag increased with increasing amounts of Al2O3 or SiO2, while CaO lowered these values. Furthermore, it was observed that the addition of FeO, MnO, or MgO, which are popular in steel-making, lowered the viscosity but increased the activation energy. Perhaps the most widely used equation for the viscosity of slag is that of Urbain [69], who used basic ideas from statistical and molecular physics to relate the fluidity (the inverse of viscosity) to two probabilities, related to the variables of state and to the structure of the liquid (a measure of polymerization). He deduced that one of these probabilities, P_e, is related to the energy level of the potential; Urbain therefore used a statistical approach suggested by Weymann [70] to obtain an exponential-type equation for the viscosity. For the other probability, P_v, denoted as the 'hole' probability, Urbain suggested that it is proportional to the concentration of the 'holes' at temperature T via another exponential function. By combining these two functions, Urbain suggested a two-parameter expression for the viscosity, of the Weymann form
$$\eta = A\,T \exp\!\left(\frac{10^{3}\,B}{T}\right)$$
where A is a function of various parameters, such as the mass and volume of the structural unit, the energy of the well, and the partial molar entropy, and B is a function of the energy of the well and the partial molar enthalpy. Urbain showed that A and B are related through an expression involving the constants A0, E, and Tc for a given liquid; this relation can be generalized to
$$-\ln A = m\,B + n$$
where m and n are obtained from experimental data. For a group of 54 liquids, Urbain suggested the mean values of these two parameters to be:
m = 0.29, n = 11.57 (3.10)

The above model has been generalized by various authors for different conditions. Riboud and Larrecq [71] suggested that the parameters A and B in the Urbain model should be polynomial functions of the composition. Kondratiev and Jak [72] used a similar argument and suggested that the parameters m and n should also be functions of composition, expressed as polynomials in the molar fractions of Al2O3, CaO, 'FeO', and SiO2, respectively; they obtained the optimized values of the model parameters by fitting the experimental values of A and B. Building on the previous modifications to the Urbain model, Kondratiev and Jak [73] suggested two different continuous functions for B for two different modifiers (FeO and CaO). Generally, the temperature of critical viscosity, T_cv, is defined as the temperature where the behavior of the molten ash, specifically its viscosity, changes from that of a Newtonian fluid to that of a non-Newtonian fluid, specifically a Bingham fluid [74,75]. At this temperature, coal ash undergoes a phase transformation, which can be due to nucleation and crystal growth, phase separation, etc.
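To make the structure of these correlations concrete, the following minimal sketch evaluates the Weymann-type Urbain expression η = A·T·exp(10³B/T) together with the mean relation −ln A = mB + n quoted above. The value of B used below is purely illustrative (B is a fitted, composition-dependent parameter), and the units (poise) follow the convention used for the Watt-Fereday and S2 correlations above.

```python
import math

def urbain_viscosity(T_kelvin, B, m=0.29, n=11.57):
    """Weymann-type Urbain expression: eta = A * T * exp(1000*B / T).

    A is recovered from the mean Urbain relation -ln A = m*B + n
    quoted in the text; the result is interpreted here in poise.
    """
    A = math.exp(-(m * B + n))
    return A * T_kelvin * math.exp(1000.0 * B / T_kelvin)

# Illustrative only: B depends on the slag composition.
for T in (1600.0, 1700.0, 1800.0):
    print(f"T = {T:.0f} K  ->  eta ~ {urbain_viscosity(T, B=28.0):.1f} poise")
```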
Nowok [74] suggested that the viscosity of slag near the T_cv obeys the well-known second-order equation, of the form
$$\eta = \eta_{r}\left(1 + c\,\phi + d\,\phi^{2}\right)$$
in which the viscosity is related to the volume fraction of the solid particles, where c and d are constants related to the shape of the dispersants and the solid-melt interaction, φ is the volume fraction, and η_r is the viscosity of the 'residual slag'. It is suggested that a sudden increase in viscosity is due to a phase transformation, which results from both nucleation and spinodal decomposition. Another interesting entrained flow gasifier is the Prenflo, which operates at temperatures above ash slagging, where the molten ash accumulates at the inside walls of the gasifier due to centrifugal forces; between the liquid layer and the cold walls, a solid slag layer is formed [76].
Reid and Cohen [77] suggested that molten slag behaves as a Newtonian fluid above the temperature of critical viscosity, T_cv, and as a plastic fluid below T_cv, while Johnson [78] assumed that molten ash behaves as a Bingham fluid over the entire range of temperatures considered. Seggiani (1998) [76] used the relative amounts of the basic and acidic constituents in the slag to predict the T_cv and the slag viscosity; he also indicated that the specific heat and the thermal conductivity are important transport quantities that need to be studied. Specifically, he suggested a correlation for T_cv as a function of the acid/base ratio, with the slag components given in weight percentages and the coefficients obtained by linear regression; above the T_cv, the viscosity was given by a silica-ratio correlation, where η is the viscosity in poise, S is the silica ratio, and T is the temperature in degrees K. Hurst et al. [79] used a Haake high-temperature rotational viscometer with molybdenum crucibles and spindles to measure the viscosity of various slags. Similar to other researchers before them, they presented, in contour plots, the viscosity and the T_cv of slags containing 5% and 10% FeO. They used a modified Urbain model in which the coefficients in the polynomial functions depend on the composition, and, to calculate the viscosity, they suggested an equation based on a least-squares fit of the experimental data for the temperature range of 1,400-1,500 °C, where the parameters A and B are given in tabular form and their values are different for each slag. For the treatment of the experimental data, Hurst et al. [79] used a modified Urbain model based on the Weymann equation
$$\eta = A\,T \exp\!\left(\frac{10^{3}\,B}{T}\right) \quad \text{or, equivalently,} \quad \ln\frac{\eta}{T} = \ln A + \frac{10^{3}\,B}{T}$$
where B is expressed in terms of the normalized mole fractions of the components. This study was specifically aimed at the SiO2-Al2O3-CaO-FeO (SACF) system at 5 and 10 wt% FeO. Later, Hurst et al. [80] extended this study to include synthetic slags at 15 wt% FeO; they obtained similar results. Mills and Sridhar [81] also extended the Urbain model by making A and B functions of a correction factor related to the optical basicity, denoted by Λ_corr; they called this the National Physical Laboratory (NPL) model and suggested specific correlations based on experimental data. Iida et al. [82] suggested a viscosity model in which the effects of the slag (network) structure are taken into account through the basicity of the slag. Thus:
$$\eta = A\,\eta_{0} \exp\!\left(\frac{E}{B_{i}^{*}}\right)$$
where A and E are parameters to be fitted from the experimental data, and η_0 is the hypothetical viscosity of the non-networking slag, expressed as a complicated exponential function that depends on many parameters, such as the molar volume at the melting temperature, the gas constant, the mole fraction, etc. The modified 'basicity' index B_i* is calculated from another complicated function that depends on many parameters, especially the mass percentages of the various components present in the slag, for example, CaO-SiO2-Al2O3-MgO, etc. Reddy and Hebbar [83], based on the works of Bockris and Reddy [84] and their own earlier work [85], suggested an equation for the viscosity of slag in which a parameter N_O^0 takes into account the depolymerization and subsequent breakdown of the silicate network structure, and E, the energy needed to break the bond, is given by a polynomial function whose coefficients are functions of temperature, obtained from experimental correlations in polynomial form.
It is known that in IGCC slagging gasifiers, the (coal ash) slags should be 'fluid' enough to be tapped. However, in many cases, the IGCC processes operate at conditions where there are still some solid particles in the liquid phase, and therefore, a complete understanding of the rheological behavior of slag should allow for the case of "partly crystallized slag," which contains some solid particles. Kondratiev and Jak [72,73] developed a viscosity model for the cases of completely liquid and heterogeneous, partly crystallized slags, using a semi-empirical model originally introduced by Urbain et al. [86] for Al2O3-CaO-FeO-SiO2 systems in equilibrium with metallic iron. Here, the viscosity of the liquid slag was described by the Weymann-Frenkel equation:
$$\eta = A\,T \exp\!\left(\frac{10^{3}\,B}{T}\right)$$
where T is the temperature in degrees Kelvin, and A and B are model parameters, depending on the liquid composition, related to each other by
$$-\ln A = m\,B + n$$
where m and n are model parameters. They represent B as a polynomial function of the slag composition, expressed in terms of three groups: G, the glass formers (SiO2); Amph, the amphoteric oxides (Al2O3); and Mod, the modifier oxides (CaO, FeO, MgO), such that B is of the form
$$B = \sum_{i=0}^{3} B_{i}\,X_{G}^{\,i}, \qquad B_{i} = b_{i} + b_{i}'\,\alpha + b_{i}''\,\alpha^{2} \quad (i=0,1,2,3), \qquad \alpha = \frac{X_{Mod}}{X_{Mod} + X_{Amph}} \qquad (3.28)$$
where X_G, X_Mod, and X_Amph are the mole fractions of the glass formers, modifiers, and amphoteric oxides, respectively. Kondratiev and Jak [72,73] also tested a number of viscosity correlations for heterogeneous liquids and found that the Roscoe equation [87] is suitable for the cases studied. This equation is appropriate for a colloidal fluid with a suspension of rigid spheres of diverse sizes:
$$\eta_{S} = \eta\left(1 - V_{S}\right)^{-2.5}$$
where η_S is the viscosity of the slurry and V_S is the volume fraction of solid particles. In this equation, the effects of particle size or shape are ignored, and it is assumed that the slurry behaves as a Newtonian fluid whose viscosity is given by this equation. According to the experimental results of Wright et al. [88], Roscoe's equation can be used for partly crystallized slags. Mudersbach et al. [89] also generalized the Urbain model by adjusting the coefficients m and n of Weymann's temperature relation to depend on the CaO/SiO2 basicity of the slag; in this model, referred to as the FEhS model, m and n are adjusted to include the effects of the CaO/SiO2 basicity.
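A minimal sketch of how the Weymann-Frenkel liquid viscosity and the Roscoe correction can be combined for a partly crystallized slag is given below. The parameter values are illustrative placeholders, not the optimized values of Kondratiev and Jak [72,73].

```python
import math

def liquid_viscosity(T, A, B):
    """Weymann-Frenkel equation for the fully liquid slag:
    eta = A * T * exp(1000*B / T), with T in kelvin."""
    return A * T * math.exp(1000.0 * B / T)

def slurry_viscosity(eta_liquid, solid_fraction):
    """Roscoe correction for a suspension of rigid spheres of
    diverse sizes: eta_s = eta * (1 - V_s)**(-2.5)."""
    if not 0.0 <= solid_fraction < 1.0:
        raise ValueError("solid volume fraction must lie in [0, 1)")
    return eta_liquid * (1.0 - solid_fraction) ** (-2.5)

# Illustrative values only: A and B depend on the slag composition.
eta_l = liquid_viscosity(T=1650.0, A=3.0e-9, B=28.0)
for Vs in (0.0, 0.1, 0.2, 0.3):
    print(f"V_s = {Vs:.1f}  ->  eta_s ~ {slurry_viscosity(eta_l, Vs):.1f}")
```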
Browning et al. [15] provide a brief review of various slag viscosity models, and they observed that the Kalmanovitch-Urbain method is the most accurate for SiO2-Al2O3-CaO-MgO slags. They also suggested an empirical method to obtain the viscosity of the slag that depends on finding a temperature shift, among other parameters, and proposed an explicit expression for the standard viscosity curve; thus, if the temperature shift for a given slag composition is known, that expression can be used to find the viscosity above the T_cv. They also observed that the temperature shift depends on a weighted molar ratio A, given by a polynomial expression in which the coefficients, or the quantities associated with each component, are in terms of the mole fractions. The idea of the temperature shift is based on the observation made by Nicholls and Reid [90], who stated that, at a given viscosity, the gradient of the viscosity-temperature curve is the same as if the coal ash slag were in the Newtonian range.
Inaba and Kimura [91] measured the viscosity of carbon-bearing iron oxide pellets with the acid component of slag by using the oscillating-plate viscometer developed by Iida et al. [82]. Their working equation expresses the viscosity in terms of ρ, the density (kg/m³); η, the viscosity (Pa·s); E_a, the amplitude of vibration in air (m); E, the amplitude of vibration in the liquid (m); M_R, the impedance of the viscometer (kg·m/s); f_a, the sympathetic frequency in air (Hz); and A, the surface area of both sides of the oscillating plate (m²); the exponent 2 appearing in this equation was found not to change between experiments. Nakamoto et al. [92] attempted to develop a viscosity model that is non-linear in the concentration of slags. Specifically, they used the concept of "cutting-off" points, which are adjacent to the non-bridging oxygen and the free oxygen ions, creating a non-linear network structure. They suggested that the viscosity of the molten slag depends on the frequency of occurrence of the "cutting-off" points and suggested the following Arrhenius-type equation:
$$\eta = A \exp\!\left(\frac{E_{V}}{RT}\right)$$
where A is a constant and E_V is the activation energy for viscosity, which was assumed to be inversely proportional to the distance S over which the "cutting-off" point moves when a stress is applied. For multi-component systems, they suggested that the activation energy E_V is given by a complicated equation, as a function of the number of oxides, the fractions of the non-bridging oxygen, and the free oxygen ions. Buhre et al. [93] discuss the method of thermomechanical analysis (TMA) to determine the slag viscosity; in this method, measurements can be made up to 2,400 °C. Their working relationship, which expresses the balance between the pressure applied to the ram and the flow rate, involves κ, the ratio of the radius of the ram to the internal radius of the crucible; R, the internal radius of the crucible (m); u, the velocity of the ram (m/s); L_0, the initial length of the annular region; d, the displacement of the ram; m, the mass applied to the ram (kg); and g, the gravitational acceleration (m/s²). Buhre et al. [93] emphasize that their results are not valid for cases in which the molten ash contains solids and thus behaves as a non-Newtonian fluid. As Seok et al. [94] observed, highly basic BOF slags exhibit much higher viscosities than those measured for normal slags. They suggested using the Einstein-Roscoe equation for a liquid melt containing solid particles:
$$\eta = \eta_{0}\left(1 - a\,f\right)^{-n}$$
where η, η_0, and f are the viscosity of the liquid melt with solid particles, the viscosity without solid particles, and the volume fraction of the particles, respectively. The parameter a is related to the inverse of the maximum packing fraction of the particles, and the constant n is related to the geometrical shape of the particles and is assumed to be 2.5 for spherical particles. Kalicka et al. [95] extended Iida's model to the case of CaF2 and observed that the presence of this component strongly decreases the slag viscosity. The conventional ash flow temperature (AFT) analysis only takes into account the bulk chemical composition of the mineral phase; that is, the compositions in the slag-liquid phase are not distinguished from those in the crystallized phase. The AFT is the main parameter that suggests the suitability of a coal type for combustion or gasification, and it was originally developed to study the clinker-forming characteristics of ash in stoker-fired furnaces [96,97]. Van Dyk et al.
[97] identified two important temperature ranges: (a) between 900 °C and 1,000 °C, where the slag begins to form; and (b) between 1,000 °C and 1,250 °C, where there is a mixture of slag-liquid and crystallized material. They used the modified Urbain model to calculate the viscosities of various slags, with the B parameters given as polynomial functions of the composition, and it was observed that the viscosity decreases as the CaO content increases [96]. Another area of similar behavior or response between molten steel and slag is the viscoelastic response of such materials, where, at high temperatures, time-dependent constitutive relations are needed not only for the stress-strain relationship, but also for the heat flux vector. In an important paper, Kozlowski et al. [38] suggested four types of elastic-viscoplastic constitutive relations, in which the total strain rate is decomposed as
$$\dot{\epsilon} = \dot{\epsilon}_{e} + \dot{\epsilon}_{p} + \dot{\epsilon}_{th}$$
and the standard (rate form of) Hooke's law is used for the stress-strain relationship, i.e.,
$$\dot{\sigma} = E(\theta)\,\dot{\epsilon}_{e} = E(\theta)\left(\dot{\epsilon} - \dot{\epsilon}_{p} - \dot{\epsilon}_{th}\right) \qquad (3.47)$$
where σ̇ is the stress rate, ε̇ is the total strain rate, ε̇_e is the elastic strain rate, ε̇_p is the inelastic (plastic) strain rate, ε̇_th is the thermal strain rate, and θ is the temperature in Kelvin. A significant contribution of this work was the recognition of the difficulty of measuring, and the importance of, the Young's modulus E in this kind of problem. They used the experimental results of Mizukami et al. [98], who provided an empirical fit for E as a function of temperature; this fit is valid for the range between 900 °C and the liquidus. For the four mentioned models, Kozlowski et al. [38] suggested various constitutive relations for ε̇_p as functions of stress, temperature, carbon content, activation energy, and various adjustable parameters, such as a temperature-dependent stress exponent (see also Thomas [99,100] for a review of this subject). In the continuous casting of steel, when mold powder is added to the free surface of the liquid steel, it begins to melt and flow. The re-solidified mold powder, also called slag, forms a layer adjacent to the walls; there is an increase in its viscosity and it begins to act as a solid-like material [101,102]. Once the slag cools, it creates a glassy layer. Heat conduction across the slag layer plays a major role in the operation; it is a function of the thickness of the slag and depends on the conductivity of the various layers and particles embedded in the slag. In the model that they developed, Meng and Thomas [101] suggested that the viscosity of the molten slag depends on the temperature in the following way:
$$\mu = \mu_{0}\left(\frac{T_{0} - T_{fsol}}{T - T_{fsol}}\right)^{n}$$
where T_fsol and n are empirical constants chosen to fit the measured data, μ_0 is a reference viscosity measured at the reference temperature T_0, and T_0 is usually chosen to be 1,300 °C [19]. Meng and Thomas [102] suggested that the shear stress in the liquid slag can be represented by Newton's law of viscosity with μ given by the above equation, while recognizing that in the slag layer the viscosity can be represented by a position-dependent function across the liquid film of thickness l_d. In the continuous casting of steel, it has been observed that the viscosity of the molten slag or flux varies with the composition of the various elements present and with the temperature. For example, most commercial fluxes contain (0-13%) Al2O3, (22-45%) CaO, and (17-56%) SiO2, with small amounts of fluorides (NaF, CaF2), alkalis (Na2O, K2O), and other basic oxides (MgO, BaO) [19]. It is well known that the viscosity of a liquid flux can be represented by an Arrhenius equation; as the powder sinters, its viscosity increases greatly. Riboud and Larrecq [71] suggest an alternative equation of the Weymann type,
$$\eta = A\,T \exp\!\left(\frac{B}{T}\right)$$
where η is the viscosity measured in Pa·s, T is the temperature in degrees Kelvin, and the parameters A and B are given in terms of the mole fractions x of the constituents. For a typical flux [19], B is ~24,000. To study a range of fluxes, Zhao et al. [19] suggested an equation of the type
$$\eta = \eta_{0} \exp\!\left[B\left(\frac{1}{T} - \frac{1}{T_{0}}\right)\right]$$
where T_0 is a reference temperature (1,773 K), η_0 is a reference viscosity (0.05 Pa·s), and B is a parameter representing the temperature dependency of the flux viscosity. From the brief review in this section, it can be seen that the majority of applications have been concerned with the dependency of the slag viscosity on temperature, time, chemical composition, concentration, and shear rate [see Table 1]. In the next section of this paper, we discuss various existing constitutive models for non-linear viscoelastic materials that can be used to model the rheological characteristics of slag.
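The two temperature laws just quoted are easy to compare numerically. The sketch below evaluates both; the values of T_fsol and n for the Meng-Thomas form are chosen purely for illustration and are not taken from [19,101], while the Zhao form uses the reference values quoted in the text.

```python
import math

def meng_thomas(T, mu0=0.05, T0=1573.0, T_fsol=1073.0, n=1.6):
    """Meng-Thomas form: mu = mu0 * ((T0 - T_fsol)/(T - T_fsol))**n.
    Temperatures in kelvin; mu0 is measured at the reference T0
    (1,300 degC = 1573 K). T_fsol and n here are invented values."""
    return mu0 * ((T0 - T_fsol) / (T - T_fsol)) ** n

def zhao_flux(T, eta0=0.05, T0=1773.0, B=24000.0):
    """Zhao et al. form: eta = eta0 * exp(B * (1/T - 1/T0))."""
    return eta0 * math.exp(B * (1.0 / T - 1.0 / T0))

for T in (1473.0, 1573.0, 1673.0, 1773.0):
    print(f"T = {T:.0f} K: Meng-Thomas ~ {meng_thomas(T):.3f} Pa.s, "
          f"Zhao ~ {zhao_flux(T):.3f} Pa.s")
```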
Constitutive Modeling of Slag
As evidenced in Section 3 of this paper, the viscosity of slag is the most important parameter in determining the proper operating conditions for a gasifier. However, viscosity is only one of the important rheological parameters, and since slag, in general, behaves as a non-linear fluid, we must study the constitutive modeling of slag. That is, if we know how the slag viscosity changes as a function of temperature, concentration, shear rate, etc., we still do not know much about the complete rheological characteristics or behavior of the slag, but only its shear viscosity or response. Whether the slag has a yield stress, exhibits normal stress effects, or is able to demonstrate stress relaxation or creep are questions that can only be answered if one provides constitutive models for the complete behavior of slag, and not just its response in shear.
Background
The stress tensor and the heat flux vector are two important constitutive relations needed to study flow and heat transfer in complex fluid-like materials (ignoring the effects of radiation). From an engineering perspective, this oftentimes translates into measuring viscosity and thermal conductivity. As a result, most researchers have attempted to generalize Newton's law of viscosity and Fourier's law of heat conduction to various and more complicated cases by assuming that the shear viscosity and/or thermal conductivity could depend on a host of parameters, such as shear rate, temperature, porosity, etc. A more rigorous approach is to model the stress tensor and the heat flux vector directly.
While a constitutive equation is a postulate or a definition from the mathematical standpoint, physical experience remains the first guide, perhaps reinforced by experimental data. Constitutive relations are also required to satisfy some general principles. Wang and Truesdell ([103], p. 135) list six general principles: (1) Determinism, (2) Local action, (3) Equipresence, (4) Universal dissipation, (5) Material frame-indifference, and (6) Material symmetry. The principle of material frame-indifference (sometimes referred to as Objectivity), which requires that the constitutive equations be invariant under changes of frame, is perhaps the most important of all. This principle is a consequence of the classical physics principle that states that material properties are independent of the observer's frame of reference; it requires that constitutive relations depend only on frame-indifferent forms (or combinations thereof) of the variables pertaining to the given problem. In general, based on available experimental observations, many slags exhibit characteristics similar to those of non-linear materials. The main points of departure from linear behavior are:

1. The ability to shear-thin or shear-thicken;
2. The ability to creep;
3. The ability to relax stresses;
4. The presence of normal stress differences in simple shear flows;
5. The presence of a yield stress.

The non-linear, time-dependent response of complex fluids constitutes an important area of mathematical modeling of non-Newtonian fluids. For many practical engineering cases, where complex fluids such as paints and slurries are used, the shear viscosity can be a function of one or all of the following: time, shear rate, concentration, temperature, pressure, electric field, magnetic field, etc. Thus, in general,
$$\mu = \mu\left(t, \pi, \theta, \phi, p, \mathbf{E}, \mathbf{B}\right)$$
where t is the time, π is some measure of the shear rate, θ is the temperature, φ is the concentration, p is the pressure, E is the electric field, and B is the magnetic field. Of course, in certain materials or under certain conditions, the dependence on one or more of these can be dropped. It is not clear whether a slag layer would exhibit all of the five possible non-linear responses listed above. However, based on the available results, it is clear that a few of these non-linear effects have been observed for slags.
This section of the paper is not intended to be a comprehensive survey of all existing models; rather, the aim is to be representative and to discuss a few sample cases.
Yield Stress
Although it has been recognized that slag behaves as a yield-stress type fluid, in most of the papers mentioned in Section 3 of this paper it was taken for granted that the slag behaves as a Bingham-type fluid. In this sub-section we briefly discuss this model and point out that, for a slag layer, another important parameter should be incorporated. This parameter is the temperature of critical viscosity, T_cv, defined as the temperature where the behavior of the molten ash changes from that of a Newtonian fluid to that of a non-Newtonian fluid, specifically a Bingham fluid [74,75].
Bingham ([104], p. 215) proposed a constitutive relation for a visco-plastic material in a simple shear flow, where the relationship between the shear stress (or the stress T in general) and the rate of shear (or the symmetric part of the velocity gradient, D) is given by the following (see Prager [105], p. 137):
$$2\mu\,D_{ij} = \begin{cases} F\,T'_{ij}, & F > 0 \\ 0, & F \leq 0 \end{cases}$$
where T'_ij denotes the stress deviator and F, called the yield function, is given by
$$F = 1 - \frac{K}{\sqrt{II_{T'}}}$$
where II_{T'} is the second invariant of the stress deviator, which in simple shear flows is equal to the square of the shearing stress, and K is called the yield stress (a constant). For one-dimensional flow, these relationships reduce to the ones proposed by Bingham [104]:
$$\tau = K + \mu\,\dot{\gamma} \quad \text{for } |\tau| > K$$
and
$$\dot{\gamma} = 0 \quad \text{for } |\tau| \leq K$$
The constitutive relation above is known as the Bingham model (see also Ziegler [44], p. 170). Casson [106] considered that the suspended particles flocculate into rod-like structures, which are broken into primary particles as the shear rate increases. He then developed the following widely used empirical model for the tension in rods under flow:
$$\sqrt{\tau} = \sqrt{\tau_{0}} + \sqrt{\mu_{\infty}\,\dot{\gamma}}$$
In this equation, τ_0 is the yield stress, μ_∞ is the suspension viscosity at infinite shear rate, and γ̇ is the shear rate. One of the inherent limitations of such empirical models is that they are, in general, one-dimensional, and it is not easy or straightforward to generalize them and obtain the appropriate three-dimensional forms, which are often necessary to solve general three-dimensional problems. Nevertheless, this equation has been successful for a range of parameters and a class of fluids. Oldroyd [107] proposed a tensorial generalization of the Bingham solid [108]. In a coal-water paste (CWP) atomization system with a high concentration of coal and a wide particle size distribution, such as those used in pressurized fluidized-bed combustion (PFBC) facilities, the CWP is often modeled with a Herschel-Bulkley model (see Tanner [109], p. 146), in which the effect of yield (using the Bingham plastic type model) and the shear-rate dependence (using the power-law model) are combined; in one-dimensional form,
$$\tau = \tau_{y} + K\,\dot{\gamma}^{\,n} \quad \text{for } \dot{\gamma} > \dot{\gamma}_{c}$$
where γ̇_c is a critical shear rate. There are obviously other yield criteria that can be used. For example, by including the gradient of the volume fraction as one of the important parameters in proposing a constitutive equation for the stress tensor, a theory can be devised for the flow of granular materials (see Massoudi and Mehrabadi [110]). In this theory a critical yield condition, called the Mohr-Coulomb condition, emerges naturally, as does the transition between the frictional flow regime, characterized by the absence of deformation, and the viscous flow regime, characterized by deformation. More work is needed in this area before an appropriate yield stress can be formulated for slags.
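To show how the three one-dimensional laws above differ in practice, the following sketch evaluates the Bingham, Casson, and Herschel-Bulkley stresses over a range of shear rates (yielded branch only). All parameter values are illustrative placeholders, not fitted slag properties.

```python
import numpy as np

def bingham(gd, tau_y=50.0, mu=2.0):
    """tau = tau_y + mu * gamma_dot (yielded branch only)."""
    return tau_y + mu * gd

def casson(gd, tau_y=50.0, mu_inf=2.0):
    """sqrt(tau) = sqrt(tau_y) + sqrt(mu_inf * gamma_dot)."""
    return (np.sqrt(tau_y) + np.sqrt(mu_inf * gd)) ** 2

def herschel_bulkley(gd, tau_y=50.0, K=6.0, n=0.7):
    """tau = tau_y + K * gamma_dot**n; n < 1 gives shear-thinning."""
    return tau_y + K * gd ** n

gamma_dot = np.array([1.0, 10.0, 100.0])
for name, model in [("Bingham", bingham), ("Casson", casson),
                    ("Herschel-Bulkley", herschel_bulkley)]:
    print(name, np.round(model(gamma_dot), 1))
```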
Effects of Concentration, Shear Rate, and Pressure
We will not discuss the effects of temperature on the viscosity as this was the main emphasis in Section 3 of this paper.
Concentration Effect
As we saw in Section 3, there were a few examples where the viscosity was assumed to depend on the concentration, for example, Equations (3.13), (3.29), and (3.40), all of which were polynomial functions of some type. However, as we discuss in this sub-section, it is possible to have other types of dependency.
The problem of the theoretical determination of the viscosity of a dilute suspension, consisting of an incompressible Newtonian fluid and rigid spherical particles, was studied by Einstein [57], who derived the classical formula for the effective viscosity of the suspension:
$$\mu_{eff} = \mu_{f}\left(1 + 2.5\,\phi\right)$$
where μ_f is the viscosity of the base fluid and φ is the particle volume fraction, which was assumed to be very small compared with unity. Later, Taylor [58] showed that if the spheres are small drops of another fluid, then the viscosity of the suspension is given by
$$\mu_{eff} = \mu_{f}\left(1 + 2.5\,\phi\,\frac{\mu_{d} + \tfrac{2}{5}\mu_{f}}{\mu_{d} + \mu_{f}}\right)$$
where μ_d is the viscosity of the liquid drops and μ_f is the viscosity of the base fluid. Batchelor and Green [111] considered the effect due to the Brownian motion of particles for an isotropic suspension of rigid spherical particles. They derived a formula for the effective viscosity including terms of order φ², of the form
$$\mu_{eff} = \mu_{f}\left(1 + 2.5\,\phi + k\,\phi^{2}\right)$$
where φ < 1 and k is a constant (of the order of 6-8, depending on the flow conditions and the importance of Brownian motion). The non-linear dependence of the viscosity on the particle volume fraction observed here indicates significant interparticle interaction. To account for such interactions, Brinkman [112], Roscoe [87], Krieger and Dougherty [113], Nielsen [114], and Mooney [115] used the differential effective medium approach for hard-sphere suspensions to extend Einstein's formula to moderate particle volume fractions of about 0.4. Some of these models, which involve quantities such as a crowding factor (Mooney) and a maximum packing volume fraction (Krieger and Dougherty), are listed in Table 2.

Table 2. Additional correlations for the viscosity as a function of concentration.
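A short numerical comparison of several of these concentration laws is sketched below. The Krieger-Dougherty and Mooney forms are written with an assumed maximum packing fraction and crowding factor, respectively; all specific parameter values are illustrative assumptions rather than values from the cited works.

```python
import numpy as np

MU_F = 1.0        # base fluid viscosity (arbitrary units)
PHI_MAX = 0.64    # assumed maximum packing fraction (random close packing)

def einstein(phi):
    return MU_F * (1.0 + 2.5 * phi)

def batchelor(phi, k=6.2):
    # Second-order correction; k ~ 6.2 is often quoted for Brownian
    # suspensions, but the value depends on the flow conditions.
    return MU_F * (1.0 + 2.5 * phi + k * phi**2)

def krieger_dougherty(phi, phi_max=PHI_MAX):
    # Diverges as phi -> phi_max, with intrinsic viscosity 2.5.
    return MU_F * (1.0 - phi / phi_max) ** (-2.5 * phi_max)

def mooney(phi, crowding=1.4):
    # Exponential form involving a crowding factor.
    return MU_F * np.exp(2.5 * phi / (1.0 - crowding * phi))

for phi in (0.05, 0.2, 0.4):
    print(f"phi={phi:.2f}: Einstein {einstein(phi):.2f}, "
          f"Batchelor {batchelor(phi):.2f}, "
          f"K-D {krieger_dougherty(phi):.2f}, Mooney {mooney(phi):.2f}")
```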
Normal Stress Effects and Shear-Rate Dependent Viscosity
Surprisingly, in all the papers reviewed in Section 3, there was no discussion of the possibility that slag can exhibit normal stress effects. Perhaps the simplest constitutive model that can capture the normal stress effects (which can lead to phenomena such as "die-swell" and "rod-climbing," manifestations of the stresses that develop orthogonal to planes of shear) is the second grade fluid, or the Rivlin-Ericksen fluid of grade two [45,118]. This model has been used and studied extensively [119] and is a special case of the differential-type fluids. For a second grade fluid, the Cauchy stress tensor is given by
$$\mathbf{T} = -p\mathbf{1} + \mu \mathbf{A}_{1} + \alpha_{1}\mathbf{A}_{2} + \alpha_{2}\mathbf{A}_{1}^{2}$$
where p is the indeterminate part of the stress due to the constraint of incompressibility, μ is the coefficient of viscosity, and α_1 and α_2 are material moduli, which are commonly referred to as the normal stress coefficients. The kinematical tensors A_1 and A_2 are defined through
$$\mathbf{A}_{1} = \mathbf{L} + \mathbf{L}^{T}, \qquad \mathbf{A}_{2} = \frac{d\mathbf{A}_{1}}{dt} + \mathbf{A}_{1}\mathbf{L} + \mathbf{L}^{T}\mathbf{A}_{1}$$
The thermodynamics and stability of second grade fluids have been studied in detail by Dunn and Fosdick [119]. They show that if the fluid is to be thermodynamically consistent, in the sense that all motions of the fluid meet the Clausius-Duhem inequality and that the specific Helmholtz free energy of the fluid is a minimum in equilibrium, then
$$\mu \geq 0, \qquad \alpha_{1} \geq 0, \qquad \alpha_{1} + \alpha_{2} = 0$$
For such fluids, orthogonal rheometers are needed to measure the normal stress coefficients, and these tests/experiments are in addition to the traditional shear viscometers, which can only provide information about the shear viscosity. In an effort to obtain a model that exhibits both normal stress effects and shear-thinning/thickening, Man [120] modified the constitutive equation for a second grade fluid by allowing the viscosity coefficient to depend upon the rate of deformation. The two proposed models are [121]:
$$\mathbf{T} = -p\mathbf{1} + \mu\,\Pi^{m/2}\,\mathbf{A}_{1} + \alpha_{1}\mathbf{A}_{2} + \alpha_{2}\mathbf{A}_{1}^{2} \qquad (4.22a)$$
$$\mathbf{T} = -p\mathbf{1} + \Pi^{m/2}\left(\mu \mathbf{A}_{1} + \alpha_{1}\mathbf{A}_{2} + \alpha_{2}\mathbf{A}_{1}^{2}\right) \qquad (4.22b)$$
where
$$\Pi = \frac{1}{2}\,\mathrm{tr}\,\mathbf{A}_{1}^{2}$$
is the second invariant of the symmetric part of the velocity gradient, and m is a material parameter. When m < 0, the fluid is shear-thinning, and if m > 0, the fluid is shear-thickening; in Equation (4.22a) only the shear viscosity depends on the shear rate, whereas in Equation (4.22b) the viscosity and the normal stress coefficients are all dependent upon the shear rate. A subclass of the models given by Equation (4.22) is the generalized power-law model, which can be obtained by setting α_1 = α_2 = 0:
$$\mathbf{T} = -p\mathbf{1} + \mu\,\Pi^{m/2}\,\mathbf{A}_{1}$$
The power-law models are deficient in many ways: they cannot predict the normal stress differences or yield stresses, and they cannot capture memory or history effects. At the same time, the power-law models have been used for a variety of applications where the shear viscosity is not constant [41,122,123]. Gupta and Massoudi [124] generalized the model given by Equation (4.22a) by allowing the shear viscosity to be a function of temperature:
$$\mathbf{T} = -p\mathbf{1} + \mu(\theta)\,\Pi^{m/2}\,\mathbf{A}_{1} + \alpha_{1}\mathbf{A}_{2} + \alpha_{2}\mathbf{A}_{1}^{2}$$
where μ(θ) was assumed to obey the Reynolds viscosity model [125]:
$$\mu(\theta) = \mu_{0}\,e^{-M\theta}$$
This is a general model applicable to many chemical processes.
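A minimal numerical sketch of the model in Equation (4.22a) in steady simple shear is given below; the stress tensor is assembled directly from the tensor definitions above, so the shear stress and the first normal stress difference follow by construction. The parameter values are invented for illustration and are not fitted to any slag.

```python
import numpy as np

# Steady simple shear u = (gamma_dot * y, 0, 0) for the modified
# second grade model T = -p*I + mu*Pi**(m/2)*A1 + a1*A2 + a2*A1@A1,
# with Pi = 0.5*tr(A1@A1). Parameter values are illustrative only;
# a1 + a2 = 0 respects the Dunn-Fosdick conditions quoted above.
mu, m, a1, a2 = 5.0, -0.3, 0.8, -0.8

def stress(gamma_dot, p=0.0):
    L = np.zeros((3, 3)); L[0, 1] = gamma_dot     # velocity gradient
    A1 = L + L.T                                  # first Rivlin-Ericksen tensor
    A2 = A1 @ L + L.T @ A1                        # steady flow: dA1/dt = 0
    Pi = 0.5 * np.trace(A1 @ A1)                  # second invariant
    return -p * np.eye(3) + mu * Pi**(m / 2) * A1 + a1 * A2 + a2 * (A1 @ A1)

for gd in (0.5, 1.0, 2.0):
    T = stress(gd)
    print(f"gamma_dot={gd}: shear stress T12={T[0, 1]:.3f}, "
          f"N1 = T11 - T22 = {T[0, 0] - T[1, 1]:.3f}")
```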
Pressure Effects
For many fluids (described by rate-type models, such as the Maxwell or Oldroyd models), the rate of the stress tensor T is given implicitly as a function of T and D. Another class of implicit constitutive theories has been proposed by Rajagopal and co-workers, in which an implicit relation of the form f(T, D) = 0 is assumed and the viscosity is given by any of the relationships (4.29a,b,c), where α and q are constants; when n ∈ (−1, 2), the fluid is shear-thinning, and when n > 2, it is shear-thickening. They also noted that when r = 2, all of the possible correlations for the viscosity given by Equation (4.28) reduce to the classical Navier-Stokes viscosity, and when the viscosity depends on the second invariant of D, the model becomes a special case of the general Stokesian fluid. Within this framework, the simplest form of a pressure-dependent viscosity fluid is one in which the shear viscosity is an explicit (for example, exponential) function of the pressure. Hron et al. [127] suggested a related form in which the viscosity also depends on the crystal fraction, where φ_max is the maximum crystal fraction at which flow can still occur, θ_0 and η_0 are reference values, and γ is a constant.
In the final section of this paper, we present a few remarks about modeling issues related to slag viscosity.We also propose a simple yet general constitutive relation, which we think would be appropriate for slags.
Concluding Remarks
In this paper we have attempted to provide a review of the various possible ways of formulating the viscosity of slag as reported in the literature. In our opinion, a major shortcoming of these studies is that the emphasis has been put on the meaning or the measurement of the shear viscosity. In reality, however, the issue is: what constitutive equation would be a reasonable, or a more appropriate, representation of the stress tensor for the slag? With this perspective, the emphasis shifts to constitutive modeling of the slag as a whole, and not just the measurement of slag viscosity; as a result, with experiments as the basis of the development, we need to formulate models for slag in specific applications. Interestingly, materials that apparently have nothing in common can be described rheologically in a similar manner. For example, many studies indicate that for lava (Griffiths [128]) or coal slurries [129] the viscosity is a function of temperature, volume fraction, and the size and shape of the particles. In many applications, for example melt fractions [130] and basaltic lavas, the apparent viscosity is assumed to follow an Einstein-Roscoe relation [87,131] of the form
$$\eta = \eta_{0}\left(1 - \frac{\phi}{\phi_{max}}\right)^{-2.5} \qquad (5.1)$$
Based on the experimental evidence and the review presented in this paper, it is clear that a general constitutive relation for the slag should, at the very least, be able to predict (or include) some type of yield stress and a viscous stress with shear-thinning capabilities, i.e., where the coefficient of viscosity depends not only on the shear rate, but also on the concentration, temperature, etc. Thus, we propose:
$$\mathbf{T} = \mathbf{T}_{y} + \mathbf{T}_{v} \qquad (5.2)$$
where, in general, the yield stress can be obtained from experiments [see Section 4.2], and for the viscous stresses we suggest a model where the material exhibits normal stress effects and the shear viscosity depends on the volume fraction, temperature, chemical composition, and shear rate [132] [see Section 3 and Sections 4.3.1 and 4.3.2]:
$$\mathbf{T}_{v} = -p\mathbf{1} + \mu(\theta, \phi, X)\,\Pi^{m/2}\,\mathbf{A}_{1} + \alpha_{1}\mathbf{A}_{2} + \alpha_{2}\mathbf{A}_{1}^{2} \qquad (5.3)$$
where the specific form of the viscosity μ(θ, φ, X) is given by an appropriate equation based on the available experimental data; α_1 and α_2 are the material moduli, commonly referred to as the normal stress coefficients; Π is the second invariant of the symmetric part of the velocity gradient; and m is a material parameter. When m < 0, the fluid is shear-thinning, and if m > 0, the fluid is shear-thickening. This is a general frame-invariant model, suitable for flows of non-linear fluids, with the viscosity being a function of temperature, concentration, and shear rate, and with the material exhibiting normal stress differences. Obviously, the methodology that we have presented here is not very rigorous. Of course, in the studies reviewed here, the concept of normal stress was not discussed, and it is not known whether some, or none, of the various kinds of slag would exhibit normal stress effects. The measurement of these material parameters presents new opportunities for the slag community. Finally, among the challenging problems in understanding the flow and behavior of slag, one can name the particle-slag interaction. Whether a carbon-containing particle of a given size will settle at the boundary of the slag layer or become entrapped inside the slag depends on many factors, such as the angle of contact, the slag viscosity, the forces acting on the particle, etc. [133,134].
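In steady simple shear, the composite model of Equations (5.2)-(5.3) reduces to a scalar relation that is straightforward to evaluate. The sketch below does this with an invented viscosity function μ(θ, φ) combining a Reynolds-type temperature decay with a Roscoe-type crystal-fraction correction; all parameters are illustrative, purely to show the structure of the proposal.

```python
import math

def mu(theta, phi, mu0=10.0, M=2.0e-3, phi_max=0.6):
    """Illustrative viscosity function mu(theta, phi): a Reynolds-type
    temperature decay times a Roscoe-type crystal-fraction correction."""
    return mu0 * math.exp(-M * theta) * (1.0 - phi / phi_max) ** (-2.5)

def shear_stress(gamma_dot, theta, phi, tau_y=30.0, m=-0.2):
    """1-D form of T = T_y + T_v: tau = tau_y + mu(theta, phi)*|gd|**m*gd.
    In simple shear the second invariant Pi equals gamma_dot**2, so the
    factor Pi**(m/2) becomes |gamma_dot|**m (shear-thinning for m < 0)."""
    return tau_y + mu(theta, phi) * abs(gamma_dot) ** m * gamma_dot

for gd in (1.0, 10.0, 100.0):
    print(f"gamma_dot={gd:6.1f}: tau ~ {shear_stress(gd, theta=1600.0, phi=0.2):.1f}")
```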
Another important parameter in understanding and controlling the slag layer is the surface tension of the molten slag, which is especially significant in the refining and continuous-casting processes of steelmaking [135]. The slag layer can also potentially cause degradation of the refractory liner: chemical dissolution, erosion, chemical spalling, and structural spalling are among the important mechanisms involved in the slag-refractory interaction in a gasifier [136]. The removal of nonmetallic inclusions by direct absorption into the slag layer is an important problem in the steelmaking industry; among the important forces influencing this process are the drag, capillary, and fluid added-mass forces. Shannon et al. [133,134] suggested using the Brenner drag model, which, like many other drag models, involves the viscosity of the slag. When there is contact between an oxide particle and a molten oxide slag phase, at least two important phenomena occur [137]: (1) the particle responds and may or may not settle into the slag, depending on the interaction forces, especially the drag, capillary, and added-mass forces; and (2) the particle may dissolve into the slag, depending on the slag and oxide properties, the particle concentration, etc.
Table 1. Examples of the viscosity of slag as a function of temperature, time, chemical composition, concentration, and shear rate.
"year": 2013,
"sha1": "edf57b9c3f5bae67cf41fdf7e7458c8090cf405a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/6/2/807/pdf?version=1426591291",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "edf57b9c3f5bae67cf41fdf7e7458c8090cf405a",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Enrollment into cancer prevention and early detection clinical trials represents a unique challenge compared with a diagnostic or treatment trial because it involves subjects without a diagnosis of cancer. This paper examines some of the barriers to participation in prevention and early detection trials and provides detailed information about two ongoing prevention and two ongoing early detection clinical trials open to enrollment as well as brief summaries of seven additional trials now open to enrollment.
INTRODUCTION
In the United States, cancer is a major source of disease burden, and the cost from morbidity and mortality is enormous in human and economic terms. Cancer is the second leading cause of death and the leading cause of premature mortality as measured by overall person-years of life lost. The National Cancer Institute (NCI) estimates that in 1999, Americans lost 8.3 million years of life as the result of premature death from cancer. 1 Some progress is being made, which is evident in recent trends in cancer mortality. Data from 1999 show the death rates for all cancers combined continued to decline in the United States. However, the number of cancer cases can be expected to increase because of the growth and aging of the population in coming decades. 2 Even in the presence of declining mortality rates, the importance of greater progress to reduce the burden of cancer is among the major scientific and public health challenges.
Years of scientific research have demonstrated that cancers occur not as sudden catastrophic events, but rather as the result of a complex and long-evolving process. The process of carcinogenesis can take decades to complete, providing time and opportunity to intervene to stop or to reverse its progress either before the clinical appearance of cancer or at its earliest stages. Due to the continuing burden, public health interventions have focused on prevention and early detection to reduce cancer incidence and mortality.
Logically, reducing cancer incidence through primary prevention is the most desirable goal, and early work by Doll and Peto 3 suggested major reductions in cancer incidence are possible through improved nutrition, physical activity, and avoidance of tobacco products, the latter being the only strategy with demonstrated efficacy and broad applicability. In addition to lifestyle factors, evidence suggests chemopreventive interventions and vaccines hold the greatest promise for reducing cancer incidence.
For some major cancers, early interventions after the detection of occult invasive disease, and in some instances precursor lesions, have been shown to offer significant advantages in reduced morbidity and mortality compared with treatment of disease after signs and symptoms are present. 4,5 In the case of breast and colorectal cancer screening, randomized clinical trials have clearly demonstrated the benefits of screening, and there is sufficient inferential evidence to support screening for cervical cancer. Evidence also exists to support offering men the opportunity to make an informed decision about testing for early prostate cancer detection after a discussion about the potential but uncertain benefits and possible harms. 6-11 While primary prevention, if possible, would be the overall preferred strategy, prevention and early detection at this time must be thought of as complementary strategies, i.e., reducing the burden of disease through prevention where possible and through earlier therapeutic interventions when prevention has failed or when a preventive strategy does not yet exist.
Early detection can also identify people who are at high risk for developing cancers because of the presence of precursor lesions, such as those with colorectal polyps or cervical squamous intraepithelial lesions. Such high-risk populations are frequently participants in prevention clinical trials. For some cancers, there are absolute genetic risks, such as Familial Adenomatous Polyposis, where people develop thousands of colorectal polyps. Clinical trials in these genetic risk populations also lead to identifying interventions that may have value in the general population.
PREVENTION AND EARLY DETECTION TRIALS:
THE PIVOTAL ROLE OF THE PRIMARY CARE PROVIDER

Among the various public health and clinical strategies that might be applied to reduce the burden of cancer in average- and high-risk populations, it is well accepted that recommendations to the public and health care professionals should be based on sound science. While the foundation for these studies normally arises from smaller investigations or observations from studies focused on different endpoints, ultimately the soundness of a potential prevention or early detection strategy depends on a demonstration of efficacy in a large prospective study capable of supporting a definitive statistical analysis of the results. These investigations tend to cost tens of millions of dollars and require many years of combined intervention and follow-up, and years of investigator and sponsoring agency time to measure the outcomes of interest. Usually these are studies that will never be repeated. This level of investment can be taken as a clear indication of the potential benefit to the public health if the intervention is effective. However, at the most fundamental level, the potential for the success of these studies depends on rapid enrollment of participants and their adherence with the study protocol.
Enrollment into prevention and early detection trials represents a unique challenge compared with a diagnostic or treatment trial because it involves subjects without a diagnosis of cancer. Once a person has been diagnosed with cancer, interest in beginning treatment and discussion of treatment options are paramount, and thus consideration of participation in a trial is more clearly relevant. However, for most individuals who are asymptomatic for cancer and in good health, unless a physician suggests they participate in a prevention study, they are likely to remain unaware of this option.
What are some of the barriers to participation in prevention and early detection trials? Research has shown physicians and individuals often lack awareness that studies are taking place in their communities. Physicians may fear losing control of their patient's care, and likewise, individuals usually are unwilling to go against physicians' advice or direction. If their doctor does not recommend a trial as an option for cancer prevention or early detection, they are unlikely to participate. Some physicians and many individuals are fearful, distrusting, or suspicious of research, and for many people the idea of being "randomized" to something other than the standard of care is simply unacceptable. For others, the possibility of being randomized to the current standard of care rather than to the new intervention being tested is unacceptable.
Individuals also face personal or practical obstacles, such as financial costs, time and travel, family considerations, and concerns about even temporarily leaving the care of their physician to participate in a trial. Likewise, today's busy clinicians may not want to take the time or may not feel they have the time to identify and explain these study opportunities. Generalists have to be both knowledgeable and enthusiastic about seeking participation to make the effort needed for a successful referral to prevention and early detection studies.
Trial investigators must depend a great deal on the supporting role of the referring physician. Successful study accrual relies on collaboration among study investigators, the referring clinician, and the study participants. All participants receive, at least, the best standard treatment available. If a participant is taking a promising new agent or being screened with a new technology, they may be among the first to demonstrate benefit from the innovation. Many participants enjoy a sense of pride from their contribution to the advancement of medical knowledge that could improve care for others.
Below we describe two ongoing prevention and two ongoing early detection clinical trials open to enrollment. These studies address three of the leading cancers affecting men and women, specifically cancers of the lung, prostate, and breast. These three cancers represent the most common cancers affecting men and women, and each of these malignancies also is a major cause of death. This year, the American Cancer Society (ACS) estimates 171,900 men and women will be diagnosed with lung cancer, and 157,200 will die of this disease. 12 Not all lung cancer is caused by smoking, but the attributable risk from smoking is far greater than the combined attributable risks of all other risk factors known thus far. Even those who quit smoking remain at increased risk for lung cancer for a number of years; about half of all diagnosed lung cancers occur in former smokers. Although there is an obvious strategy to prevent this disease, i.e., not starting smoking or quitting, at present there are an estimated 90 million current and former smokers in the United States. Because thousands continue to take up smoking every day, at least for the next several decades a substantial number of individuals will be at high risk of developing lung cancer. If screening for lung cancer with newer technology proves to be efficacious, it could prevent tens of thousands of lung cancer deaths each year.
Prostate cancer is the second leading cause of death from cancer in men. In the United States, the ACS estimates that nearly 221,000 men will be diagnosed with prostate cancer in 2003, and approximately 29,000 will die from this disease. 12 At this time, it is uncertain whether testing for early detection reduces prostate cancer mortality, although inferential data are suggestive of a benefit. 9,10 However, even if screening does reduce prostate cancer mortality, as long as there are significant risks of side effects from therapy and an inability to distinguish incidental from life-threatening prostate cancers, it will be difficult to articulate with certainty an explicit disease control strategy. 13 Clearly, new disease control strategies, including an intervention for the prevention of prostate cancer, are a high priority.
Breast cancer is the most common cancer in women and the second leading cause of death from cancer in women. This year the ACS estimates 211,300 women will be diagnosed with breast cancer, and approximately 40,000 women will die from this disease. 12 Screening for breast cancer has been shown to be effective in reducing mortality, but the current technology is imperfect and costly. Here again, primary prevention is preferable. However, improvements in early detection technology, which include new imaging technologies that address the fundamental limitations of conventional screen-film radiography as well as new technologies that are not based on imaging, are important areas for continuing investigation.
The Study of Tamoxifen and Raloxifene
The Study of Tamoxifen and Raloxifene (STAR) is a clinical trial to determine whether the osteoporosis drug raloxifene (Evista ® ) has equivalent breast cancer risk reduction benefits, with a reduced risk of side effects, when compared with tamoxifen (Nolvadex ® ) in postmenopausal women who are at an increased risk of developing the disease. Tamoxifen is the only drug approved by the US Food and Drug Administration (FDA) to reduce the incidence of breast cancer in women at increased risk, based on the 1998 results of the Breast Cancer Prevention Trial (BCPT), a study of more than 13,000 pre- and postmenopausal high-risk women aged 35 and older who took either tamoxifen or a placebo for up to five years. Women who took tamoxifen had a 50% reduction in the incidence of breast cancer compared with women on placebo. STAR is the follow-up trial to the BCPT. 14 STAR is funded primarily by the National Cancer Institute and coordinated by researchers with the National Surgical Adjuvant Breast and Bowel Project (NSABP). NSABP investigators are conducting STAR at more than 500 centers across the United States, Puerto Rico, and Canada (Figure 1).
About the Study Drugs
Both tamoxifen and raloxifene are Selective Estrogen Receptor Modulators (SERMs), agents that have estrogen-like activity in some tissues, but block the action of estrogen in others. 15 For both tamoxifen and raloxifene, the antiestrogenic effects on breast cancer risk reduction apply predominantly to ER-positive breast cancer. 14,16 Tamoxifen has been used for more than 30 years to treat patients with breast cancer, 17 and works, in part, by its interference with the activity of estrogen. In October 1998, the FDA approved tamoxifen to reduce the incidence of breast cancer in women at increased risk for the disease based on the results of the BCPT. 14 In December 1997, the FDA approved raloxifene for the prevention of osteoporosis in postmenopausal women. Raloxifene is being tested because large clinical trials of its effectiveness against osteoporosis have suggested that women at a low risk for breast cancer taking the drug developed fewer breast cancers than women taking a placebo. 16 Like most medications, tamoxifen and raloxifene cause adverse effects in some women. The less serious effects experienced most often by women taking either drug are hot flashes and vaginal symptoms, including discharge, dryness, or itching. Treatments that may minimize or eliminate most of these side effects are available to the participants. Both drugs also have rare but serious side effects that can be life threatening.
Serious Side Effects of Tamoxifen
Tamoxifen increases the risk of two types of cancer that can develop in women with an intact uterus: endometrial cancer and uterine sarcoma. In the BCPT, women who took tamoxifen had more than twice the chance of developing endometrial cancer compared with women who took a placebo. 14 The risk of endometrial cancer in women taking tamoxifen was comparable to the risk in postmenopausal women taking single-agent estrogen replacement therapy, about two cases of endometrial cancer per 1,000 women taking tamoxifen each year. In BCPT, all endometrial cancers that occurred in women taking tamoxifen were Stage I and were cured, suggesting that heightened surveillance for abnormal vaginal bleeding is important and may be an effective strategy for women who choose to take tamoxifen.
In 2001, the ACS issued new guidelines related to early detection of endometrial cancer in average- and high-risk women. 18 The ACS recommended women at elevated risk for endometrial cancer from tamoxifen therapy should: (1) be informed about the risks and symptoms of endometrial cancer and strongly encouraged to report any unexpected bleeding or spotting to their physicians; and (2) be informed about the potential benefits, risks, and limitations of testing for early endometrial cancer detection in order to ensure informed decisions. 18 Information collected by the FDA indicates that women who have used tamoxifen for breast cancer treatment or prevention also have an increased risk of developing uterine sarcoma. 19
[Figure 1. Study of Tamoxifen and Raloxifene (STAR) Sites]
Research to date indicates that uterine sarcomas are more likely to be diagnosed at later stages than endometrial cancers, and may therefore be harder to control and more life threatening than endometrial cancer.
Women taking tamoxifen in BCPT had three times the chance of developing a pulmonary embolism as women who took the placebo, and were also more likely to have a deep vein thrombosis. Women taking tamoxifen also appeared to have an increased chance of stroke.
Serious Side Effects of Raloxifene
Information about raloxifene is limited compared with the data available on tamoxifen because of the shorter time it has been studied (about eight years) and the smaller number of women who have been studied. Studies of raloxifene have generally involved women who received the drug to determine its effect on osteoporosis, and the duration of both therapy and follow-up has been short. Women taking raloxifene in clinical trials have about three times the chance of developing a deep vein thrombosis or pulmonary embolism as women on a placebo. 21 In osteoporosis studies of raloxifene, the drug did not increase the risk of endometrial cancer. An important part of STAR will be to assess the long-term safety of raloxifene versus tamoxifen in women at increased risk for breast cancer.
Design and Eligibility
Women at increased risk for developing breast cancer, who have gone through menopause and are at least 35 years old, can participate in STAR (Table 1). STAR is limited to postmenopausal women because the drug raloxifene has yet to be adequately tested for long-term safety in premenopausal women. All women must have an increased risk of breast cancer equivalent to or greater than that of an average 60- to 64-year-old woman. At that age, about 17 of every 1,000 women are expected to develop breast cancer within five years.
Increased risk of breast cancer is determined in one of two ways. For most women, risk is estimated from personal and family history using a risk assessment questionnaire, as described below. Women diagnosed as having lobular carcinoma in situ (LCIS) are eligible based on that diagnosis alone, as long as any treatment for the condition was limited to local excision; a history of mastectomy, radiation, or systemic therapy would disqualify a woman with LCIS from the study.

Study of Tamoxifen and Raloxifene (STAR)

Objective: STAR will determine whether the osteoporosis drug raloxifene has equivalent breast cancer risk-reduction benefits, with a reduced risk of side effects, as compared with tamoxifen.

Sponsor: National Cancer Institute.

Total Sites: More than 500 in the United States, Puerto Rico, and Canada.

Intervention: Either 20 mg tamoxifen or 60 mg raloxifene daily for five years.

Specimen Bank: Serum, white blood cells, tissue specimens from cancers diagnosed.

Ancillary Studies: Assessment of the intervention agents on cognitive aging; quality of life.

Results/Findings: Final 2007 (projected).
Each potential participant will complete a one-page questionnaire (risk assessment form) that is forwarded to the NSABP by the local STAR clinical staff. The NSABP will use computer software to generate an individualized risk profile based on the information provided and will return the profile to the local STAR site so that it can be given to the potential participant. The profile estimates an individual woman's chance of developing breast cancer over the next five years and in her lifetime, and will also present her with the potential risks and benefits of the study drugs (described above). The woman can then use this information to help her decide whether she is interested in participating in STAR.
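Although the NSABP's actual software and its risk-model coefficients are not reproduced here, a minimal sketch of how such an individualized profile could be computed is shown below. The baseline rate comes from the text above, while the factor names and relative-risk values are hypothetical placeholders, not the NSABP algorithm.

```python
# Illustrative sketch of a Gail-style risk-profile computation. The
# relative-risk values below are hypothetical placeholders, NOT the
# coefficients used by the NSABP software.

BASELINE_5YR_RISK = 0.017  # ~17 per 1,000 women aged 60-64 (from the text)

# Hypothetical relative risks for a few questionnaire items.
RELATIVE_RISKS = {
    "first_degree_relatives_with_bc": {0: 1.0, 1: 1.8, 2: 2.9},
    "prior_breast_biopsies": {0: 1.0, 1: 1.7, 2: 2.5},
    "age_at_first_live_birth_over_30": {False: 1.0, True: 1.3},
}

def five_year_risk(profile: dict) -> float:
    """Multiply a baseline rate by the relative risk of each factor."""
    risk = BASELINE_5YR_RISK
    for factor, value in profile.items():
        risk *= RELATIVE_RISKS[factor][value]
    return risk

def star_eligible(profile: dict) -> bool:
    # Eligibility requires a risk at least that of an average 60-64-year-old.
    return five_year_risk(profile) >= BASELINE_5YR_RISK

example = {
    "first_degree_relatives_with_bc": 1,
    "prior_breast_biopsies": 0,
    "age_at_first_live_birth_over_30": True,
}
print(f"estimated 5-year risk: {five_year_risk(example):.3f}")  # ~0.040
print("meets STAR threshold:", star_eligible(example))
```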
Health professionals at the STAR site will discuss existing health conditions that affect eligibility with each potential participant. For example, women with a history of cancer (except basal or squamous cell skin cancer), blood clots, stroke, or certain types of arrhythmias cannot participate; nor can those whose hypertension or diabetes is not controlled. Women taking menopausal hormone therapy cannot take part in the trial unless they stop taking this medication for three months. Women who have taken tamoxifen or raloxifene for no more than three months are eligible for the study, but they must also stop the medication for three months before joining STAR.
STAR is a double-blind randomized study. Participants in STAR will be randomized to receive either tamoxifen or raloxifene, and neither the participant nor her physician will know which she is receiving. All women in the study will take two pills a day for five years; half will take active tamoxifen and a raloxifene placebo, the other half will take active raloxifene and a tamoxifen placebo. All women will receive one of the active drugs; no one in STAR will receive only the placebo. The dosages of the active drugs are 20 mg of tamoxifen and 60 mg raloxifene.
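As an illustration of this double-dummy scheme, the following sketch (simple 1:1 randomization, not the trial's actual allocation procedure) shows how each participant ends up with exactly one active drug and one matching placebo:

```python
# Minimal sketch of double-dummy allocation: every participant takes two
# pills a day, so neither she nor her physician can tell which active drug
# she received. A real trial would use stratified/blocked randomization.
import random

def assign(participant_id: str, rng: random.Random) -> dict:
    if rng.random() < 0.5:
        pills = {"tamoxifen": "active 20 mg", "raloxifene": "placebo"}
    else:
        pills = {"tamoxifen": "placebo", "raloxifene": "active 60 mg"}
    return {"id": participant_id, "pills": pills}

rng = random.Random(42)  # seeded only so this example is reproducible
for pid in ["P001", "P002", "P003"]:
    print(assign(pid, rng))
```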
The original sample size for STAR was estimated to be 22,000 women based on the minimum eligibility for study entry. The women who have joined STAR have, on average, had a greatly increased breast cancer risk, so the study sample size was reduced to 19,000. As of February 14, 2003, 15,390 women were enrolled in STAR or about 81 percent of the total.
Exams and Costs for Participants
Participants are required to have blood tests, a mammogram, a breast exam, and a gynecologic exam before they are accepted into the study. These tests will be repeated at intervals during the trial. Physicians' fees and the costs of these medical tests will be charged to the participant in the same fashion as if she were not part of the trial; however, the costs for these tests generally are covered by insurance. The maker of tamoxifen, AstraZeneca, in Wilmington, DE, and the maker of raloxifene, Eli Lilly and Company, in Indianapolis, IN, are providing the active pills and the look-alike placebos without charge. Every effort is made to contain the costs specifically associated with participation in this trial, and financial assistance is available for some women.
For More Information
To locate the nearest STAR center in the United States (including Puerto Rico) by phone, call the NCI's Cancer Information Service at 1-800-4-CANCER (1-800-422-6237). The number for callers with TTY equipment is 1-800-332-8615. In Canada, participating centers can be located by calling the Canadian Cancer Society's Cancer Information Service at 1-888-939-3333. Information about STAR can also be found on NCI's Web site at http://cancer.gov/star. Women who are interested in having their breast cancer risk assessed online and locating a STAR center near them can also use http://breastcancerprevention.com (a Web site of NSABP).
The Selenium and Vitamin E Cancer Prevention Trial
SELECT, the Selenium and Vitamin E Cancer Prevention Trial, is a clinical trial to determine if seven to twelve years of daily supplements of selenium and/or vitamin E reduce the risk of developing prostate cancer. SELECT is a necessary first step to substantiate earlier secondary-endpoint findings from large prospective trials suggesting that selenium and vitamin E reduce the risk of prostate cancer. Other objectives of the trial are to assess the impact of selenium and vitamin E on the incidence of lung and colon cancer, as well as on survival among individuals diagnosed with these diseases. SELECT will also study the molecular genetics of cancer risk and associations between diet and cancer, and will assess age-related memory loss and quality of life.
The trial is funded by the NCI and coordinated by the Southwest Oncology Group (SWOG), an international network of research institutions.
About the Study Supplements
Selenium is a nonmetallic trace element present in water and food, especially seafood, meats, and Brazil nuts. It is an antioxidant believed to protect against the action of free radicals and prevent oxidative damage, limit the effect of a number of cell mutagens, and alter the metabolism of other carcinogens. 24,25 In SELECT, the dose of selenium (provided as l-selenomethionine) is 200 micrograms (µg) daily. An earlier prospective trial by Clark et al., 26 designed to evaluate whether selenium could reduce the risk of nonmelanoma skin cancer, suggested that selenium might be an effective chemopreventive agent to reduce the risk of prostate cancer. The trial did not show a benefit from selenium in preventing skin cancer, but did show a 60% reduction in the number of new cases of prostate cancer in men taking selenium compared with men who did not.
Vitamin E, a naturally occurring nutrient found in a wide range of foods, especially vegetables, vegetable oils, nuts, and egg yolks, is also an antioxidant believed to help control oxidative damage that can lead to cancer. The amount of vitamin E (provided as dl-alpha-tocopherol acetate) is 400 mg, which is equivalent to 400 International Units per day. In a 1998 study of 29,000 male smokers in Finland, those who took vitamin E to prevent lung cancer had 32 percent fewer new cases of prostate cancer than those who took the placebo. 27
Design and Eligibility
SELECT is a double-blind, randomized trial of 32,400 men divided into four intervention groups: (1) selenium and vitamin E; (2) selenium and a placebo; (3) vitamin E and a placebo; and (4) two placebos. Enrollment began in August 2001 and will last for approximately five years, unless rapid accrual shortens this period. The study will continue for seven years after enrollment is complete; men will participate for seven to twelve years, depending on when they join the study. More than 400 sites in the United States, Puerto Rico, and Canada are taking part in the study (Figure 2). As of January 31, 2003, 18,881 men were enrolled, or about 58 percent of the total needed for the trial.
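These four arms form a 2x2 factorial design, crossing selenium (active or placebo) with vitamin E (active or placebo). A minimal sketch of such an allocation, assuming simple equal-probability randomization rather than the trial's actual procedure:

```python
# Sketch of a 2x2 factorial allocation: crossing the two supplements
# yields the four SELECT arms listed above.
from itertools import product
import random

ARMS = list(product(["selenium", "selenium placebo"],
                    ["vitamin E", "vitamin E placebo"]))
# -> (Se, E), (Se, E placebo), (Se placebo, E), (Se placebo, E placebo)

def randomize(n: int, seed: int = 0) -> dict:
    rng = random.Random(seed)
    counts = {arm: 0 for arm in ARMS}
    for _ in range(n):
        counts[rng.choice(ARMS)] += 1
    return counts

for arm, count in randomize(32_400).items():
    print(arm, count)  # each arm receives roughly 8,100 men
```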
To participate in SELECT, African-American men must be age 50 or older, and men of other races and ethnicities must be 55 or older (Table 2); African-American men are eligible at a younger age because they develop prostate cancer at a younger age. Men who are taking selenium, vitamin E, or a multivitamin must stop using these supplements and use only what is provided by SELECT. Participants are provided supplements free of charge, including a multivitamin that does not contain selenium or vitamin E. Past use of selenium and vitamin E supplements does not disqualify men from joining SELECT.
Participants must be generally in good health and have no history of prostate or any other cancer except nonmelanoma skin cancer. Men with benign prostatic hyperplasia (BPH) can join SELECT; more than half of the men in the United States between the ages of 60 and 70, and as many as 90 percent of men between the ages of 70 and 90, have symptoms of BPH.

[Figure 2. Selenium and Vitamin E Cancer Prevention Trial (SELECT) Sites]
Potential participants must have a digital rectal examination (DRE) that shows no signs of prostate cancer and a prostate-specific antigen (PSA) test level of less than or equal to 4.0 ng/ml. While enrolled in SELECT, DREs and PSA tests are suggested, but not required, on an annual basis.
Participant Costs
The supplements, placebos, and trial multivitamins are provided at no charge to SELECT participants. Physician, medical examination, and general clinic costs are charged to the participant in the same way as if he were not part of the trial. However, the costs of these tests may be covered by a participant's health insurance. Financial assistance may be available for some men. Men with questions about insurance coverage or reimbursement should check with their local SELECT site.
Selenium and Vitamin E Cancer Prevention Trial (SELECT)

Total Sites: More than 400 in the United States, Puerto Rico, and Canada.

Specimen Bank: Serum, white blood cells, toenail clippings, prostate biopsy tissues.

Ancillary Studies: Assessment of age-related memory loss, effect of intervention on cataract and macular degeneration, molecular genetics of cancer risk, associations between diet and cancer, quality of life.

Results/Findings: Final 2012 (projected).

The National Lung Screening Trial

The National Lung Screening Trial (NLST) is a cancer screening trial to compare two ways of testing for early lung cancer in current and former heavy smokers: spiral computed tomography (CT) and single-view chest x-ray. Both chest x-rays and spiral CT scans have been used in clinical practice to detect lung cancers in asymptomatic individuals as well as to evaluate signs and symptoms associated with lung cancer. So far, however, the scientific evidence is inconclusive as to whether screening for lung cancer with either method will reduce lung cancer mortality. 28 NLST aims to determine which test will be better at reducing deaths from this disease and will examine the relative risks and benefits of both tests. Conducted by NCI, the trial will involve approximately 30 centers across the United States (Figure 3). Ten of the centers are those currently conducting the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial (PLCO), while the remaining 20 centers are members of the American College of Radiology Imaging Network (ACRIN). ACRIN is an NCI-funded cooperative group that manages clinical research trials of imaging technologies as they relate to cancer (for more information about ACRIN, visit http://www.acrin.org).
Design and Eligibility
Launched in September of 2002, the NLST is a randomized controlled trial and will enroll 50,000 participants over two years. The trial has sufficient power to detect a 20 percent or greater difference in lung cancer mortality between screening with spiral CT and with chest x-ray. Participants are randomized to receive either a spiral CT scan or a chest x-ray on their initial visit, and two follow-up screens on an annual basis. Depending upon outcomes, researchers may contact participants by phone or mail at annual or semi-annual intervals until 2009 to monitor their health status and smoking behaviors. Some of the ACRIN sites are also collecting specimens of blood, urine, sputum, and resected tissue as part of a specimen biorepository that can be used to validate future potential biological and genetic markers of lung cancer. In addition, ACRIN sites will be addressing secondary endpoints related to costs, radiation exposures from screening, and the impact of screening on quality of life and smoking behaviors. All participants and their health care providers will be informed of the results of screening tests. Those with positive screening tests will be given suggested options for diagnostic follow-up based upon the current state of knowledge; these recommendations will be revised as new data become available during the trial.
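As a rough illustration of why a trial of this size is needed, the standard two-proportion sample-size approximation below reproduces the right order of magnitude. The assumed control-arm death rate is a hypothetical placeholder; the actual NLST power calculations account for accrual, follow-up, and compliance.

```python
# Back-of-the-envelope sample size for detecting a 20% relative reduction
# in lung cancer mortality with a two-proportion z-test.
from statistics import NormalDist

def n_per_arm(p_control: float, rel_reduction: float,
              alpha: float = 0.05, power: float = 0.90) -> int:
    p_treat = p_control * (1 - rel_reduction)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    n = (z_a + z_b) ** 2 * var / (p_control - p_treat) ** 2
    return int(n) + 1

# Assume ~2% of heavy smokers in the x-ray arm die of lung cancer over
# follow-up (illustrative only).
print(n_per_arm(0.02, 0.20))
# -> roughly 23,000 per arm, consistent with the 50,000-participant design
```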
Current or former heavy smokers between the ages of 55 and 74 may be eligible for this study (Table 3). Former smokers must have quit smoking within the past 15 years. Potential participants should be in general good health, must not have a history of lung cancer, and must not, in the past five years, have been treated for or have had evidence of any cancer, other than nonmelanoma skin cancer or most in situ cancers. Potential participants cannot be enrolled in any other cancer screening or cancer prevention trial other than smoking cessation studies and must not have had a CT scan of the chest or lungs within the prior 18 months. Participants who are current smokers and want to quit smoking will be referred to smoking cessation resources.
National Lung Screening Trial (NLST)

Objective: The primary goal of NLST is to determine if lung cancer mortality is reduced in long-term or heavy current and former smokers by screening with low-dose helical computed tomography (spiral CT) compared with chest x-ray, and to determine the risk/benefit ratio of these tests.

Sponsor: National Cancer Institute.

Coordinating Group: National

Intervention: Spiral CT or chest x-ray once per year for three years.

Specimen Bank: Division of Cancer Prevention: under discussion (tumor tissue, blood collection). ACRIN: blood, urine, sputum, tumor tissue.

Ancillary Studies: In planning stages.
About the Tests
The sensitivity of chest x-rays depends on the size and location of the lesion, technical factors that influence image quality, and the skill of the interpreting physician. Lesions on the order of 20 mm in diameter are commonly visible on chest radiographs, but may be missed for several reasons, including: (1) obscuration of the lesion by other chest anatomy (e.g., ribs), so-called "structured noise;" (2) obscuration due to low contrast relative to the surrounding tissue, such as with lesions situated in the subdiaphragmatic or retrocardiac regions; (3) perceptual errors; and (4) satisfaction of search, meaning premature completion of image review based upon finding other, less significant pathology. 29 To date, trials of lung cancer screening with chest x-ray have provided insufficient evidence to conclude that chest x-ray is an effective screening tool, for which reason its potential is currently being evaluated in a large US trial. 30 Spiral CT, also called helical CT (and, in screening protocols, low-dose CT), likewise uses x-ray technology, but has much greater spatial and contrast resolution than conventional chest radiography, particularly for lesions within the lung. With spiral CT, the patient is moved continuously through a doughnut-shaped scanner. The resulting x-ray data form a single volume of the whole chest, from which individual slices are reconstructed by a computer. Imaging times range from 10 to 25 seconds, typically the duration of a single, large breath-hold, and the volume data set enables both 2-D and 3-D reconstructions. Although low-dose CT involves roughly 10 to 15 times the radiation exposure of chest radiography, it is a more sensitive test for small pulmonary nodules, hence its potential to detect lung malignancies at an earlier stage. 29
What the Current Data Tell Us About Lung Cancer Screening
The data currently being reported from the single-arm observational trials using chest x-ray or spiral CT underscore some of the challenges of both technologies. The first report was issued from the Early Lung Cancer Action Project (ELCAP) and compared the use of spiral CT and chest x-ray in a screened cohort of 1,000 individuals at risk of lung cancer. In this study, all subjects received both imaging tests. The authors reported that low-dose CT significantly outperformed conventional chest x-ray in the detection of small pulmonary nodules. 31 Low-dose CT identified 233 participants with noncalcified nodules. Of these, there were 27 lung cancers; 23 cancers were Stage I at diagnosis. In contrast, conventional chest x-ray identified 68 noncalcified nodules, of which seven were malignant and four were Stage I. The diagnostic work-up of positive CT screens was based on initial nodule size or change in size on repeat imaging. Based on the average tumor size in the ELCAP study, the authors project a five-year survival of 80 percent for cases diagnosed using low-dose CT.
The Mayo Lung Trial has also published its results in an initial cohort of 1,520 participants who have undergone baseline and annual incidence screening with spiral CT. 32 They have observed that 51 percent or more of baseline screens and up to 14 percent of annual incidence screens are positive for lung nodules, of which over 98 percent represent benign nodules. Among 40 lung cancers diagnosed thus far, 21 (60 percent) have been Stage I at diagnosis. Thus far, eight participants have undergone surgery for the resection of benign disease.
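A useful way to read these counts is as the malignancy yield of a positive screen, computed directly from the numbers reported above:

```python
# Malignancy yield (positive predictive value) implied by the published
# ELCAP counts: cancers found per screen-detected noncalcified nodule.
def yield_pct(cancers: int, positives: int) -> float:
    return 100 * cancers / positives

print(f"ELCAP low-dose CT : {yield_pct(27, 233):.1f}% of nodules malignant")
print(f"ELCAP chest x-ray : {yield_pct(7, 68):.1f}% of nodules malignant")
# Mayo: >98% of screen-detected nodules benign, i.e. a yield below 2%.
```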
These studies highlight some of the issues surrounding the evaluation of a screening test. First, although more sensitive than chest x-ray, spiral CT is nonspecific. The high false positive rate imposes the potential for psychological, economic, and medical hardship on individuals who must undergo additional diagnostic tests based upon the finding of a nonspecific lung nodule on CT, and the challenge of identifying best practices for minimizing adverse effects should not be neglected. Second, although lung cancers detected by CT are earlier stage than those detected with chest x-ray, it is not yet certain whether this apparent stage shift will result in a reduction in lung cancer mortality. We do not know, with measurable confidence, whether the detection of small lung cancers is tantamount to the detection of "early" curable cancers.
More than half of the hospitals in the United States have at least one spiral CT unit. While these machines are routinely used for diagnostic purposes in patients with unexplained signs or symptoms, recently some hospitals have begun performing spiral CT scans as a screening test in asymptomatic smokers and former smokers. 33,34 A recent decision analysis by Mahadevia and colleagues was critical of this practice, since it promotes spiral CT in a manner that implies known effectiveness, in terms of both efficacy and cost effectiveness, before definitive data are available. 34 Although their modeling exercise showed that screening for lung cancer met conventional criteria for cost effectiveness when very modest mortality reductions were associated with very favorable estimates of screening program performance, their analysis also showed that under circumstances of less favorable performance, screening for lung cancer was not cost effective.
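The core quantity in such a decision analysis is the incremental cost-effectiveness ratio (ICER). The sketch below uses entirely hypothetical costs and effects, purely to show the computation:

```python
# Sketch of the incremental cost-effectiveness ratio (ICER) that underlies
# analyses like Mahadevia et al.'s. All numbers are hypothetical.
def icer(cost_new: float, cost_old: float,
         effect_new: float, effect_old: float) -> float:
    """Incremental dollars per unit of incremental effect (e.g., life-year)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical per-person lifetime costs and life-years with/without screening.
print(f"${icer(3_500, 1_000, 12.10, 12.05):,.0f} per life-year gained")
# -> $50,000 per life-year, a commonly cited (if debated) threshold.
```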
Participant Costs
Participants in the trial will be screened free of charge with either spiral CT or chest x-ray. However, the costs of any diagnostic evaluations or treatments for lung cancer or other medical conditions will be borne by the participants or their health insurance according to the provisions of their plan policies, in the same way as if they were not part of the trial. If a participant has no insurance, aid may be available at the local level to pay for diagnostic evaluation or treatment, and participants may be assisted in finding county or other regional medical resources for under-or uninsured individuals.
For More Information
To locate the nearest NLST study site or screening center, call the NCI's Cancer Information Service toll free Monday through Friday, 9:00 AM to 4:30 PM, at 1-800-4-CANCER (1-800-422-6237) for information about the trial in English or Spanish. The number for callers with TTY equipment is 1-800-332-8615. Information about NLST can also be found at http://cancer.gov/nlst (NCI's Web site).
The Digital Mammographic Imaging Screening Trial
The Digital Mammographic Imaging Screening Trial (DMIST) is a three-year multicenter study of digital mammography. Although screen-film mammography is still the "gold standard" for early detection of breast cancer, digital x-ray mammography may offer significant improvements. 35 The primary aim of DMIST is the comparison of the diagnostic accuracy of digital mammography versus screen-film mammography for breast cancer screening. Secondary aims will address issues associated with the cost effectiveness of digital mammography and the impact of false positives on health-related quality of life. The study is sponsored by the NCI and is being conducted by ACRIN. DMIST was launched on October 29, 2001.
Design and Eligibility
DMIST is designed to compare the diagnostic accuracy of digital mammography versus screenfilm mammography in women with no breast symptoms. The study will compare these two technologies with respect to their relative success in finding asymptomatic breast cancers.
Women who ordinarily undergo screening mammography at one of the 35 participating centers are eligible for participation in the trial. All eligible women are approached regarding their interest in participating at the time they are scheduled to be present for their regular screening mammogram. At this point, women are entered into the study and followed annually for two to three years (Table 4). Women who are not eligible to participate include those with a history of breast cancer treated with lumpectomy, a focal dominant mass, a bloody or clear nipple discharge, or breast implants; women who are pregnant or have reason to believe they might be pregnant; and women who cannot, for any reason, undergo follow-up mammography at the participating institution or provide mammograms from another institution for review for one year after study entry.
After informed consent, all women will undergo both digital and screen-film mammography. Both studies are performed on the same day by the same breast-imaging technologist. As per local center protocols, two separate readers interpret the exams and the woman is informed of her results and, if applicable, the need for further work-up. For the majority of women, both tests are negative and only routine follow-up mammography is recommended. Each examination will be interpreted independently and work-up of any detected abnormalities will proceed based on the findings of either study. For those with abnormal mammograms, either digital or screen-film, further work-up takes place as recommended by the local radiologists. This includes extra mammographic views, sonograms, magnetic resonance imaging, and biopsy, as indicated. Women with benign biopsies are followed as recommended by the local radiologists. Truth regarding breast cancer status for all patients will be determined either through the results of breast biopsy, if that occurs, or as a result of one-year follow-up. An expert breast pathologist will reinterpret all pathologic specimens. A total of 49,500 asymptomatic women presenting for screening mammography will be enrolled into the trial at 35 centers in the United States and Canada (Figure 4).

Digital Mammographic Imaging Screening Trial (DMIST)

Primary Objective: The primary goal of DMIST is to determine if digital mammography is as good as or better than standard screen-film (x-ray) mammography on the basis of sensitivity, specificity, and positive and negative predictive values.

Sponsor: National Cancer Institute.

Intervention: All women will undergo both digital and screen-film mammography.

Specimen Bank: None.

Ancillary Studies: In planning stages.
About the Tests
Conventional screen-film mammography has been studied through many large randomized screening trials and has been shown to be associated with a reduction in breast cancer mortality of approximately 30 percent. 6,36,37 These studies have been the basis for strong consensus that regular mammography beginning in the forties is an important part of women's preventive health care. 38,39 Modern mammography is performed on dedicated mammography systems that are optimized to provide a high-quality, low-dose image of the soft tissues of the breast. The examination normally consists of two standard views of each breast, i.e., one mediolateral oblique (MLO) view and one craniocaudal (CC) view. However, this technology is not perfectly sensitive or specific, in part because dense breast tissue can obscure lesions and because some histologic types have a subtle radiographic appearance. Further, a major limitation of screen-film mammography is film itself, since it serves as the medium of image acquisition, storage, and display. In addition, once a screen-film mammogram is obtained, the features of the image cannot be significantly manipulated. Contrast loss due to film underexposure, especially for dense glandular tissues, cannot be regained through manipulation of the image display. Improvements in visualization of lesion features require the acquisition of additional images, possibly with magnification or focal compression. This often requires a return visit by the patient, anxiety during the waiting period, and additional radiation exposure. Digital detectors offer the prospect for improved detection because they provide better efficiency of x-ray photon absorption, a linear response over a wide range of incident radiation intensities, and low system noise. 40 Digital acquisition systems directly quantify x-ray photons and decouple the process of x-ray photon detection from image display. Digital images can be processed by a computer and either printed to film or displayed on a monitor.

[Figure 4. Digital Mammographic Imaging Screening Trial (DMIST) Sites]
[Table 5. Additional prevention and early detection trials open to enrollment]

COX-2 Inhibition and Breast Cancer Risk Biomarkers
Investigator: Carole Fabian, MD, University of Kansas Medical Center.
Purpose: To determine how COX-2 inhibitors change potential biological markers of breast cancer risk.
Eligibility: Premenopausal women aged 21 or older; high risk of developing breast cancer or history of unilateral breast cancer without evidence of recurrence for two years; stable hormonal milieu for the duration of the study and the prior six months.

Targretin for Women at Genetically Increased Risk of Breast Cancer
Purpose: To determine if targretin, a synthetic retinoid, can modify markers related to breast cancer progression in women who are at genetically increased risk of breast cancer.
Eligibility: Healthy women age 18 and older; individuals with genetic mutations in BRCA1 or BRCA2, or with a high likelihood of carrying such a mutation; no pregnancy or planned pregnancy (contraception or abstinence required); at least one breast never involved with cancer nor irradiated.
Study Size: 100 women.
Location: Texas.

Celecoxib With and Without Eflornithine in Familial Adenomatous Polyposis
Purpose: To compare the efficacy of celecoxib, a cyclooxygenase-2 inhibitor used to treat pain, with and without eflornithine, on the change in polyps in adults with familial adenomatous polyposis.
Eligibility: No anticipated surgery to remove the colon within eight months; no concurrent use of NSAIDs or aspirin at any dose.
Size of Trial: 120 participants.

Selenium for Chemoprevention of Prostate Cancer Among Men With High-Grade Prostatic Intraepithelial Neoplasia (HGPIN)
Investigators: Jim Marshall, PhD, Southwest Oncology Group; David Jarrad, MD, Eastern Cooperative Oncology Group; William Robert Lee, MD, Cancer and Leukemia Group B.
Purpose: To compare the effects of selenium to placebo on the three-year incidence rate of prostate cancer in men with high-grade prostatic intraepithelial neoplasia, a condition that increases risk of prostate cancer.
Eligibility: Diagnosis of high-grade prostatic intraepithelial neoplasia with no evidence of cancer.
Locations: Multiple sites across the US.
Since the steps of image acquisition and display are separated, each can be optimized. Because lesion conspicuity can be affected by contrast manipulations, 41 it is believed digital mammography might improve breast cancer detection and breast lesion characterization. The ability to manipulate the image has already been shown to reduce the call-back rate for abnormalities that would require special views with conventional mammography, and investigators are optimistic that digital systems eventually will be more sensitive than the screen-film systems used today. 42 Benefits other than those predicted from the primary and secondary DMIST aims also may result from digital mammography. First, digital image capture will allow for the electronic transmission of images. This could enhance access to experienced mammographers from remote areas of the country or world that do not have direct access to mammography centers or trained mammographers. Electronic transfer of images would facilitate opportunities for breast imaging radiologists to consult with each other and aid in the training of future mammographers. Second, the digital images generated by digital mammography will provide a ready source of data that can be used in computer-aided detection (CAD) systems. In addition, image storage, transmission, and retrieval could be vastly more efficient compared with today's hard-copy images.
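A concrete example of this decoupling is window/level adjustment, in which the stored pixel values are unchanged and only their mapping to display brightness varies. The following sketch is illustrative, not a clinical display pipeline:

```python
# Window/level contrast manipulation: the raw detector values stay fixed,
# only the mapping to an 8-bit display range is adjusted.
import numpy as np

def window_level(image: np.ndarray, level: float, width: float) -> np.ndarray:
    """Map raw detector values to an 8-bit display range."""
    lo, hi = level - width / 2, level + width / 2
    out = (image.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

raw = np.random.default_rng(0).integers(0, 4096, size=(4, 4))  # 12-bit data
narrow = window_level(raw, level=2048, width=512)   # high contrast
wide = window_level(raw, level=2048, width=4096)    # full dynamic range
print(narrow, wide, sep="\n\n")
```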
[Table 5, continued]

Zileuton for Reversal of Bronchial Dysplasia
Purpose: To determine whether zileuton, a lipoxygenase inhibitor used to treat asthma, can reverse bronchial dysplasia, a precursor to lung cancer.
Eligibility: General good health; histologically proven bronchial dysplasia; and either (a) age 40 or older, smoker or former smoker (quit within the past 10 years) with a 30-pack-year history of smoking (packs smoked per day multiplied by years smoked), or (b) age 18 or older with Stage I non-small cell lung cancer or Stage I-II head and neck cancer, 12 months post-curative therapy.
Size of Trial: 134 participants.

Lung Cancer Chemoprevention With Celecoxib in Ex-Smokers
Investigator: Jenny Mao, MD, University of California, Los Angeles.
Purpose: To test the effectiveness of celecoxib in potentially preventing non-small cell lung cancers in former smokers.
Eligibility: Older than 45 years of age; former smokers with a 30-pack-year history of smoking (packs smoked per day multiplied by years smoked); diagnosed airflow obstruction or prior Stage I non-small cell lung cancer.
Size of Trial: 180 participants.
Location: Los Angeles, CA.

Celecoxib for Regression of Barrett's Dysplasia
Purpose: To determine the safety and efficacy of celecoxib, a cyclooxygenase-2 (COX-2) inhibitor, in regressing Barrett's dysplasia (a precursor to esophageal cancer) in patients with low- or high-grade disease.
Eligibility: No cancer of the esophagus; at least six months since corticosteroids; no concurrent NSAIDs except low-dose aspirin.
Possible Risks from Screening
The risks involved in this study are low. Women will receive a small amount of additional radiation beyond the amount they would normally receive with their standard mammogram. Also, because women are undergoing two imaging studies, there may be a greater risk of a false positive result that could cause anxiety and/or extra procedures to be performed.
Participant Costs
There is no cost to participants in the trial for digital mammography, and conventional mammography should be covered by the patient's insurance. The costs for any diagnostic evaluation would be covered by the participant's medical insurance according to the plan's policies.
For More Information
To locate the nearest DMIST study site, call the NCI's Cancer Information Service toll free Monday through Friday, 9:00 AM to 4:30 PM, at 1-800-4-CANCER (1-800-422-6237) for information about the trial in English or Spanish. The number for callers with TTY equipment is 1-800-332-8615. Information about DMIST can also be found at www.dmist.org and/or at http://cancer.gov/DMIST (NCI's Web site).
CONCLUSION
Well-designed, well-run clinical trials are the only way to determine the true effectiveness of a promising intervention. The four trials featured in this review were chosen to illustrate the importance and the potential benefit derived from such studies. For clinicians not familiar with the diversity of prevention and early detection studies currently accruing patients, we are including brief summaries of seven additional trials (Table 5). These include prevention studies for high-risk or average-risk patients, utilizing a variety of nutritional interventions and drugs. In addition to NCI-sponsored trials, many cancer centers conduct smaller trials of early detection methods, and of pharmacologic, nutritional, and lifestyle interventions for cancer prevention.
To answer the most pressing questions about cancer-and to do so quickly-many more adults must participate in clinical trials. To encourage participation, the NCI, ACS, and other organizations provide information to ensure that health care professionals and the people in their care understand clinical trials, consider them as an option, and can easily locate them in their communities. Clinical trials should not be considered as opportunities only for people who already have cancer. They may also present prevention and early detection options for people at average and increased risk for developing cancer.
Ultimately, knowledge gained from prevention and early detection trials could have a vastly greater effect on reducing morbidity and mortality from cancer, because the findings would be applied to the entire at-risk population. In this era of evidence-based medicine, our progress toward identifying interventions and best practices will come faster and at a lower cost if clinicians and researchers can promote awareness of these studies, and play the essential role of aiding patients to make an informed decision about participation. | 2018-04-03T03:13:58.287Z | 2003-03-01T00:00:00.000 | {
"year": 2003,
"sha1": "7b6813f8c9b0ba3a419c7ccac53afc003c9e1900",
"oa_license": null,
"oa_url": "https://doi.org/10.3322/canjclin.53.2.82",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "92b10fd4ee45a5c70d84b9cc6643a8715f884ebd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257932733 | pes2o/s2orc | v3-fos-license | Spontaneous facial hematoma induced by vitamin K antagonist therapy: a rare case report
Introduction and Importance: The benefits of anti-vitamin K drugs have been demonstrated in several indications; however, these benefits are always counterbalanced by an increased risk of bleeding, which can occur at different sites. Facial hematoma is a rare bleeding complication; to our knowledge, this is the first report of a rapidly expanding atraumatic facial hematoma secondary to vitamin K antagonist overcoagulation. Case Presentation: The authors report the case of an 80-year-old woman with a medical history of hypertension and of pulmonary embolism after 15 days of immobilization following a hip fracture treated surgically 3 years earlier. She had been on vitamin K antagonist therapy since then, without any follow-up, and presented to our emergency department complaining of a sudden onset of progressive left facial swelling of one day's duration and vision loss in her left eye. Her blood investigations revealed a high international normalized ratio of prothrombin, up to 10. A computed tomography scan of the face, orbit, and oromaxillofacial area revealed a spontaneously hyperdense collection in the left masticator space suggestive of a hematoma. An intraoral incision was made by oromaxillofacial surgeons, and drainage was performed with a favorable evolution. Clinical Discussion: In this mini-review, the authors aim to describe this rare complication and to emphasize the necessity of regular follow-up of international normalized ratio values and of early warning signs of bleeding to prevent such fatal complications. Conclusion: Immediate recognition and management of this complication is very important to avoid serious sequelae.
Introduction
Vitamin K antagonist (VKA) is an anticoagulant frequently used for the prevention of thromboembolic disease. Spontaneous hemorrhage is a major, and the most life-threatening, complication of this treatment. Nontraumatic facial hematoma is a rare complication of over-anticoagulation by VKA therapy that has never been reported in the literature.
We report a very instructive case of a spontaneous facial hematoma (SFH) occurring during mismanaged VKA therapy, initially prescribed for the treatment of pulmonary embolism. The rarity of this complication prompted us to report this case.
Patient and observation
An 80-year-old woman with a medical history of hypertension, and of pulmonary embolism after 15 days of immobilization following a hip fracture treated surgically 3 years earlier, who had been on VKA therapy since then without any follow-up, presented to our emergency department complaining of a sudden onset of progressive left facial swelling of one day's duration and vision loss in her left eye. She had no recent history of facial surgery or trauma.
On examination, the patient was conscious, oriented, and had normal vital signs, with a massive hematoma involving the preauricular area, compromising eye opening, and extending toward the submandibular area and upper lip mucosa (Fig. 1A).
HIGHLIGHTS
• Spontaneous facial hematoma is a very rare complication of vitamin K antagonist overcoagulation. • The treatment of these injuries can be complex and may have a significant impact on the patient's facial function and esthetics; it therefore requires special medical care. • Rapid reversal of vitamin K antagonist therapy must be performed in such cases. • Hemorrhagic complications secondary to anticoagulant therapy can occur at various sites, especially in patients with risk factors. • Patients should be informed of the necessity of regular follow-up of international normalized ratio values and of early warning signs of bleeding.
Her blood investigations revealed a high international normalized ratio (INR) of prothrombin, up to 10; hemoglobin was 12 mg/dl, and the platelet count was 233 000/mm3. The rest of the biochemical profile was within normal ranges. VKA therapy was immediately suspended, and she received 10 mg of vitamin K. A computed tomography scan of the face, orbit, and oromaxillofacial area was performed, which revealed a spontaneously hyperdense collection in the left masticator space suggestive of a hematoma, and a spontaneously hyperdense retro-orbital collection infiltrating the lateral rectus muscle and filling the intra- and extraconal fat (Fig. 2). After a discussion between the ophthalmologist and the oromaxillofacial surgeon, intraoral incision and drainage were performed to reduce the size of the submandibular and retro-orbital hematoma, and hypotensive eye drops with lubricant eye ointments were prescribed.
After 3 weeks, the patient had no swelling, and facial expressions were preserved without any discoloration of the facial skin (Fig. 1B).
Discussion
Oral anticoagulants are widely used for treating and preventing thromboembolic complications in patients with atrial fibrillation, mechanical heart valves, and venous thromboembolism. Despite the high efficacy of this treatment, hemorrhagic complications are common. In a series of 6814 VKA-treated patients, the incidence of overall bleeding complications was 16.5 per 100 treatment-years [1], mainly gastrointestinal and intracranial hemorrhages. However, rapidly expanding facial hematomas are a very rare complication of this treatment, particularly in the absence of invasive medical procedures or preceding trauma.
The pathogenesis of SFH formation in patients on anticoagulant therapy may be multifactorial. The most important risk factors associated with anticoagulation-induced bleeding are advanced age, female sex, a history of gastrointestinal bleeding, concomitant drugs such as aspirin, and a previous history of INR levels above the therapeutic range. Chronic diseases have also been related to VKA overcoagulation, including hypertension, heart disease, diabetes mellitus, prior stroke, chronic renal disorders, and extensive malignancy [2,3]. It must be pointed out that the intensity and quality of anticoagulation control have a major influence on the risk of hemorrhage: patients with a target INR above 3.0 have twice the incidence of major bleeding compared with those with a target INR between 2.0 and 3.0, and patients with a time in the therapeutic range below 60% have significantly higher rates of both major bleeding and mortality (3.85 and 4.20%, respectively) than those with a time in the therapeutic range above 75% (1.58 and 1.69%, respectively) [4].
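Time in therapeutic range is typically computed with the Rosendaal linear-interpolation method. A minimal sketch is given below; the example INR series is chosen to mimic an excursion like our patient's rather than taken from her chart:

```python
# Sketch of the Rosendaal linear-interpolation method for time in
# therapeutic range (TTR): INR is interpolated day by day between tests.
def ttr(days: list[int], inrs: list[float],
        low: float = 2.0, high: float = 3.0) -> float:
    in_range = total = 0.0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total += span
        for day in range(span):
            # Linearly interpolate the INR for each day between two tests.
            inr = i0 + (i1 - i0) * day / span
            if low <= inr <= high:
                in_range += 1
    return 100 * in_range / total

# Example: monthly INR checks; the 10.0 reflects a severe excursion.
print(f"TTR = {ttr([0, 30, 60, 90], [2.5, 3.6, 10.0, 2.2]):.0f}%")
```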
Facial soft-tissue hematoma is often reported after trauma, especially motor vehicle accidents. However, no reports of SFH secondary to anticoagulant therapy were found in the literature. The treatment of these injuries can be complicated even if they are rarely life-threatening, and because they can affect the patient's facial function and esthetics, they require special medical care. They can lead to many complications, including infection, flap or wound-edge necrosis, nasal septum necrosis, permanent deformity, and loss of function related to nerve injury or scarring [5]. A rigorous and focused physical exam should therefore be performed to evaluate soft-tissue injuries and determine the steps of management; computed tomography has greater sensitivity in defining the extent of lesions.
In this context, the management of SFH should include cessation of anticoagulation, blood transfusion as needed, and rapid reversal of VKA therapy, which can be achieved by the administration of fresh frozen plasma or nonactivated prothrombin complex concentrates in addition to intravenous vitamin K. Soft-tissue injuries can be managed by intraoral incision and drainage to prevent an external scar and injury to vital structures.
In our case, various factors predisposed our patient to spontaneous bleeding and the formation of a facial hematoma, including her advanced age, her history of hypertension, and her poor medication adherence without any follow-up.
The Surgical CAse REport (SCARE) Guidelines were used in the writing of this paper [6] .
Conclusion
SFH is a very rare complication resulting from over-anticoagulation. We report this case to highlight the need for vigilance when prescribing oral anticoagulation and to remind physicians of the frequency of hemorrhagic complications, which can occur at various sites, especially in patients with risk factors. All patients receiving oral anticoagulants should therefore be informed of the necessity of regular follow-up of INR values and of early warning signs of bleeding to prevent such fatal complications.
Ethical approval
Ethical committee approval was not required given the article type (case report). However, written consent to publish the clinical data of the patient was given and is available for the handling editor to check if needed.
Consent
Written informed consent was obtained from the patient for publication of this case report and the accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request. | 2023-04-05T15:38:31.881Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "dd8dc7e4bae2a0115d3867a5e116a9aa1cb0358a",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1097/ms9.0000000000000358",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc0bfc19a7c83147aa8075d5d29bfd3bc4d045ad",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252383443 | pes2o/s2orc | v3-fos-license | SCGG: A deep structure-conditioned graph generative model
Deep learning-based graph generation approaches have remarkable capacities for graph data modeling, allowing them to solve a wide range of real-world problems. Making these methods able to consider different conditions during the generation procedure further increases their effectiveness by empowering them to generate new graph samples that meet the desired criteria. This paper presents a conditional deep graph generation method called SCGG that considers a particular type of structural condition. Specifically, our proposed SCGG model takes an initial subgraph and autoregressively generates new nodes and their corresponding edges on top of the given conditioning substructure. The architecture of SCGG consists of a graph representation learning network and an autoregressive generative model, which is trained end-to-end. More precisely, the graph representation learning network is designed to compute continuous representations for each node in a graph, which are affected not only by the features of adjacent nodes, but also by those of more distant nodes. This network is primarily responsible for providing the generation procedure with the structural condition, while the autoregressive generative model mainly maintains the generation history. Using this model, we can address graph completion, a pervasive and inherently difficult problem of recovering the missing nodes of partially observed graphs together with their associated edges. The computational complexity of the SCGG method is shown to be linear in the number of graph nodes. Experimental results on both synthetic and real-world datasets demonstrate the superiority of our method compared with state-of-the-art baselines.
Introduction
With the ever-increasing growth of data collection and production technologies, large amounts of data are readily accessible. In many cases, some kind of relationship exists between data entities, which, if taken into consideration, can lead to more precise data analyses. Such relationships are mostly represented by graph data structures, and that is why graph-related research has become a widely discussed topic in many areas, including chemistry [1], medical applications [2], social network studies [3], and knowledge graph-related research [4]. Most recent studies are dedicated to graph representation learning [5,6], aiming to obtain suitable representations of nodes, edges, or the entire graph in continuous space to be further utilized by downstream tasks.
Graph generation is another important branch of graph-related research, which often benefits from the results of graph representation learning studies. This research field has a history of several decades. It has recently been revived by receiving renewed attention from scholars, mainly due to advances in machine learning, and in particular deep learning techniques. The goal of graph generation is to provide models that can generate new graph samples from the desired data distributions. Thus, similar to generative methods in other data domains such as image [7], text [8], and speech [9], graph generative approaches can bring substantial capacity for graph data modeling to address various real-world problems such as drug design [10], understanding and modeling the interactions in social networks [11], and human disease diagnosis [12].
One of the desired and essential properties of generative methods is their ability to carry out the generation procedure in a controlled manner so that the produced samples comply with predetermined conditions by having the required characteristics. In this regard, numerous studies have been conducted to develop conditional generative models in different data domains, such as image [13] and text [14]. Initial steps [15][16][17][18][19] have also been taken to make graph generators conditional; however, compared to the work performed in other data domains and also compared to the needs and capacities of this field, much remains to be done.
In addition to what we have discussed so far, there is a common problem manifesting itself when working with different types of data. Specifically, in many cases, the data is not completely available, which can be due to various reasons such as limitations of data collection tools, issues related to privacy, or inadequacy of storage space. This can significantly degrade the performance of data analysis methods. Therefore, it is often crucial to recover the missing part of the data before processing it; hence, various methods have been proposed in different data domains to address this challenge. Regarding graph data, many methods have been developed over the years [20,21] to predict missing links between graph nodes, and researchers are still seriously pursuing a solution for it [22]. However, an intrinsically more complicated challenge arises when graph nodes themselves are missing. We will refer to this problem as graph completion, which, unlike the widely investigated problem of link prediction, has been much less addressed despite its importance and pervasiveness.
To address the issues mentioned above, we propose the Structure-Conditioned Graph Generator (SCGG), an end-to-end deep learning-based conditional graph generative approach. The SCGG model takes an initial subgraph as the structural condition. It then autoregressively performs the graph generation procedure by adding new nodes and predicting the inter-links between the new nodes and those in the conditioning subgraph, as well as the intra-links between the new nodes themselves. In this way, our generative model ensures the existence of desired subgraphs in the final generated graphs, which can have several applications in both molecular and non-molecular domains. Specifically, for designing molecular graphs, the existence of desired chemical substructures can bring certain chemical properties to the final molecules. Moreover, regarding non-molecular graphs, the SCGG model is best utilized to solve the graph completion problem, in which some graph nodes and their corresponding edges are totally missing. Our study focuses on the latter application, but the proposed SCGG model can be easily extended to molecular applications as well. In this setting, a partially observed graph is given to the model as a structural condition. The nodes generated by the model and their associated edges are then treated as the recovered missing nodes and the edges connecting them to each other, as well as to the nodes of the partially observed graph.
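A high-level sketch of this structure-conditioned autoregressive loop is given below. The edge-probability function is stubbed with random scores in place of the learned networks described in Section 4, so the code illustrates the generation order rather than the actual model:

```python
# Sketch of structure-conditioned autoregressive graph completion.
# `edge_prob` is a placeholder for a learned model conditioned on node
# embeddings and the generation history; here it returns random scores
# so the loop is runnable.
import random

def complete_graph(observed_edges: set[tuple[int, int]],
                   n_observed: int, n_new: int,
                   seed: int = 0) -> set[tuple[int, int]]:
    rng = random.Random(seed)
    edges = set(observed_edges)

    def edge_prob(u: int, v: int) -> float:
        return rng.random()  # stand-in for the learned edge decoder

    for v in range(n_observed, n_observed + n_new):
        # Each new node may connect to every earlier node: both the
        # conditioning subgraph (inter-links) and previously generated
        # nodes (intra-links).
        for u in range(v):
            if edge_prob(u, v) > 0.5:
                edges.add((u, v))
    return edges

observed = {(0, 1), (1, 2)}          # partially observed graph
print(complete_graph(observed, n_observed=3, n_new=2))
```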
In summary, we present the following contributions in this work:
• We introduce SCGG, a conditional graph generation approach, which autoregressively generates graphs based on a given structural condition.
• The architecture of our SCGG model consists of a graph representation learning network and a recurrent neural network (RNN), where the former is mainly used to take into account the structural condition, and the latter captures the generation history.
• We use our proposed SCGG model to address the graph completion problem, in order to benefit from the power and potential of a deep generative model for solving an inherently difficult and complex problem, which, as a result, has been relatively little investigated so far. To the best of our knowledge, this is the first time that a completely deep learning-based model is designed in such a way that it can specifically tackle this problem.
The rest of the paper is organized as follows. In Section 2, we review the related work. In Section 3, we introduce the notations and define the problem. In Section 4, we explain our proposed SCGG model in detail. Experimental details and results are discussed in Section 5. Finally, in Section 6 we conclude the paper.
Related work
In line with what we discussed earlier, our proposed SCGG model is a structure-based conditional graph generation approach, one of whose main applications is graph completion. Therefore, in the following, we review the literature in these two related areas.
Graph generation
Graph generation is a field of research seeking to generate new graph structures with certain characteristics; it dates back several decades and is still a hot topic for research. In contrast to the early methods [23-26], which relied on manually designed procedures to construct graphs with predetermined statistical properties, the more recent ones are data-driven, utilizing the graph samples available in datasets to train models that can generate new graphs more effectively. The latter approaches typically employ different deep learning techniques and generation strategies, and accordingly, they can be classified into several categories [27].
The autoregressive approaches, which adopt step-by-step strategies for generating graphs, are the methods most relevant to our research. DeepGMG [28] is an example of such methods, proposing a repetitive decision-making process to generate graphs gradually. GraphRNN [29] is among the well-known and influential approaches; it first maps each graph into a sequence of nodes and then processes one node per time step using RNNs to model the distribution of the resulting sequences. The method has inspired a number of subsequent approaches, such as MolecularRNN [30], which extends GraphRNN to generate molecular graphs with specific chemical features. Bacciu et al. [31], GraphGen [32], and GHRNN [33], on the other hand, convert graphs to sequences of edges instead of nodes, and then go through distribution modeling with RNNs. Besides, there are some other autoregressive methods that utilize the attention mechanism to empower their generative models. In this regard, GRAN [34] proposes to add a block of new nodes in each step and employs an attentive message passing mechanism to compute the representations of the graph nodes.
A key point to notice is that, regardless of which category these methods fall into and which techniques they employ to solve the problem, an important capability is to consider desired conditions during generation, so that the resulting graphs meet the expected characteristics. Hence, the problem of conditional graph generation arises. In this regard, GraphVAE [15] conditions both the encoder and the decoder of its VAE on a label vector for molecular graph generation. CONDGEN [16] adopts a similar approach (i.e., concatenating a condition vector to the VAE latent variable) to incline the model towards generating graphs with desired characteristics. Lim et al. [17] and HierVAE [18] guarantee the existence of intended chemical substructures in the output molecular graphs. CCGG [19] makes the GRAN [34] model class-conditional, allowing it to generate graphs of desired classes. However, despite the efforts that have gone into conditional graph generation, there is still a vital need to develop further approaches that can capture various types of conditions. In this regard, the SCGG model is a generative method designed to handle a special kind of condition, which is of a structural type.
Graph completion
In many cases, a part of a graph's structure is unavailable for various reasons. Hence, it is necessary to reconstruct the missing information prior to further processing. Most of the methods developed for this purpose try to perform link prediction [46-48], although a more complicated problem arises when the graph nodes are missing. Due to the complexity of addressing this problem, which we refer to as graph completion, few methods have been presented to solve it so far. Regarding this, KronEM [49] utilizes a combination of the Expectation-Maximization framework and the Kronecker graphs model to infer the missing nodes and their corresponding edges. SAMI [50] adopts a clustering approach for solving the missing node problem by relying heavily on the existence of missing node indicators, which are often unattainable in real scenarios. Masrour et al. [51] and JCSL [52] utilize side information about the graph nodes to perform network completion; however, this information may not be accessible in all cases. More recently, DeepNC [53] was introduced, which first learns the likelihood of the data by training the GraphRNN [29] model. It then uncovers the missing parts of a graph through a greedy optimization algorithm that aims to maximize the obtained likelihood. Although DeepNC is an innovative approach that has obtained satisfactory results, its completion procedure is not itself learning-based, so it cannot directly learn from the data for the specific task of graph completion. Our proposed method, in contrast, trains an end-to-end model to address this problem. Furthermore, unlike some of the graph completion methods mentioned above, SCGG does not depend on the existence of side information, which may not be reachable in many situations.
Notations and problem definition
In this section, we define the notations used in the paper and present the problem definition. For convenience, we summarize the notations in Table 1.
We denote an initial graph as $G_0 = (V_0, E_0)$, where $V_0$ and $E_0$ are the node and the edge sets, respectively, and $|V_0| = n$. Under an ordering $\pi$ of these nodes, we represent the $i$-th node's links by the following sequence:

$$S_i^{\pi} = (A_{i,1}, A_{i,2}, \dots, A_{i,n}),$$

where $A_{i,j}$ takes the value of 1 if the $i$-th node is connected to the $j$-th node and 0 otherwise. Considering $G_0$ as the structural condition, the objective of our research is to learn to sample from the conditional probability distribution $p(G \mid G_0)$ in order to generate a graph $G = (V, E)$ that includes $G_0$ as a subgraph, i.e., $V_0 \subset V$ and $E_0 \subset E$. This can be done by first adding the node set $\tilde{V}$, with $|\tilde{V}| = m$ and $\tilde{V} = V - V_0$. Then, to connect the new nodes, the edge set $\tilde{E} = E - E_0$ will be generated. More specifically, $\tilde{E}$ consists of: (1) the inter-connections between the new nodes and those in $V_0$, and (2) the intra-connections between the new nodes themselves. To represent the inter-connections between the new nodes and the $i$-th node of $G_0$ under the ordering $\pi$, we use the sequence

$$\tilde{S}_i^{\pi,\sigma} = (\tilde{A}_{i,1}, \tilde{A}_{i,2}, \dots, \tilde{A}_{i,m}),$$

where $\sigma$ denotes a node ordering of the new nodes, and $\tilde{A}_{i,j}$ is 1 if the $i$-th node of $G_0$ has a link to the $j$-th new node and 0 otherwise. Moreover, regarding the intra-connections, we denote the $j$-th new node's connections to the nodes in $\tilde{V}$ by the sequence

$$\tilde{T}_j^{\sigma} = (\tilde{B}_{j,1}, \tilde{B}_{j,2}, \dots, \tilde{B}_{j,m}),$$

where, similarly to the previous formulas, $\tilde{B}_{j,k}$ takes the value of 1 if there is a link connecting the $j$-th and the $k$-th new nodes (under the ordering $\sigma$) and 0 otherwise.

Table 1. Notations in this paper.

$G_0$: The initial graph, $G_0 = (V_0, E_0)$.
$V_0$: The node set of $G_0$.
$E_0$: The edge set of $G_0$.
$n$: The number of nodes in $G_0$, $n = |V_0|$.
$\pi$: An ordering of $G_0$'s nodes.
$S_i^{\pi}$: The sequence representing how the $i$-th node of $G_0$ under the ordering $\pi$ connects to $G_0$'s nodes.
$G$: The graph that contains the initial graph $G_0$ as a subgraph.
$V$: The node set of $G$.
$E$: The edge set of $G$.
$\mathcal{G}$: The random variable associated with graph structures.
$\tilde{V}$: The set of new nodes added to $G_0$ to form the graph $G$.
$\tilde{E}$: The set of edges connecting the new nodes to each other, as well as to those nodes in $G_0$.
$m$: The number of new nodes, $m = |\tilde{V}|$.
$\sigma$: An ordering of the new nodes $\tilde{V}$.
$\tilde{S}_i^{\pi,\sigma}$: The sequence representing the links connecting the $i$-th node of $G_0$ under the ordering $\pi$ to the new nodes ordered by $\sigma$.
$\tilde{T}_j^{\sigma}$: The sequence representing the links between the $j$-th new node and each of the new nodes under the ordering $\sigma$.
$S^{\pi}$, $\tilde{S}^{\pi,\sigma}$, $\tilde{T}^{\sigma}$: The notational abbreviations for $\{S_1^{\pi}, \dots, S_n^{\pi}\}$, $\{\tilde{S}_1^{\pi,\sigma}, \dots, \tilde{S}_n^{\pi,\sigma}\}$, and $\{\tilde{T}_1^{\sigma}, \dots, \tilde{T}_m^{\sigma}\}$, respectively.
$G'$: The graph induced from $G$ by removing the intra-connections between the set of new nodes.
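To ground these definitions, the short sketch below builds the link sequences $S_i^{\pi}$ from an adjacency matrix under a given ordering. It is an illustrative reading of the definitions above, not code from the paper, and the function name is our own:

```python
import numpy as np

def link_sequences(A: np.ndarray, pi: list) -> list:
    """S_i = (A_{i,1}, ..., A_{i,n}) for each node i, with rows and columns
    of the adjacency matrix permuted according to the ordering pi."""
    A_pi = A[np.ix_(pi, pi)]          # relabel nodes according to the ordering pi
    return [A_pi[i] for i in range(len(pi))]

# A 4-node path graph 0-1-2-3 under the ordering pi = (2, 0, 3, 1)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
for i, S_i in enumerate(link_sequences(A, [2, 0, 3, 1])):
    print(f"S_{i+1} = {S_i}")
```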
SCGG: Structure-Conditioned Graph Generator
We approach a specific type of structure-conditioned graph generation that takes an initial substructure and starts to generate new nodes and their associated edges on top of the given conditioning substructure. To this end, we propose the SCGG model, whose architecture is composed of a graph representation learning network and an autoregressive generative model, trained in an end-to-end manner. In this section, we present the details of the SCGG model. In this regard, we first elucidate the problem formulation and the model architecture.
Next, we describe the procedure employed to prepare the data for model training.Then, we discuss the training and inference phases and elaborate on the implementation details.
Formulation
As mentioned in Section 3, in this work we intend to learn to sample from the distribution $p(G \mid G_0)$ to conditionally generate the graph $G$ given an arbitrary initial graph $G_0$. To do so, our SCGG model first estimates this conditional probability distribution and then samples from the resulting estimate. As it is not easy to work directly in the graph space, we reformulate the problem to deal with the following distribution:

$$p\big(\tilde{S}^{\pi,\sigma}, \tilde{T}^{\sigma} \mid S^{\pi}\big), \qquad (4)$$

where $S^{\pi}$ and $(\tilde{S}^{\pi,\sigma}, \tilde{T}^{\sigma})$ are the notational abbreviations for $\{S_1^{\pi}, \dots, S_n^{\pi}\}$ and $\{\tilde{S}_1^{\pi,\sigma}, \dots, \tilde{S}_n^{\pi,\sigma}, \tilde{T}_1^{\sigma}, \dots, \tilde{T}_m^{\sigma}\}$, respectively, and the new problem formulation relates to the original one through the below equation:

$$p(G \mid G_0) = \sum_{\pi, \sigma} p\big(\tilde{S}^{\pi,\sigma}, \tilde{T}^{\sigma} \mid S^{\pi}\big). \qquad (5)$$

To further decompose the probability in Eq. 4, we follow the chain rule, and therefore this conditional probability can be rewritten as follows:

$$p\big(\tilde{S}^{\pi,\sigma}, \tilde{T}^{\sigma} \mid S^{\pi}\big) = \prod_{i=1}^{n} p\big(\tilde{S}_i^{\pi,\sigma} \mid S^{\pi}, \tilde{S}_{<i}^{\pi,\sigma}\big) \prod_{j=1}^{m} p\big(\tilde{T}_j^{\sigma} \mid S^{\pi}, \tilde{S}^{\pi,\sigma}, \tilde{T}_{<j}^{\sigma}\big). \qquad (6)$$

Our proposed SCGG method trains a novel network architecture in an end-to-end manner to model the complex distribution in Eq. 6.
Model architecture
The model architecture of SCGG consists of two main components, namely, a graph representation learning network and an autoregressive generative model (i.e., an RNN). In the following, we explain these components in detail and discuss the role each plays in the task of structure-conditioned graph generation.
Graph Feature Learning Network
The SCGG method needs appropriate representations of graph nodes before it can perform distribution modeling. Therefore, it utilizes a graph representation learning network, denoted $f_{feat}$ in the following, which employs both a graph convolutional network (GCN) and a Transformer network to learn meaningful node features. Below, we give a brief background on GCNs and Transformers. Furthermore, we elaborate on how each of them contributes to obtaining the final node features in our model.
• Graph Convolutional Network (GCN)
It is often difficult to work directly in the complex and discrete graph space. Therefore, in many cases, obtaining continuous representations of nodes, edges, or the whole graph is necessary prior to any upcoming task. Graph Convolutional Networks address this problem. The main idea of GCNs originates from the fact that a node's representation can be obtained by taking into account its own features and those of its neighbors. This is because the neighbors in a graph (i.e., directly or indirectly connected nodes) usually share some common characteristics and information. Formally, the layer-wise propagation rule of GCNs can be generally formulated as below:

$$H^{(l+1)} = f\big(\hat{A} H^{(l)} W^{(l)}\big),$$

where $H^{(l)} \in \mathbb{R}^{N \times d_l}$ is the nodes' feature matrix at the $l$-th GCN layer, $N$ is the number of graph nodes, $d_l$ is the number of features obtained for a node by the previous GCN layer, and $H^{(0)}$ is set to be the initial feature matrix given as input to the GCN; $\hat{A} \in \mathbb{R}^{N \times N}$ is the adjacency matrix [54,55] or a variant of it [56,57]; $W^{(l)} \in \mathbb{R}^{d_l \times d_{l+1}}$ is the learnable parameter matrix of the $l$-th GCN layer, which maps $d_l$ feature channels to $d_{l+1}$ channels; $f$ is a non-linear activation function; and $H^{(l+1)} \in \mathbb{R}^{N \times d_{l+1}}$ is the output feature matrix produced by the $l$-th GCN layer. Considering this background, our proposed Graph Feature Learning Network first applies $L$ layers of GCN to the input graph. This way, a continuous representation is computed for each graph node based on its neighbors' information.
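To make the propagation rule concrete, here is a minimal single-layer sketch in PyTorch (the library the implementation section reports using). The symmetric normalization applied to the adjacency matrix is one common "variant" of $\hat{A}$ and is an assumption here, as is the choice of ReLU for $f$:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One GCN layer: H_out = f(A_hat @ H @ W), as in the propagation rule above."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)  # the learnable W

    def forward(self, H: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # Add self-loops, then normalize symmetrically: D^{-1/2} (A + I) D^{-1/2}.
        A_hat = A + torch.eye(A.size(0))
        deg = A_hat.sum(dim=1)
        D_inv_sqrt = torch.diag(deg.pow(-0.5))
        A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
        return torch.relu(A_norm @ self.linear(H))  # f = ReLU here

# Example: 5 nodes, 3 input features, 16 output features (the paper's layer size)
H0 = torch.randn(5, 3)
A = (torch.rand(5, 5) > 0.7).float()
A = ((A + A.t()) > 0).float()           # symmetrize for an undirected graph
layer = GCNLayer(3, 16)
print(layer(H0, A).shape)               # torch.Size([5, 16])
```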
• Transformer network
In this work, we intend to autoregressively model the distribution in Eq. 4, which is conditioned on $\{S_1^{\pi}, \dots, S_n^{\pi}\}$. We do so by feeding the representations of graph nodes one at a time into the RNN. Thus, in order to perform conditional distribution modeling in this way, it is necessary to learn rich node representations, so that all the graph nodes can contribute to the computation of each node's embedding. In other words, we need the representation of a node not only to contain the information of its close neighbors, but also to include the information of relatively distant nodes that share some similar characteristics with it. However, an $L$-layer GCN only considers information in $L$-hop neighborhoods to obtain node representations, even if there are dependencies between farther nodes. Therefore, our proposed Graph Feature Learning Network utilizes a Transformer encoder, which has shown promising results in contextualized representation learning. The following gives a quick overview of its architecture and workflow.
According to [58], the Transformer encoder layer consists of a multi-head attention block and a feedforward network, each followed by a residual addition and a layer normalization. A multi-head attention block consists of multiple attention heads, each working in a separate subspace to compute new contextualized representations corresponding to different aspects of dependencies between data entities. To be more precise, each attention head takes as input $X \in \mathbb{R}^{N \times d}$ (in our case, the feature matrix computed for the graph's nodes by applying $L$ layers of GCN) and projects it into three matrices: the query, the key, and the value matrices. Then, the attention scores for each query are computed by performing an inner product of that query and all the key matrix rows. By doing so, a new contextualized representation is calculated for each query as a weighted summation of the value matrix rows.
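As a quick illustration of the attention computation just described, the following sketches a single attention head over node features; all dimensions are illustrative and not taken from the paper:

```python
import torch

def attention_head(X, Wq, Wk, Wv):
    """One head: scores = softmax(QK^T / sqrt(d)); output = scores @ V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv               # project into query/key/value
    d = Q.size(-1)
    scores = torch.softmax(Q @ K.t() / d**0.5, dim=-1)  # query-key inner products
    return scores @ V                               # weighted sum of value rows

X = torch.randn(5, 16)                              # 5 nodes, 16-dim GCN features
Wq, Wk, Wv = (torch.randn(16, 8) for _ in range(3))
print(attention_head(X, Wq, Wk, Wv).shape)          # torch.Size([5, 8])
```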
Considering these remarks regarding the GCN and the Transformer, the final node representations are obtained by concatenating the features computed by each of the two networks. Fig. 1 shows an overview of the proposed Graph Feature Learning Network.
Autoregressive generative model
As mentioned earlier, we want to model the conditional distribution in Eq. 4. To do so, we decompose it as the product of $n + m$ conditional distributions in Eq. 6, and then go through modeling them. Each condition in Eq. 6 can be divided into two parts: (a) $\{S_1^{\pi}, \dots, S_n^{\pi}\}$, the initial structural condition relating to $G_0$, and (b) the remaining part of the condition derived by applying the chain rule, which relates to the generation history. The former is primarily captured by our Graph Feature Learning Network, and the latter is handled using an autoregressive generative model, namely an RNN. More specifically, the embeddings obtained by the Graph Feature Learning Network are fed into the RNN one at a time, and the RNN proceeds. This way, the RNN keeps the generation history such that, at each step, the corresponding hidden state maintains the information of the graph generated until that time.
Data preparation
Making the data suitable as an input to our SCGG model is a prerequisite for training. Therefore, we perform a data preparation procedure before feeding the data to the model. This procedure includes determining the set of new nodes $\tilde{V}$, identifying the resulting initial graph $G_0$, and applying orderings to these two sets of nodes. An example of the data preparation procedure before model training is illustrated in Fig. 2. First, $m$ nodes are randomly selected from the main graph to form the set of new nodes. The unselected nodes, together with the edges connecting them to each other, are then treated as the initial graph $G_0$. The reason behind this random node selection is that each subset of nodes (i.e., the unselected ones) from the original graph has the chance to contribute to the model training as an initial graph. Thus, the model gains the ability to perform structure-conditioned graph generation given an arbitrary graph $G_0$ at test time. Afterwards, orderings are applied to the nodes such that the initial graph nodes are ordered by $\pi$, and the new nodes follow the order specified by $\sigma$.
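A minimal sketch of this preparation step, assuming undirected networkx graphs (the function name and the toy graph are illustrative):

```python
import random
import networkx as nx

def prepare_training_instance(G: nx.Graph, m: int, seed=None):
    """Split a training graph into (G0, orderings): m random nodes become the
    'new' nodes; the rest, with the edges among them, form the initial graph G0."""
    rng = random.Random(seed)
    new_nodes = rng.sample(list(G.nodes), m)           # the set of new nodes
    G0 = G.subgraph(n for n in G.nodes if n not in new_nodes).copy()
    pi = list(G0.nodes)                                # an ordering of G0's nodes
    rng.shuffle(pi)
    sigma = list(new_nodes)                            # an ordering of the new nodes
    rng.shuffle(sigma)
    return G0, pi, sigma

G = nx.grid_2d_graph(4, 4)                             # a toy graph from the Grid family
G0, pi, sigma = prepare_training_instance(G, m=2, seed=0)
print(G0.number_of_nodes(), len(sigma))                # 14 2
```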
Training
To train the SCGG model, we first give it two versions of each graph $G$. The first version corresponds to the initial graph $G_0$. The second version, which we denote by $G'$, is obtained by removing the intra-connections between pairs of nodes belonging to $\tilde{V}$. The Graph Feature Learning Network $f_{feat}$ takes these two graphs as inputs and separately calculates the nodes' representations for each of them, as formulated below:

$$H = f_{feat}(G_0), \qquad H' = f_{feat}(G').$$

Next, a subset of the computed representations is fed into the RNN one by one, in the order specified by $\pi$ and $\sigma$. More precisely, the RNN first takes the representations of $G_0$'s nodes computed based on the first version of the graph. Then, it receives as input the representations of the new nodes obtained by feeding $G'$ into the Graph Feature Learning Network. To put it another way, the final representations to be fed into the RNN are as follows:

$$H'' = \big[H_1, \dots, H_n, H'_{n+1}, \dots, H'_{n+m}\big].$$

The reason for this is that at test time, we only have access to an initial graph $G_0$, knowing nothing about how the set of new nodes are connected to each other, or to the rest of the graph; but as the RNN proceeds, it predicts the inter-connections between the new nodes and the nodes of $G_0$. Thus, when the RNN finishes processing the last node of $G_0$, all inter-connections have been predicted and $G'$ can be constructed on top of $G_0$. At this point, it is time to complete the graph structure by predicting the intra-links between the new nodes. This requires a proper representation for each new node, which can be obtained from the most complete available version of the graph structure, i.e., $G'$.
Moreover, each cell of the RNN takes as its second input the ground truth labels of the previous cell. Therefore, the input for the $i$-th RNN cell is obtained as follows:

$$x_i = \big[H''_i ; y_{i-1}\big],$$

where $H''_i$ is the representation of the $i$-th node and $y_{i-1} \in \mathbb{R}^m$ is the vector of ground truth labels determining whether the $(i-1)$-th node has links to each of the new nodes or not. Next, by considering both the current input and the previous hidden state $h_{i-1}$, the RNN outputs probabilities regarding the link existence between the current node and each new node. This is done using two functions, $f_{trans}$ and $f_{out}$, according to the following formulations:

$$h_i = f_{trans}(x_i, h_{i-1}), \qquad \hat{y}_i = f_{out}(h_i),$$

where $\hat{y}_i \in \mathbb{R}^m$ is the $i$-th step probabilistic output. Furthermore, the step loss $L_i$ is a binary cross entropy (BCE) between the predicted outputs and the ground truth labels, which is formulated in the below equation:

$$L_i = -\sum_{j=1}^{m} \big[y_{i,j} \log \hat{y}_{i,j} + (1 - y_{i,j}) \log(1 - \hat{y}_{i,j})\big].$$

The whole network, including the Graph Feature Learning Network and the RNN, is trained in an end-to-end manner. Algorithm 1 summarizes the training procedure of our SCGG model.

An example showing the SCGG model at training time is presented in Figures 3 and 4, where the graph of Fig. 2 is used as training data. First, the representations of the nodes in both $G_0$ and $G'$ are computed by the Graph Feature Learning Network, as illustrated in Fig. 3. Then, the representations obtained for $G_0$'s nodes (see the left half of Fig. 3 (d)) are given to the RNN in the order specified by $\pi$. Accordingly, as depicted in Fig. 4, in the first RNN step, it is the turn of node 1 (indicated by a yellow circle) to be processed, and thus its features are passed on to the first recurrent unit. The network then estimates the conditional probability distribution $p(\tilde{S}_1^{\pi,\sigma} \mid S_1^{\pi}, S_2^{\pi}, S_3^{\pi})$, i.e., the probability of connecting the yellow node to each of the new nodes (the green and the purple ones). Afterwards, the step loss is calculated by taking the network output and the true labels (the first label is 1 because the yellow and the purple nodes are connected, and the second label is 0 as there is no edge between the yellow and the green nodes). In the second step, the second node's features (indicated by orange color), along with the true labels of the previous (yellow) node and the previous hidden state, are given to the recurrent cell. Then the network outputs an estimation of $p(\tilde{S}_2^{\pi,\sigma} \mid S_1^{\pi}, S_2^{\pi}, S_3^{\pi}, \tilde{S}_1^{\pi,\sigma})$. The same procedure continues until all nodes of $G$, including the ones in $G_0$ and the set of new nodes (i.e., $\tilde{V}$), are fed into the network. Thus, in the third step, the network outputs the probability $p(\tilde{S}_3^{\pi,\sigma} \mid S_1^{\pi}, S_2^{\pi}, S_3^{\pi}, \tilde{S}_1^{\pi,\sigma}, \tilde{S}_2^{\pi,\sigma})$ by taking into account the features of the third (pink) node in graph $G_0$. In the subsequent step, when all the initial graph's nodes have been processed, it is time to go through the new nodes in the order specified by $\sigma$. Thus, the features computed for the first new node (displayed in purple color in the right half of Fig. 3 (d)) are given to the RNN to generate the probability $p(\tilde{T}_1^{\sigma} \mid \cdot)$. Next, in the fifth step, the second new node's features (indicated in green) are fed into the recurrent network to produce the probability distribution $p(\tilde{T}_2^{\sigma} \mid \cdot)$.

In order to elaborate a bit more on Fig. 4, it is worth mentioning that each step's hidden state contains the information of a subgraph of the main graph (i.e., $G$). This subgraph includes the already processed graph nodes and the links connecting them to each other, as well as their connections to each of the new nodes. It also includes links between the current node and the previous ones. For example, in Fig. 4, in the third training step, two nodes (i.e., the yellow and the orange ones) have been processed and the pink node's features are fed into the recurrent unit as part of its input. Hence, the hidden state $h_3$ maintains a subgraph containing the link between the yellow and the orange nodes, as well as the links between these nodes and the new nodes (shown by blue lines). It also retains the links between the current (pink) node and both the yellow and the orange ones, which have been fed into the network in the first two steps.
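The teacher-forced training step described above can be sketched compactly as follows; the module names $f_{trans}$ and $f_{out}$ follow the reconstruction used in this section, and every dimension is an illustrative assumption:

```python
import torch
import torch.nn as nn

m = 2                                   # number of new nodes
f_trans = nn.GRU(input_size=16 + m, hidden_size=128, num_layers=4, batch_first=True)
f_out = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, m), nn.Sigmoid())
bce = nn.BCELoss(reduction="sum")

H_pp = torch.randn(1, 5, 16)            # H'': features for n=3 old + m=2 new nodes
y = torch.randint(0, 2, (5, m)).float() # ground truth link labels per step

h = None
total_loss = 0.0
y_prev = torch.zeros(1, 1, m)           # start-of-sequence labels for the first step
for i in range(5):
    x_i = torch.cat([H_pp[:, i:i+1, :], y_prev], dim=-1)   # x_i = [H''_i ; y_{i-1}]
    out, h = f_trans(x_i, h)            # hidden state carries the generation history
    y_hat = f_out(out)                  # probabilities of links to each new node
    total_loss = total_loss + bce(y_hat.squeeze(), y[i])   # step loss L_i
    y_prev = y[i].view(1, 1, m)         # teacher forcing: feed true labels onward
total_loss.backward()                   # end-to-end update (f_feat omitted here)
```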
Inference
In the inference stage, an initial graph $G_0$ is given as the structural condition. Then, using the learned functions $f_{feat}$, $f_{trans}$, and $f_{out}$, the model starts generating the graph $G$ by adding new nodes to $G_0$ and predicting the inter-links between the new nodes and those of $G_0$, as well as the intra-links between the new nodes themselves. Algorithm 2 describes the steps of the SCGG model at inference time. Moreover, Fig. 5 illustrates the inference workflow of SCGG with a toy example.
Implementation details
The proposed model is implemented using the PyTorch library [59]. As previously discussed, the function $f_{feat}$ consists of a graph convolutional network (GCN) and a Transformer network. In this regard, we use a two-layer GCN with an embedding size of 16 in each layer. A ReLU activation followed by a batch normalization layer is used between the two GCN layers. Besides, our Transformer has one encoder layer with 8 attention heads and a dropout of 0.1. We use 4 layers of GRU cells with a 128-dimensional hidden state to implement the function $f_{trans}$. For the function $f_{out}$, a two-layer multilayer perceptron (MLP) is employed, with 64 hidden units in the middle and a ReLU nonlinearity between the layers. Further, the Adam optimizer is used with a learning rate of 0.003, and the model is trained for 100 epochs with a minibatch size of 32. Moreover, for the choice of $\pi$ and $\sigma$, we use uniform random orderings to maximize an approximation of the marginal likelihood in Eq. 5, which becomes intractable to compute exactly as the size of graphs increases.
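Collecting the hyperparameters above into code, a sketch of the building blocks might look as follows. The module wiring is a simplified assumption for illustration (in particular, plain linear layers stand in for the GCN propagation), not the authors' exact implementation:

```python
import torch.nn as nn

class SCGGBlocksSketch(nn.Module):
    """Illustrative assembly of the hyperparameters reported for SCGG."""
    def __init__(self, in_dim: int, m: int):
        super().__init__()
        # f_feat, part 1: two GCN layers of embedding size 16,
        # with ReLU + batch normalization between them.
        self.gcn1 = nn.Linear(in_dim, 16, bias=False)   # stand-in for GCN layer 1
        self.mid = nn.Sequential(nn.ReLU(), nn.BatchNorm1d(16))
        self.gcn2 = nn.Linear(16, 16, bias=False)       # stand-in for GCN layer 2
        # f_feat, part 2: one Transformer encoder layer, 8 heads, dropout 0.1.
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=16, nhead=8, dropout=0.1),
            num_layers=1)
        # Final node feature = GCN features (16) concatenated with Transformer
        # features (16); the m previous-step labels are appended for the RNN input.
        self.f_trans = nn.GRU(input_size=32 + m, hidden_size=128, num_layers=4)
        # f_out: two-layer MLP with 64 hidden units and a ReLU in between.
        self.f_out = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                                   nn.Linear(64, m), nn.Sigmoid())

# Training setup reported in the paper: Adam(lr=0.003), 100 epochs, batch size 32.
```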
Experiments
In this section, we first elaborate on both the synthetic and the real-world datasets used for evaluation purposes. Then, we outline the state-of-the-art baselines with which we compare our SCGG model. Next, the evaluation metric is explained, followed by a description of the experimental setup. Finally, we discuss the results of our proposed approach, as well as those of the competing methods.
Datasets
We evaluate the performance of our proposed method on a variety of synthetic and real-world datasets. In the following, we provide a brief description of each dataset. Moreover, Table 2 summarizes their key statistics.
• Grid: It is a synthetic dataset consisting of standard 2D grid graphs.
• IMDBBINARY: This dataset consists of ego-networks derived from actor/actress collaborations, based on the information of movies belonging to the Action and Romance genres on IMDB. For each graph, nodes represent actors/actresses, and if a pair of them appears in the same movie, a link connects their corresponding nodes in the graph.
• IMDBMULTI: The same explanation given for the IMDBBINARY dataset is valid for this dataset as well, except that the movies belong to the Comedy, Romance, and Sci-Fi genres.
• Enzymes: This dataset consists of graphs, each representing a protein tertiary structure from the BRENDA enzyme database [60]. More precisely, a graph's nodes represent secondary structure elements (SSEs), and an edge connects two nodes if their corresponding SSEs are neighbors along the amino acid sequence or one of the three nearest neighbors in space.
• NCI1: This is a biological graph dataset published by the National Cancer Institute (NCI). Each graph in the dataset represents a chemical compound screened for its activity against the growth of human tumors.
• Protein: This dataset contains protein graphs [61]. Each graph represents a protein, with nodes corresponding to amino acids. If the distance between two amino acids of a protein is less than 6 Angstroms, their corresponding nodes are connected in the graph.
State-of-the-art approaches
We compare our approach with several well-known state-of-the-art methods, explanations of which are provided in the following.
• KronEM [49]. This is an early and well-known network completion method that combines the Expectation-Maximization (EM) framework with the Kronecker graphs model [62] to infer missing nodes and their corresponding edges in partially observed graphs. To do this, in each EM iteration, the method first utilizes the observed part of a graph to estimate the model parameters (the M-step), and then it infers the missing part of that graph using the estimated model (the E-step).
• GraphRNN-S [29]. This is a well-known autoregressive deep graph generator that first transforms graphs into sequences and then models the corresponding data distribution using RNNs. At each step, the method adds a new node to the currently generated graph and predicts the links connecting it to the previous nodes. Aside from that, GraphRNN-S makes the simplifying assumption that a node's links are independent of each other, and therefore models them with a multi-layer perceptron.
• GraphRNN [29]. This is the full GraphRNN model, which is relatively similar to GraphRNN-S, with the difference that it does not make the simplifying edge-independence assumption. Therefore, to capture the interdependencies between a node's edges, it employs another recurrent neural network, called the edge-level RNN.
• DeepNC [53]. This is the most recent graph completion baseline, which utilizes a deep generative model of graphs, namely GraphRNN-S, to infer the missing parts of a partially observable network. To this end, the method first learns a likelihood over the data by training the GraphRNN-S model. Then, it proposes a sequence of algorithmic steps to recover the network in a greedy fashion, trying to maximize the learned likelihood. It should be noted that although this method uses the probabilities generated by a deep generative model of graphs to make algorithmic decisions, it is not considered a totally deep learning-based approach. However, if a model is specifically trained to address the problem of graph completion, it can achieve higher performance.
• EvoGraph [63]. This is a graph upscaling method, which expands an initial input graph $G_0 = (V_0, E_0)$ in stages by adding $|E_0|$ new edges at each stage. The method considers a set of candidate new nodes in every expansion phase, and adds each new edge by choosing one of its endpoints from the current nodes and the other from the candidate ones. In order to provide a fair comparison between EvoGraph and the other methods, we make a slight change to its upscaling process by terminating it right after the insertion of the $m$-th new node.
Evaluation metric
Similar to [53], we use the Graph Edit Distance (GED) [64] as the evaluation metric to assess the performance of our SCGG method and the baselines. In this regard, if we denote a generated or completed graph by $\hat{G}$ and its corresponding ground truth graph by $G$, the GED between these two graphs, which shows how dissimilar they are, can be formulated as follows:

$$GED(\hat{G}, G) = \min_{(e_1, \dots, e_k) \in \Upsilon(\hat{G}, G)} \sum_{i=1}^{k} c(e_i),$$

where $\Upsilon(\hat{G}, G)$ is the set of all edit paths converting $\hat{G}$ to a graph that is isomorphic to $G$. Moreover, $c(e)$ is the cost of an edit operation $e$, which, in the same way as [53], we set to 1 for all operations. Additionally, as with [53], we normalize the GED computed for each pair of graphs by the average of their sizes. Along with our brief overview of GED, one important point to note is that enumerating all the discussed edit paths requires a combinatorial search procedure with exponential time complexity, and therefore the exact solution to this problem is NP-complete [65]. Hence, we utilize an approximation approach [66] for computing GED scores.
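For readers reproducing the metric, networkx provides both an exact `graph_edit_distance` and an anytime approximation. A normalized variant along the lines described above can be sketched as below; whether the approximation of [66] matches this particular one is not claimed, and graph "size" is taken here to mean the node count:

```python
import networkx as nx

def normalized_ged(G_hat: nx.Graph, G: nx.Graph) -> float:
    """GED with unit edit costs, normalized by the average number of nodes."""
    # optimize_graph_edit_distance yields successively tighter upper bounds;
    # taking the first value gives a fast approximation of the (NP-complete) GED.
    ged = next(nx.optimize_graph_edit_distance(G_hat, G))
    return ged / ((G_hat.number_of_nodes() + G.number_of_nodes()) / 2)

print(normalized_ged(nx.path_graph(4), nx.cycle_graph(4)))  # one missing edge
```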
Experimental setup
In addition to what we explained in Section 4.6 concerning the details of implementing our SCGG model, in this subsection we elaborate on the remaining details of the experimental setup. In this respect, to train our model, we select a random subset of 80% of the graphs in each dataset. A similar approach is also followed to train the other learning-based baselines (i.e., GraphRNN-S and GraphRNN). We then make use of the remaining 20% of the graphs for model testing. More specifically, for each graph in the test set, we perform the following two steps for 10 iterations:

• We randomly choose a number $m$ of nodes from the original test graph and remove these nodes and their associated edges to acquire a subgraph $G_0$.
• We then feed the obtained subgraph to all the competing methods and compare their results to the ground truth graph $G$.

Afterwards, for each graph in the test data, we average the GED scores calculated over the 10 iterations and compute their standard deviation. Finally, for each value of the parameter $m$, we report the average of the GED scores, as well as the average of the standard deviations, computed over the whole test set.
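The evaluation protocol above is straightforward to express in code. The sketch below assumes a hypothetical `method(G0, m)` callable standing in for any competing approach, and reuses the `normalized_ged` helper from the previous sketch:

```python
import random
import statistics
import networkx as nx

def evaluate(method, test_graphs, m: int, iters: int = 10, seed: int = 0):
    """For each test graph: remove m random nodes, let `method` complete the
    subgraph, and score the result against the ground truth with normalized GED."""
    rng = random.Random(seed)
    means, stds = [], []
    for G in test_graphs:
        scores = []
        for _ in range(iters):
            removed = rng.sample(list(G.nodes), m)
            G0 = G.subgraph(n for n in G.nodes if n not in removed).copy()
            G_hat = method(G0, m)                # hypothetical completion call
            scores.append(normalized_ged(G_hat, G))
        means.append(statistics.mean(scores))
        stds.append(statistics.stdev(scores))
    # average GED and average standard deviation over the whole test set
    return statistics.mean(means), statistics.mean(stds)
```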
Results and discussion
In this subsection, the experiments conducted to evaluate the performance of our proposed method against the baselines are presented in three parts. In the first part, we set the maximum possible value for the parameter $m$ such that the competing methods can be evaluated on all datasets. Then, we compare the obtained results and report the gain of SCGG over the baselines. In the second part, we discretely change the value of $m$ from the lowest to the highest possible amount, in such a way that all datasets can be utilized for model testing. Then we study how the performance of the various methods is affected by increasing the value of $m$. Finally, in the third part, we raise $m$ to much higher values and evaluate the efficacy of all approaches on the dataset that offers this possibility.
We first analyze the performance of the different methods for the case where $m = 10$. The reason for choosing this value of $m$ is that, as outlined in Table 2, the minimum number of nodes among the graphs of all datasets is 11. Hence, to construct initial graphs $G_0$, a maximum of 10 nodes can be removed from the original graphs. We report the obtained results in Table 3, from which it is evident that for all datasets, SCGG is the best performing method in terms of the lowest average GED score. More precisely, SCGG obtains an average gain of 51.74% over the other approaches across the experiments conducted on all datasets, with the lowest gain value being 2.65% and the highest 88.15%. Furthermore, in most cases, the standard deviations of our results are smaller than those of the baselines.
Besides, the results in Table 3 reveal that KronEM does not perform well in general; unlike the other methods, its average GED is never lower than 0.52. There can be several reasons for this. First, unlike SCGG, GraphRNN-S, GraphRNN, and to some extent DeepNC, this method is not trained on a dataset of graphs; rather, it processes each graph in the test set separately, i.e., it completes the structure of each partially observed graph based solely on its available part. Another reason for the underperformance of KronEM might be the fact that the Kronecker graphs model generates graphs with $2^k$ nodes. Therefore, when an initial graph $G_0$ is given to KronEM, it increases the number of its nodes to the nearest power of 2. This can lead to a significant difference between the ground truth and the completed graph regarding the number of nodes, thereby raising the GED score.
In addition to what we have discussed so far regarding the results in Table 3, they also indicate that EvoGraph considerably underperforms on the IMDBBINARY and IMDBMULTI datasets. This is because the upscaling process of EvoGraph tends to establish connections with new nodes that have not yet been linked to the graph. In other words, adding new edges is performed with a high priority for connecting new nodes to the already generated graph, meaning that setting up more connections between the previously added nodes and the nodes of the initial graph $G_0$ is carried out with a relatively low priority. Thus, it is not surprising that the graphs produced by EvoGraph generally contain fewer edges than the ones belonging to the IMDBBINARY or IMDBMULTI datasets, which, according to the statistics listed in Table 2, have low edge sparsity. In light of this, we can expect a decrease in the performance of EvoGraph on these two datasets.
In the second part of the experiments, we vary the value of $m$ discretely from 1 to 10 and study the performance of the different methods as a function of the parameter $m$. In this regard, Figures 6, 7, 8, 9, 10, and 11 demonstrate the obtained results on the Grid, IMDBBINARY, IMDBMULTI, Enzymes, NCI1, and Protein datasets, respectively. Moreover, since parts of the results visually overlap, which may affect their readability, we provide the readers with another view of them: a pairwise comparison between our SCGG approach and each of the baselines is depicted in a separate subplot for each dataset, provided in the Supporting information.

Fig. 6 shows the effect of increasing the value of $m$ on the performance of the various methods on the Grid dataset. According to these results, the GED values of most methods (i.e., SCGG, GraphRNN-S, GraphRNN, and EvoGraph) increase almost uniformly with the growth of $m$, which makes sense, since as $m$ increases, the task becomes more difficult. A noteworthy point here is that our proposed SCGG approach performs the best (lowest GED score). In addition, as $m$ takes higher values, the GED of our approach increases with a lower slope. This figure also demonstrates the poor performance of KronEM (both in terms of the relatively high average GED score and the high standard deviations), which is in accordance with what we discussed before. The results also indicate that DeepNC underperforms on the Grid dataset. This may be due to the fact that DeepNC, unlike its competitors, does not conduct its processing steps by taking into account the whole initial graph $G_0$ at once. To put it another way, the other methods receive an initial graph $G_0$ and start adding new nodes on top of it. Meanwhile, DeepNC constructs the graph from scratch, and at each stage, it randomly decides whether to choose the next node from the set of initial graph nodes or add a new one. Therefore, since the graphs of the Grid dataset follow a highly regular structural pattern, not considering the whole information of the initial graphs at once prior to processing can cause DeepNC's performance to drop, by constructing graphs that are substantially different from the expected ones.
Figures 7 and 8 show the results obtained on the IMDBBINARY and IMDBMULTI datasets, respectively. They reveal that for all values of $m$, SCGG outperforms the baselines. It is also evident from these results that EvoGraph achieves the worst performance among the competitors. This, as explained earlier, can be attributed to the tendency of EvoGraph to complete the graph structures by adding a small number of edges to $G_0$, which is in contrast to the non-sparsity of the graphs belonging to these two datasets.
The results on the Enzymes and NCI1 datasets are depicted in Figures 9 and 10, respectively. Since these two datasets share relatively similar statistical properties, as listed in Table 2, somewhat similar results are observed on them. In this regard, our SCGG approach achieves the best performance compared to the other methods. Specifically, in almost all cases it offers the lowest average GED score. Moreover, in the vast majority of circumstances, the standard deviations of the results obtained by our method are lower than those of the other approaches. These results also demonstrate that GraphRNN-S and GraphRNN perform the worst as the value of $m$ increases. This is because these two are general graph generation approaches, which are not specifically designed to solve problems such as structure-conditioned graph generation or graph completion. Therefore, although they have achieved acceptable performance in some cases, it is not surprising that in other cases they perform poorly compared to the baselines. Finally, Fig. 11 depicts the results on the Protein dataset, in which the value of $m$ varies discretely from 1 to 10. The results indicate that the SCGG method obtains a lower GED than the baseline methods in almost all cases, and as the value of $m$ goes up, this performance superiority manifests itself more clearly. In addition, the weak performance of KronEM can be evidently seen in these results, the reasons for which have been discussed in detail previously.
In the third part of the experiments, we study the performance of all competing approaches in the case where a much larger number of nodes is supposed to be added to the initial graphs $G_0$. Accordingly, we conduct the experiments on the Protein dataset, which, due to the large size of its graphs, gives us this opportunity. More precisely, we increase the value of the parameter $m$ from 10 to 90 (i.e., the maximum possible value that does not exceed the minimum number of nodes in this dataset) in steps of 10. The results of these experiments are illustrated in Fig. 12.
As we can see, our method achieves the best results in terms of the lowest GED score for all values of $m$. Furthermore, in the majority of cases, and especially as $m$ takes higher values, our results show smaller standard deviations than those of the other approaches. We can also observe that for the higher values of $m$, for which both the tasks of graph completion and structure-conditioned graph generation become much more challenging, the performance of GraphRNN-S, GraphRNN, and EvoGraph deteriorates rapidly. This can be interpreted in light of the fact that these approaches are not particularly designed to address such tasks. Conversely, as the parameter $m$ rises to its highest values, SCGG, DeepNC, and KronEM offer the best results, respectively. Another perspective of the results in Fig. 12 can be found in S7 Fig, providing the readers with a pairwise comparison of our SCGG model and each of the baselines.
Conclusions
In this work, we have presented SCGG, a novel structure-conditioned graph generation approach that autoregressively generates a graph by adding new nodes and their corresponding edges on top of a given initial substructure $G_0$. Specifically, the architecture of our model consists of a specific graph representation learning network, which is mainly responsible for taking the conditioning substructure into account, and an autoregressive generative model (i.e., a recurrent neural network) that mostly maintains the generation history. We have then employed this model to address the intrinsically hard-to-solve problem of network completion, in which the goal is to complete the structure of a partially observed graph, some of whose nodes are totally unknown.
To demonstrate the superiority of our proposed SCGG model, we have conducted extensive experiments on both synthetic and real-world datasets and compared the performance of our method against state-of-the-art baselines for the task of graph completion. The experimental results illustrate that SCGG outperforms the baselines in terms of the GED score, which indicates that the graphs generated by our model are, on average, the closest to the ground truth graphs. To the best of our knowledge, this is the first time a completely deep learning-based approach addresses the graph completion problem. Potential research pathways to be explored in the future include extending the SCGG model in such a way that it can be used for molecular graph generation, in which the existence of predetermined chemical substructures in the final designed molecules confers specific chemical properties on them. Furthermore, another future research direction is to enhance model scalability, so that SCGG can generate even much larger graphs.
Fig 1. An illustration of the Graph Feature Learning Network and its workflow. (a) An input graph. (b) The Graph Convolutional Network. (c) Continuous representations learned for graph nodes by the GCN. (d) The Transformer network, which takes the node embeddings computed by the GCN as input and outputs new contextualized features of graph nodes. (e) The node features learned by the Transformer network (shown using small squares colored with radial gradients). (f) The final representations of graph nodes, acquired by concatenating the embeddings computed by the GCN and the Transformer network. Here, dashed arrows are drawn to easily track which sub-features a final node feature consists of.
Fig 2. An illustration of the procedure of preparing the training data. (a) An input graph. (b) A number of nodes are selected at random to be further treated as the new nodes. In this picture, $m = 2$ and the selected nodes (i.e., the green and the purple ones) are shown with thick borders. Furthermore, the inter-connections between the new nodes and those in $G_0$ are depicted by blue lines, and the only intra-connection between the new nodes is shown using a red line. (c) An ordering $\pi$ is applied to the nodes in $G_0$. Moreover, another node ordering, denoted by $\sigma$, is applied to the new nodes.
Algorithm 1. Training algorithm of the SCGG model. Input: a dataset of training graphs and the number of new nodes $m$. Output: the learned functions $f_{feat}$, $f_{trans}$, and $f_{out}$. For every training graph, $G_0$ and $G'$ are first built via the preparation procedure; the training iterations then proceed as described in Section 4.4.
Fig 3. An overview of the workflow employed to obtain the required node features in the training phase. (a) An input training graph after applying the preparation procedure shown in Fig. 2. (b) Two versions are made from the main graph. The one on the left will be treated as the initial graph (i.e., $G_0$), and the graph on the right, which we denote in the paper by $G'$, is obtained from the original graph by removing the intra-connection between the new nodes, i.e., the red link. (c) The Graph Feature Learning Network, whose architecture is illustrated in detail in Fig. 1. (d) The features computed for each node of the graphs. The ones around which blue dashed ovals are drawn will be further used by the RNN.
Fig 4. An example of the SCGG model at training time. For each graph node, including those in the initial graph (i.e., $G_0$) and the ones in the set of new nodes (i.e., $\tilde{V}$), the model outputs a probability distribution of link existence between that node and each new node (the probabilistic outputs are depicted by grey squares; the darker the color, the higher the probability). To do this, at each step, a recurrent unit takes the features computed for one of the graph nodes (see Fig. 3), as well as the previous node's true connections and the hidden state of the previous recurrent unit. In this regard, the nodes of $G_0$ (ordered by $\pi$) are first fed into the model, followed by the new nodes (ordered by $\sigma$). Thus, the model learns to first generate the inter-links between the new nodes and those of $G_0$, and then predict the intra-links between the new nodes. The parameters of both the Graph Feature Learning Network and the RNN are updated by minimizing the total loss, obtained by aggregating the step losses $L_i$.
Algorithm 2. Inference algorithm of the SCGG model. Starting from the given initial graph $G_0$, the RNN input is initialized with a start-of-sequence label vector and an initial hidden state $h_0$; the inter-connections between each node of $G_0$ and the set of new nodes are sampled, the graph $G'$ is constructed on top of $G_0$ using the sampled links, and then the intra-connections between the new nodes are sampled to construct the final graph $G$ on top of $G'$.

Fig 5. An example illustrating the SCGG model at inference time. In this example, $m = 3$ and a graph $G_0$ consisting of two nodes is given to the model as the structural condition. At first, the Graph Feature Learning Network computes representations for $G_0$'s nodes, which are then used as part of the RNN input. Next, the RNN proceeds for two steps and outputs the probabilities of the inter-connections between these two nodes and each of the new nodes. All the inter-links are then generated by sampling from the produced probabilities. At this point, the graph $G'$ is constructed based on $G_0$ and the generated links. Next, $G'$ is passed into the Graph Feature Learning Network to calculate the representations of its nodes. In this step, the representations of the new nodes are given to the RNN one by one in order to generate the intra-connections. Finally, the graph $G$ is constructed on top of $G'$ by considering the generated intra-links.
Fig 9. Performance comparison on the Enzymes dataset in terms of GED (lower is better) as a function of the number of new nodes to be added (i.e., $m$).

Fig 10. Performance comparison on the NCI1 dataset in terms of GED (lower is better) as a function of the number of new nodes to be added (i.e., $m$).

Fig 11. Performance comparison on the Protein dataset in terms of GED (lower is better) as a function of the number of new nodes to be added (i.e., $m$).
Fig 12. Performance comparison on the Protein dataset in terms of GED (lower is better) as a function of the parameter $m$, which varies discretely from 10 to 90 in steps of 10.
Table 2. Statistics of the datasets used in the experiments.
Table 3. Comparison of SCGG with its competitors for $m = 10$.
Fig 6. Performance comparison on the Grid dataset in terms of GED (lower is better) as a function of the number of new nodes to be added (i.e., $m$).

Fig 7. Performance comparison on the IMDBBINARY dataset in terms of GED (lower is better) as a function of the number of new nodes to be added (i.e., $m$).

Fig 8. Performance comparison on the IMDBMULTI dataset in terms of GED (lower is better) as a function of the number of new nodes to be added (i.e., $m$).
"year": 2022,
"sha1": "bb17e5058e55a3095f1a4084ee223779010ae7be",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0277887&type=printable",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "bb17e5058e55a3095f1a4084ee223779010ae7be",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Institutions and the Productivity Challenge for European Regions
Europe has witnessed a considerable labour productivity slowdown in recent decades. Many potential explanations have been proposed to address this productivity 'puzzle'. However, the literature has overlooked how the quality of local institutions influences labour productivity. This paper addresses this gap by evaluating how institutional quality affects labour productivity growth, and particularly its determinants, at the regional level during the period 2003-2015. The results indicate that institutional quality influences regions' labour productivity growth both directly, as improvements in institutional quality drive productivity growth, and indirectly, as the short- and long-run returns of human capital and innovation on labour productivity growth are affected by regional variations in institutional quality.
Introduction
Productivity growth in the European Union (EU) has been low and has tended to decline in recent decades. It has been low relative to past performance and relative to other areas of the world. Productivity growth in the 1960s in the EU-15 was a healthy 4.6% per annum (Carone et al., 2006), but it has declined decade on decade since then. Between 2008 and 2016, labour productivity change in the Eurozone was just 0.35% per annum (Draghi, 2016).
The decline in productivity over time has been accompanied by a significant worsening of the EU's position relative to other areas of the world. Since the mid-1990s, productivity growth in the Eurozone has been year-on-year lower than that observed in other advanced economies and, except for 1999, in emerging market economies (Draghi, 2016).
Not all countries in the EU have fared equally. Post-2004 Member States in Central and Eastern Europe still enjoy relatively healthy levels of productivity growth. By contrast, in the former EU-15 productivity has been hovering barely above zero (Marrocu et al., 2013). A growing gap between a more productive and competitive North and a stagnant South is also becoming increasingly evident (Gopinath et al., 2017).
A considerable amount of research has tried to explain the reasons for this productivity 'puzzle', i.e. the general productivity slowdown and the internal differences in productivity paths within Europe, using both a macro (country-level) and a micro (firm-level) perspective. However, productivity differences go beyond what happens at the level of the firm and differ considerably within countries, especially in a period that has witnessed an increasing concentration of advanced economic activity in a small number of economically dynamic areas of Europe (Rosés and Wolf, 2019).
The aim of this paper is to address this gap concerning changes in productivity (defined as output per person employed) and to develop policy recommendations for improvements in productivity at the regional level in Europe. In particular, the analysis focuses on how skill, innovation, and institutional deficiencies in many regions of Europe represent a barrier to productivity growth, and how these deficiencies not only lead to substantial economic waste, but also threaten economic, social, and political stability in a period in which developments in artificial intelligence and the increasing use of robots are widening the European regional productivity gap.
In order to do this, the paper analyses the sources of regional labour productivity growth across 248 regions in 19 EU countries for which full datasets are available between 2003 and 2015. The hypotheses driving the research are, first, that differences in changes in regional productivity across the EU depend on a combination of territorial variations in physical and human capital endowments, as well as a region's innovative capacity, and, second, that the impact of each of these factors on productivity changes in Europe is highly dependent on the quality of institutions in each region. The analysis focuses on short-run labour productivity growth, but provides evidence concerning its long-run dynamics as well. Previous research on how institutional quality affects regional economic outcomes has focused on other dimensions, such as economic growth, innovation, or entrepreneurship (e.g. Nistotskaya et al., 2015; Rodríguez-Pose and Ketterer, 2020). However, it has completely neglected how institutional quality affects regional productivity in the EU, meaning that knowledge of how variations in institutional quality shape the productivity slowdown at a regional level in Europe is extremely limited.
The results highlight that productivity growth across European regions is both directly and indirectly associated with regional institutional quality. First, improvements in institutional quality drive productivity growth. Second, the link between human capital and innovation outputs, on the one hand, and productivity growth, on the other, is far weaker than what could be expected, as variations in local institutional quality strongly mediate the effects of both factors on productivity changes. Regions with low institutional quality encounter strong barriers in translating skills and training into greater productivity in the labour market. Hence, addressing enduring institutional bottlenecks represents a key element for tackling the productivity challenge in Europe.
In order to reach these conclusions, the paper is structured as follows. A short description of the productivity challenge in Europe at the regional level follows this introduction. The third section presents the data, the modelling, and the estimation approach. The empirical results are depicted in the fourth section. The last section presents the conclusions and some policy implications.
The productivity challenge in Europe and its regional dimension
In a Europe that is affected by a large number of challenges, ranging from the increasing competition derived from globalisation and economic integration to ageing and rising environmental risks, labour productivity growth is often regarded as the most feasible way to confront uncertainty and secure the viability of the European social model. As argued by Mokyr (2010), sustained economic growth, especially in advanced economies, requires constant and sustained technological change. Sustained technological change is generally a result of improvements in both physical and human capital, as well as greater investment and progress in innovation capacity (Quatraro, 2009). Yet while Europe has experienced non-negligible improvements in the educational achievements of its population and in investment in physical capital, and while its innovation capacity has continued to grow, productivity has stagnated and, in many parts of the Continent, declined (Decker et al., 2017).
Especially over the last two decades, Europe has grappled with a productivity slowdown which is not just a result of the Great Recession but actually precedes it (Cette et al., 2016). In 1995 most large European economies had productivity levels that were roughly equivalent to those found in the United States (US). France, Germany, Italy and the United Kingdom (UK) were as productive as the US. Spain was somewhat behind, albeit having experienced a rapid period of convergence since the 1950s. Since then, the tide has turned and the European economies are not just losing out to the US, but also to the rest of the world (Cette et al., 2016). This decline has accelerated recently, putting Europe in a difficult position. As Figure 1 shows, since 2003 productivity growth in Europe has stagnated. The Great Recession produced a trough in productivity growth -productivity growth in 2008 was negative- from which Europe has still to recover. The post-2008 rates of productivity growth remained lower than in pre-crisis times, at least until 2015. On the whole, during what van Ark (2016) has called the post-2005 era of the 'new digital economy', labour productivity has recorded a marginally positive -and almost linear- growth trend, while its growth has remained well below what is needed to preserve the competitiveness of the European economy and to maintain its social welfare model. Moreover, the distribution of labour productivity is becoming more unequal. In the 'new digital economy', increases in productivity are more and more concentrated in frontier firms, i.e. those in the top 5% of the distribution (Andrews et al., 2016). And as research and development (R&D) expenditure projects become larger -the top 10% of Scoreboard firms concentrate 71% of R&D expenditure (Veugelers, 2018)- the 'new digital economy' implies that productivity gains are ever more the privilege of a number of superstar firms (Veugelers, 2018).

What determines these differences in productivity growth across regions of Europe? Much research has been conducted trying to solve this productivity 'puzzle' (e.g. Broersma and van Dijk, 2008; Barnett et al., 2014; Martin et al., 2018). Traditional analyses have delved into the basic factors behind productivity in order to explain why productivity has stagnated badly in some areas and economic sectors, while in others it has remained relatively healthy. Pessoa and Van Reenen (2014), for example, when studying the productivity slowdown in the UK, focused on issues related to wage flexibility and the underutilisation of resources. A decline in intangible and telecoms investment and low total factor productivity growth are the main culprits for Goodridge et al. (2013). Lucidi and Kleinknecht (2010) have highlighted labour market flexibility as a key shortcoming for Italian firms' labour productivity growth, while Naastepad (2006) identified the decline in real wage growth as the main cause of the Dutch productivity crisis. Low capital investment in ICT and a lack of capacity to reallocate resources within sectors affected by fast changes in technology have also been the object of attention (Iammarino and Jona-Lasinio, 2015; van Ark, 2016; Calligaris et al., 2018), while Benos and Karagiannis (2016) have put the emphasis on skills and education.
The focus on physical and human capital and innovation to explain the slowdown in productivity is logical. After all, technology, knowledge, and the efficient use of knowledge are the key components behind productivity changes (Acemoglu, 2012). This is particularly relevant for European regions, for which the technology gap with the leader and human capital endowment appear to be the key drivers of productivity growth. Differences in human capital endowment between Italy -with one of the lowest levels of formal skills among the adult population in the EU- and most of the rest of Europe can, for example, explain Italy's productivity growth slowdown. The same applies to lower capital formation in Greece.
However, the impact of diversity in physical and human capital endowment and technological capacity on labour productivity may be amplified by the pervasive differences in institutional quality across regions of Europe. As indicated by North (1990, 1991), economic success depends to a large extent on the quality of institutions. At the country level, increasing evidence shows how heterogeneity in institutional quality explains differences in productivity and economic performance (e.g. Hall and Jones, 1999; Olson et al., 2000), with higher institutional quality magnifying the productivity returns of physical and human capital (Hall et al., 2010) and R&D (Égert, 2017).
As in the case of country-level institutions, local institutions also contribute to creating the conditions and incentives that reduce transaction costs and make the development of economic activity more viable (Rodríguez-Pose, 2013). Institutions are at the heart of innovative activity (Rodríguez-Pose and Di Cataldo, 2015). But the role of institutions in innovation goes beyond the creation of formal bodies, such as the presence of intellectual property rights protection, to encompass more informal arrangements (Mokyr, 2009), such as the building of trust among different economic actors (Putnam et al., 1994). Good institutions also facilitate innovation at all levels, as they contribute to generating both the right environment for scientific breakthroughs and the conditions for the assimilation of innovation (Mokyr, 2009). All these are essential factors for the adoption of innovation by firms and, consequently, for increases in labour productivity. Moreover, effective institutions can have an important indirect role in facilitating the efficient use of physical and human capital and innovation in the marketplace, once again leading to increases in productivity. In this respect, good institutions are at the heart of the trust-based networks that connect researchers to industrialists (Mokyr, 2009) and that make an easier diffusion of new knowledge among economic actors possible (Rodríguez-Pose and Di Cataldo, 2015).
The geographical scale at which institutions are most effective is also changing, especially in the most developed countries. Increasingly, in the rich countries of the world, most public investment is conducted at the sub-national level: 73% of public investment in the OECD, for example, is carried out by sub-national tiers of government (Hulbert and Vammalle, 2014). The regional scale is also one where, often, the cohesiveness and accountability of economic actors tend to be greater, as existing social capital facilitates collaboration and networking (Laursen et al., 2012; Huggins et al., 2012).
In this respect, the regional approach to institutional quality complements country-level analyses by capturing the wide within-country heterogeneity existing in the EU in terms of both factor endowment and productivity trajectories. Yet the question of how local institutions influence local productivity, both directly and indirectly -through their effects on physical and human capital and on local innovation- has, so far, attracted limited attention. This paper covers this gap in our knowledge by assessing the extent to which the productivity challenge at the regional level in the EU depends on more than just improvements in physical and human capital and innovation, evaluating how differences in institutional quality in the places where economic actors operate may represent an asset or a barrier to productivity growth.
Modelling and data
The empirical analysis investigates the determinants of recent regional labour productivity dynamics in the EU. Two interrelated dimensions are covered. First, we examine the role that capital investments, skills, innovation, and institutional factors play in directly shaping short-run regional productivity growth. Second, we examine whether and how institutional quality across the regions of Europe becomes a productivity-enhancing force -or, conversely, an obstacle- by intensifying -or reducing- the returns on productivity of physical and human capital investments and of the innovation effort.
The empirical model proposed for regional productivity growth is derived from the standard neoclassical Solow-Swan growth model (Solow, 1956; Swan, 1956), which specifies regional production according to the following function:

$$Y_{i,t} = F(A_{i,t}, K_{i,t}, H_{i,t}, L_{i,t}) \qquad (1)$$

where output in region $i$ at time $t$ ($Y_{i,t}$) is defined as a function of technology ($A_{i,t}$), physical capital ($K_{i,t}$), human capital ($H_{i,t}$) and labour ($L_{i,t}$).
We hypothesise that local institutional differences -reflecting the quality, efficiency, accountability of governments, the relevance of corruption in a territory, and the state of local bureaucracy and of the judicial systems-shape changes in regional productivity. This implies assuming that productivity growth is constrained by government capability, with the quality of government being a force able to influence both technical and non-technical regional growth parameters.
In order to assess whether this is the case, we define the technology parameter ($A_{i,t}$) as a combination of technological know-how -i.e. productive efficiency ($\Omega_{i,t}$), which, in turn, is determined by the technology adoption choices made by profit-maximising firms- and of the quality of regional institutions ($Q_{i,t}$). Thus, the technology parameter can be specified as a function of productive efficiency and institutional quality as follows:

$$A_{i,t} = \Omega_{i,t} Q_{i,t} \qquad (2)$$

Based on this, we develop the traditional Solow-Swan growth framework considering both physical and human capital aspects à la Mankiw et al. (1992) and complementing the model with regional institutional parameters. Assuming a Cobb-Douglas production function setting with constant returns to scale, the substitution of Equation (2) into Equation (1) yields the following specification:

$$Y_{i,t} = \Omega_{i,t} Q_{i,t} K_{i,t}^{\alpha} H_{i,t}^{\beta} L_{i,t}^{1-\alpha-\beta} \qquad (3)$$

where the term $Q_{i,t}$ denotes the institutional factor, and the term $\Omega_{i,t}$ reflects companies' productive efficiency. Assuming that regions differ in their initial level of technology (Mankiw et al., 1992), we compute steady-state values of human and physical capital per effective unit of labour and, taking natural logarithms, adopt the following structural equation for a region's long-run output per capita levels:

$$\log(y_{i,t}) = \log(\Omega_{i,0}) + \log(Q_{i,0}) - \frac{\alpha+\beta}{1-\alpha-\beta}\log(n_{i,t}+g+\delta) + \frac{\alpha}{1-\alpha-\beta}\log(s_{i,t}) + \frac{\beta}{1-\alpha-\beta}\log(h_{i,t}) \qquad (4)$$

where $y_{i,t}$ denotes labour productivity of region $i$ at time $t$; $s_{i,t}$ represents investments; $h_{i,t}$ denotes human capital; $n_{i,t}$ indicates population growth; $g$ is the exogenous growth rate of technology; and $\delta$ the depreciation rate. These are the factors that, as indicated in the previous section, recent research has brought to the fore as the main productivity-inducing factors. Based on existing theory, the model predicts higher productivity in territories with higher levels of investment, human capital, technological progress, and better institutional conditions. By developing the previous theoretical model empirically and disentangling the investment component into physical capital and investments leading to innovation, the following augmented empirical equation for short-run labour productivity growth is specified:

$$\Delta y_{i,t} = \beta_1 \log(y_{i,t-1}) + \beta_2 \log(k_{i,t-1}) + \beta_3 \log(\Delta n_{i,t-1}+g+\delta) + \beta_4 \log(d_{i,t-1}) + \beta_5 \log(h_{i,t-1}) + \beta_6 \log(p_{i,t-1}) + \beta_7 Q_{i,t-1} + \theta_i + \tau_t + \epsilon_{i,t} \qquad (5)$$

where $\Delta y_{i,t} = \log(y_{i,t}) - \log(y_{i,t-1})$ denotes the annual regional labour productivity growth, with labour productivity ($y_{i,t}$) defined as total Gross Value Added (GVA) over total employment; the regional observational unit $i = 1, \dots, 248$ is defined at the geographic level 2 of the Nomenclature des Unités Territoriales Statistiques (NUTS) adopted by the EU; and the temporal dimension $t$ is defined over the period 2003-2015.
The right-hand side of Equation (5) includes variables for: the initial labour productivity level ($y_{i,t-1}$); physical capital ($k_{i,t-1}$), defined as Gross Fixed Capital Formation (GFCF) as a percentage of Gross Domestic Product (GDP); the population growth rate between times $t-1$ and $t-2$ ($\Delta n_{i,t-1}$), with technological change ($g$) and the depreciation rate ($\delta$) assumed to be constant and equal to 0.02 and 0.05, respectively; population density ($d_{i,t-1}$), defined as population per square kilometre, and aimed at controlling for agglomeration-related forces and capturing regional features related to population distribution and the concentration of economic activities; human capital ($h_{i,t-1}$), measured as the share of the population aged 25-64 years with tertiary education; innovative capacity ($p_{i,t-1}$), defined as the number of patent applications -filed under the European Patent Office (EPO), by inventors' country of residence and priority year-; and the quality of regional institutions ($Q_{i,t-1}$). $\theta_i$ and $\tau_t$ are region and time fixed effects (FE), respectively, while $\epsilon_{i,t}$ denotes the error term.[3]

The variable for regional institutional quality ($Q_{i,t-1}$) is defined using data drawn from the 2013 wave of the European Quality of Government Index (EQGI) dataset provided by the Quality of Government Institute of the University of Gothenburg. The EQGI contains individual-level information derived from a citizen-based survey on the perception and experience of individuals in their own locality with respect to corruption, quality, and impartiality in terms of education, public health care, and law enforcement. The concept of institutional quality encompasses factors such as corruption, rule of law, and the impartiality of the public sector, capturing the capacity of regional governments to provide and administer public services impartially, effectively, and in a non-corrupt manner (Rothstein and Teorell, 2008; Charron et al., 2014, 2015). Hence, the EQGI aims at capturing the 'quality', rather than the 'quantity', of public services delivered by regional governments. In this respect, regional institutional quality is defined based on four main 'pillars', including the degree of corruption of the local public sector, the strength of the rule of law, the level of voice and accountability in terms of corruption-free local elections and local media freedom, and the effectiveness of local governments in providing high-quality services in an impartial manner (Charron et al., 2014).
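As a rough illustration of how the right-hand side of Equation (5) can be assembled in practice, the following Python sketch builds the lagged regressors from a long-format regional panel. It is not the authors' code: the input file and all column names (`gva`, `gfcf_gdp`, `tertiary_share`, etc.) are assumptions.

```python
import numpy as np
import pandas as pd

# Illustrative sketch: one row per region-year, sorted so lags are well defined.
df = pd.read_csv("region_panel.csv")           # hypothetical input file
df = df.sort_values(["region", "year"])
df["lp"] = df["gva"] / df["employment"]        # labour productivity (GVA/employment)

g = df.groupby("region")
df["dlp"] = np.log(df["lp"]) - np.log(g["lp"].shift(1))   # annual productivity growth

df["log_lp_lag"] = np.log(g["lp"].shift(1))                # initial productivity level
df["log_k_lag"] = np.log(g["gfcf_gdp"].shift(1))           # physical capital (GFCF/GDP)
pop_growth = g["population"].shift(1) / g["population"].shift(2) - 1
df["log_ngd_lag"] = np.log(pop_growth + 0.02 + 0.05)       # n + g + delta (g=0.02, delta=0.05)
df["log_dens_lag"] = np.log(g["pop_density"].shift(1))     # population density
df["log_h_lag"] = np.log(g["tertiary_share"].shift(1))     # human capital
df["log_pat_lag"] = np.log(g["patents"].shift(1))          # innovative capacity (EPO patents)
df["q_lag"] = g["inst_quality"].shift(1)                   # institutional quality, in [0, 1]
```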
Following the approach proposed by Charron et al. (2014, 83) and widely employed in the empirical literature analysing regional institutions in the EU (e.g. Rodríguez-Pose and Di Cataldo, 2015; Crescenzi et al., 2016; Ketterer and Rodríguez-Pose, 2018; Ganau and Rodríguez-Pose, 2019), the 16 survey questions of the EQGI dataset have been adapted to, and interpolated with, four of the six institutional 'pillars' defining the country-level Worldwide Governance Indicators (WGI) dataset developed by the World Bank (Kaufmann et al., 2010). Specifically, the four 'pillars' considered are government effectiveness, rule of law, voice and accountability, and control of corruption. This interpolation of the region- and country-specific indicators has a series of advantages. First, it allows us to cover the entire period of analysis. Second, it captures country-specific dimensions -e.g. legal system, immigration, trade, security- which are not considered in the survey-based data. Third, it can overcome potential biases affecting the regional index, induced by the limited number of respondents per region (Charron et al., 2014).

Formally, the region-specific, time-varying institutional quality index ($Q_{i,t-1}$) is constructed as follows (Charron et al., 2014):

$$Q_{i,t-1} = \overline{WGI}_{c,t-1} + \left( R_{i} - \bar{R}_{c} \right) \qquad (6)$$

where $\overline{WGI}_{c,t-1}$ denotes the average of the four mean-standardised institutional 'pillars' from the WGI dataset in country $c$ at time $t-1$; $R_{i}$ represents the region-specific score derived from the corresponding four survey-based institutional 'pillars'; and $\bar{R}_{c}$ denotes the country-specific, population-weighted average of the survey-based regional score.[6] The regional index defined in Equation (6) is subsequently normalised in the interval [0, 1] -from the lowest to the highest level of institutional quality- to obtain the variable depicting regional institutional quality ($Q_{i,t-1}$).[7]

The final sample includes 248 NUTS-2 regions in 19 EU countries. In particular, it covers 96.88% of all sub-national territories of the countries considered in the analysis (see Online Appendix Table A1) and represents 95.65% of GVA, 93.74% of employment, and 93.47% of population of the EU-28 area (see Online Appendix Table A2). Online Appendix Table A3 reports some descriptive statistics of the dependent and explanatory variables entering Equation (5), while Online Appendix Table A4 presents the correlation matrix of the explanatory variables.

[3] Data on GVA, employment, GFCF, GDP, population, surface, population with tertiary education, and patents are drawn from the Regio database provided by Eurostat. Missing values in the regional series for population, human capital, and patents have been filled in by linearly interpolating country-level data provided by Eurostat. According to Eurostat, GFCF is defined as resident producers' acquisitions (less disposals) of fixed assets (e.g. machinery and equipment, vehicles, buildings, structures, computer software) during a given period, plus additions of non-produced assets realised by the productive activity of producer or institutional units.
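A minimal sketch of the interpolation in Equation (6) above could look as follows, assuming a region-year DataFrame whose columns (`wgi_bar`, `eqgi_score`, `population`) are hypothetical stand-ins for the WGI and EQGI inputs described in the text.

```python
import pandas as pd

def institutional_quality(iq: pd.DataFrame) -> pd.DataFrame:
    """Add an `inst_quality` column implementing Equation (6).

    Assumed columns: `country`, `region`, `wgi_bar` (country-year average of
    the four mean-standardised WGI pillars), `eqgi_score` (mean-standardised
    regional survey score) and `population`.
    """
    # population-weighted national average of the survey-based regional score
    nat = (iq.assign(w=iq["eqgi_score"] * iq["population"])
             .groupby("country")
             .apply(lambda x: x["w"].sum() / x["population"].sum())
             .rename("eqgi_nat").reset_index())
    iq = iq.merge(nat, on="country")

    # Equation (6): national WGI level plus the region's deviation from the
    # national survey average
    q = iq["wgi_bar"] + (iq["eqgi_score"] - iq["eqgi_nat"])

    # normalise to [0, 1], from lowest to highest institutional quality
    iq["inst_quality"] = (q - q.min()) / (q.max() - q.min())
    return iq
```

The same function, applied pillar by pillar, would produce the four 'pillar'-specific indexes described in the footnotes.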
Considerable heterogeneity in institutional quality is in evidence both across and within countries (Online Appendix Figure A2). Across Europe, regions with good institutions -mainly located in Scandinavia, the Netherlands, Germany and Austria- coexist with regions with relatively low institutional quality, fundamentally in the south-eastern corner of Europe, from the south of Italy to Greece, Bulgaria and Romania. In between, regions in the remaining post-2004 Member States of the EU (Czechia, Hungary, Poland, Slovakia and Slovenia) also suffer from weak institutional quality. However, the institutional conditions there are better than in the South East of the EU. The final group consists of regions in Belgium, the British Isles, France, the Iberian Peninsula and northern Italy.
Here, the local government quality is either slightly above average (Belgium, France, Ireland, the UK) or right on the average of the sample, as in the case of Portugal and Spain. Although institutional quality has remained, on average, fairly stable over the time period considered (see Online Appendix Table A5), there has been a tendency for cross-country variation in institutional quality to increase between 2003 and 2015 (see Online Appendix Figure A3).
60.89% of the regions in the sample had levels of institutional quality throughout the period of analysis which were above the sample mean (see Online Appendix Figure A4). In particular, the best institutional setting, according to the survey, was found in the Danish region of Midtjylland, while the Bulgarian region of Yugozapaden had the lowest score. All regions in Austria, Denmark, Germany, Ireland, the Netherlands, Sweden and the UK were above the sample mean, while all regions in Bulgaria, Czechia, Hungary, Greece, Poland, Romania and Slovakia were below the mean. The percentage of regions lying above the sample average value in the remainder of countries was 45.5% in Belgium, 54.6% in France, 52.4% in Italy, 40% in Spain, and 62.5% in Portugal.

[6] Charron et al. (2014, 83) classify the 16 survey questions of the EQGI into four 'pillars': government effectiveness; rule of law; voice and accountability; and control of corruption. This allows constructing region-specific indexes reflecting these four 'pillars'. The four mean-standardised indexes are then averaged to obtain the region-specific score for institutional quality ($R_i$).

[7] Time-varying, interpolated variables capturing the four institutional 'pillars' have been constructed following the same approach. Let $\overline{WGI}_{j,c,t-1}$ denote the mean-standardised value for institutional 'pillar' $j$ from the WGI dataset in country $c$ at time $t-1$; let $R_{j,i}$ denote the mean-standardised region-specific score derived from the 'pillar'-specific survey questions; let $\bar{R}_{j,c}$ denote the country-specific, population-weighted average of the survey-based, 'pillar'-specific regional score; then, the region-specific, time-varying index for 'pillar' $j$ is defined as $Q_{j,i,t-1} = \overline{WGI}_{j,c,t-1} + (R_{j,i} - \bar{R}_{j,c})$.
Estimation approach
Equation (5) is estimated through a two-way FE estimator, which mitigates issues related to unobserved heterogeneity and omitted variables. However, potential endogeneity of the institutional quality variable is likely to bias the FE estimation of Equation (5). Endogeneity can emerge for several reasons, among which reverse causality -if the best-performing regions are also those with a better institutional setting, because strong institutions are a consequence of a good economic environment- and measurement error -because the institutional index defined in Equation (6) represents only a partial proxy of what is, by nature, a complex phenomenon that is hard to capture, measure and operationalise.
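Before turning to the instrumental variable strategies, the baseline two-way FE estimation of Equation (5) can be sketched as follows, continuing from the panel-building sketch above. This is one possible implementation using the `linearmodels` package, not the authors' code.

```python
from linearmodels.panel import PanelOLS

# Two-way FE (region and year effects) estimation of Equation (5),
# with standard errors clustered at the regional level.
rhs = ["log_lp_lag", "log_k_lag", "log_ngd_lag",
       "log_dens_lag", "log_h_lag", "log_pat_lag", "q_lag"]
panel = df.set_index(["region", "year"]).dropna(subset=["dlp"] + rhs)

fe = PanelOLS(panel["dlp"], panel[rhs],
              entity_effects=True, time_effects=True)
fe_res = fe.fit(cov_type="clustered", cluster_entity=True)
print(fe_res.summary)
```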
The empirical literature has suggested correcting for the potential endogeneity of institutional variables with historical and geographic instrumental variables (IV) (e.g. Acemoglu et al., 2001; Glaeser et al., 2004; Rodrik et al., 2004; Rodríguez-Pose and Di Cataldo, 2015; Ketterer and Rodríguez-Pose, 2018). Along these lines, the proposed identification strategy follows Buggle and Durante (2017), who analyse the historical and enduring relationship between economic risk and social cooperation and find a positive association between climate variability in the pre-industrialisation period and current social trust in European regions. Drawing on this evidence, the proposed identification strategy exploits regional variations in precipitation variability during the growing season in the pre-industrialisation period (1500-1750) to instrument current levels of regional institutional quality. The rationale of the identification strategy relies on the idea that high levels of weather risk -captured by precipitation variability during the growing season- at a time when individuals' subsistence was based on agricultural production, called for the development of efficient and effective local institutions able to cope with weather-related economic risks. Under the new institutionalist idea of path dependency (North, 1990), current institutional frameworks are the result of, and keep traces of, past (formal and informal) institutions. As institutions are historically and geographically rooted, current regional institutional quality is expected to reflect the quality of past regional institutional settings. In addition, the validity of the identification strategy is guaranteed by the fact that climate variability in the agriculture-based pre-industrialised Europe is likely to be exogenous to labour productivity growth in recent times.
The region-specific variable capturing precipitation variability in the pre-industrialisation period is defined using reconstructed paleoclimatic data available for 1500-1750. Paleoclimatic data are drawn from the European Seasonal Temperature and Precipitation Reconstruction (ESTPR) database, which provides grid cells of 0.5° width, each containing yearly seasonal observations for 1500-2000 (see Luterbacher et al. (2004) and Pauling et al. (2006) for details). Two alternative IVs are constructed to capture historical precipitation variability. The first IV is defined as a time-varying variable. It is constructed by considering precipitation variability over 20-year intervals in the pre-industrialisation period (1500-1740), making it straightforward to instrument the time-varying institutional quality variable within a two-way FE estimation approach. The second IV is defined as a time-invariant variable. It is built using precipitation variability over the entire pre-industrialisation period. The rationale for also considering a time-invariant version of the IV is that climate-related phenomena may have gradually changed at a time free from industrial production and human-related pollution.

Formally, let $P$ denote precipitation; let $s$ denote the season (winter, spring, summer, autumn); let $g$ denote the grid cell, with $g \in r$ and $r$ representing the NUTS-2 region; and let $t$ indicate the year, with $t = 1500, \dots, 1750$. This leads to construct the variable capturing precipitation variability during the growing season as follows. A season-specific inter-annual standard deviation measure is calculated at the cell level for $P_{g,s,t}$ over either 20-year intervals between 1500 and 1740, or all years between 1500 and 1750, before averaging the cell-level standard deviation measures over all cells within a region in order to obtain region- and season-specific measures of precipitation variability. Then, the region- and season-specific inter-annual standard deviation measures defined over either 20-year intervals between 1500 and 1740, or the entire period 1500-1750, are averaged with respect to the growing seasons, identified with spring and summer. Thus, the IVs capture the mean variability during the growing season averaged over either 20-year intervals between 1500 and 1740, or the years from 1500 to 1750, i.e. from the first available year of information to what can be considered as the starting decade of the Industrial Revolution.
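The following sketch illustrates this construction, assuming the ESTPR grid has been extracted into a DataFrame with one row per cell, season and year; the layout and column names are assumptions, not the authors' code.

```python
import pandas as pd

GROWING = ["spring", "summer"]

def precip_variability(paleo: pd.DataFrame, start=1500, end=1750) -> pd.Series:
    """Region-level mean growing-season precipitation variability.

    `paleo` is assumed to have columns: region, cell, season, year, precip.
    """
    d = paleo[(paleo["year"] >= start) & (paleo["year"] <= end)]
    # inter-annual standard deviation per grid cell and season
    cell_sd = d.groupby(["region", "cell", "season"])["precip"].std()
    # average over all cells within a region
    region_sd = cell_sd.groupby(["region", "season"]).mean()
    # average over the growing seasons (spring and summer)
    return region_sd.unstack("season")[GROWING].mean(axis=1)

# time-invariant IV over the whole pre-industrialisation period
iv_const = precip_variability(paleo, 1500, 1750)

# time-varying IV over 20-year intervals between 1500 and 1740
iv_tv = pd.concat({t0: precip_variability(paleo, t0, t0 + 19)
                   for t0 in range(1500, 1740, 20)})
```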
On the one hand, the time-varying IV defined over 20-year intervals between 1500 and 1740 allows relying on a two-way FE-IV estimation approach, instrumenting the time-varying institutional quality variable with the time-varying IV. On the other, the time-invariant IV makes a two-way FE estimation not feasible. In order to overcome this issue, the time-invariant IV is employed within a two-stage model where the first-stage equation is estimated using a Correlated Random Effects (CRE) approach (Mundlak, 1978), while the second-stage equation is estimated by relying on a two-way FE estimator. The CRE estimator allows controlling for region-specific effects by including region-specific mean values of all the time-varying variables entering the model, while simultaneously including time-invariant variables. Thus, the first-stage equation is specified having regional institutional quality as the dependent variable and the time-invariant IV as an additional exogenous explanatory variable, together with the region-specific mean values of the time-varying variables entering Equation (5), plus time FEs. Then, the second-stage equation is specified using the estimated (time-varying) predicted values of institutional quality from the first-stage equation in place of the observed institutional quality variable as an explanatory variable for labour productivity growth. It is estimated by relying on a two-way FE estimation approach.
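A stylised version of this two-step procedure is sketched below, continuing from the earlier sketches and assuming the time-invariant IV has been merged into the panel as `iv_const`; the Mundlak device is implemented by adding region-specific means of the time-varying regressors to the first stage.

```python
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import PanelOLS

# First stage (CRE): institutional quality on the time-invariant IV, the
# time-varying regressors and their region-specific means, plus year dummies.
tv_vars = ["log_lp_lag", "log_k_lag", "log_ngd_lag",
           "log_dens_lag", "log_h_lag", "log_pat_lag"]
d = panel.reset_index().dropna(subset=["dlp", "q_lag", "iv_const"] + tv_vars)
means = d.groupby("region")[tv_vars].transform("mean").add_suffix("_mean")
year_d = pd.get_dummies(d["year"], prefix="yr", drop_first=True, dtype=float)

X1 = sm.add_constant(pd.concat([d[["iv_const"]], d[tv_vars], means, year_d], axis=1))
first = sm.OLS(d["q_lag"], X1).fit()
d["q_hat"] = first.fittedvalues

# Second stage: two-way FE with the predicted institutional quality; the
# naive standard errors must then be corrected by bootstrapping (see below).
dd = d.set_index(["region", "year"])
second = PanelOLS(dd["dlp"], dd[tv_vars + ["q_hat"]],
                  entity_effects=True, time_effects=True)
print(second.fit(cov_type="clustered", cluster_entity=True).summary)
```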
For the sake of comparability, both the two-way FE-IV estimation approach relying on the time-varying excluded IV and the two-stage equation system estimated using the time-invariant excluded IV are implemented by applying a bootstrapping procedure to correct the standard errors. The errors are clustered at the regional level.
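The bootstrap correction can be sketched as a pairs cluster bootstrap over regions; `estimate` here is a hypothetical wrapper that reruns both stages on a resampled panel and returns the second-stage coefficient vector.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
regions = d["region"].unique()

draws = []
for b in range(1000):                       # 1,000 bootstrap replications
    sampled = rng.choice(regions, size=len(regions), replace=True)
    # relabel duplicated regions so the FE estimator treats them as distinct
    parts = [d[d["region"] == r].assign(region=f"{r}_{j}")
             for j, r in enumerate(sampled)]
    draws.append(estimate(pd.concat(parts, ignore_index=True)))

boot_se = np.vstack(draws).std(axis=0, ddof=1)   # bootstrapped standard errors
```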
Baseline results
The two-way FE estimation of Equation (5) allows examining the short-run relationship between the endowments in physical and human capital, the level of innovation, and institutional quality in each region, on the one hand, and changes in productivity, on the other (see Table 1). We also assess how institutional quality contributes to shape the returns on short-run labour productivity growth of the other three factors by augmenting Equation (5) with a series of interaction terms between the institutional quality variable and the variables for physical capital, human capital, and innovative capacity. Specifications (1) to (7) report the results related to a series of modified versions of Equation (5) aimed at testing the consistency of the explanatory variables, while specification (8) refers to the complete model, including all explanatory variables.
The results suggest that regional convergence in labour productivity is taking place across Europe, as the coefficient of the beginning-of-period productivity variable is negative and statistically significant. As expected, labour productivity growth is positively associated with investments in physical capital. This result seems to be fundamentally driven by productivity growth in central and eastern European regions (Bijsterbosch and Kolasa, 2010). A negative association, by contrast, emerges with human capital. This negative connection can be explained by the incapacity of labour markets in many European regions to transform skills into jobs, productivity and growth. Problems linked to low educational attainment, low quality of education, a severe mismatch between educational supply and labour demand, and, last but not least, overeducation may determine the weak returns of human capital on labour productivity changes across regions (Rodríguez-Pose and Vilalta-Bufí, 2005; Leuven and Oosterbeek, 2011). Moreover, tight labour market regulations restricting the entry of younger and more skilled workers may also drive this result. The coefficients for population growth, population density, and innovative capacity are negligible, while overall institutional quality at the regional level is positively associated with labour productivity growth. It is estimated that a unit change in institutional quality can lead to a 19.5% increase in short-run labour productivity growth.
Specification (9) in Table 1 presents the results of an augmented version of Equation (5), which dwells on the more indirect effects of local institutional quality on labour productivity change at the regional level in Europe. The aim of this exercise is to test whether and how regional institutions shape the returns of other productivity-driving factors on labour productivity growth.

[Table 2. Estimated marginal effects of physical capital, human capital and innovative capacity at selected percentiles of the institutional quality distribution. Notes: * p < 0.1; ** p < 0.05; *** p < 0.01; **** p < 0.001. Robust standard errors in parentheses. The estimated marginal effects refer to specification (9) in Table 1.]
The use of interaction terms yields crucial insights about how institutional quality shapes the impact of other factors on labour productivity. The estimated effects of interacting local institutional quality with physical capital, human capital and innovative capacity, respectively, suggest that the quality of regional institutions shapes to a considerable extent the returns of these factors on labour productivity growth. These impacts are expanded in Table 2, which presents the estimated returns of physical and human capital and innovative capacity at selected percentiles of the distribution of the institutional quality variable. On the one hand, the positive association between physical capital and labour productivity growth decreases as the quality of institutions in a region increases, up to a point at which increases in physical capital become negative -although only marginally statistically significant- for labour productivity growth (roughly for the regions in the top 1% of the institutional quality distribution). In accordance with the neo-classical growth model (Solow, 1956), physical capital accumulation drives the labour productivity of less developed territories, which are also those typically characterised by low-quality and still-evolving institutional settings. On the other hand, better regional institutions boost the impact of both human capital and innovative capacity on labour productivity growth. Not only does the estimated negative effect of human capital decrease as the level of institutional quality increases, up to a point at which it becomes negligible, but the estimated negligible effect of innovative capacity also becomes positive and statistically significant for very high levels of institutional quality.[11] Therefore, the quality of regional institutions affects changes in labour productivity both directly and indirectly: the direct association is positive -better local institutions promote increases in labour productivity-, while the indirect association depends on the productivity factor considered, with more efficient institutions increasing the returns of human capital endowment and regional innovation capacity. Regional institutions thus emerge as a key factor behind the growth dynamics of regions in the EU and as an essential element to solve the European productivity challenge.[12]
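The marginal effects reported in Table 2 follow mechanically from the interaction specification: with a term such as $\beta_5 \log(h) + \beta_{5Q} \log(h) \cdot Q$ in the augmented Equation (5), the return of human capital is $\beta_5 + \beta_{5Q} Q$, evaluated at chosen values of $Q$. The sketch below illustrates the calculation with placeholder coefficients; the numbers are arbitrary illustrations, not the paper's estimates.

```python
import numpy as np

# Placeholder coefficients (NOT the paper's estimates): main effect and
# interaction-with-Q for each productivity factor.
b_k, b_kq = 0.05, -0.06      # physical capital
b_h, b_hq = -0.04, 0.05      # human capital
b_p, b_pq = 0.00, 0.01       # innovative capacity

# Institutional quality is normalised to [0, 1]; the paper evaluates the
# effects at percentiles of its empirical distribution rather than a grid.
for q in np.linspace(0.0, 1.0, 5):
    print(f"Q={q:.2f}: capital {b_k + b_kq * q:+.3f}, "
          f"human {b_h + b_hq * q:+.3f}, innovation {b_p + b_pq * q:+.3f}")
```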
Dealing with endogeneity
As previously discussed, the estimated institutional quality-labour productivity growth relationship could be biased by the potential endogeneity of the institutional quality variable. Therefore, the robustness of the results reported in specifications (8) and (9) in Table 1 is tested by means of an IV approach. Specifications (1) and (2) in Table 3 report the results obtained through a two-way FE-IV estimator, employing the time-varying IV capturing precipitation variability in the growing season over 20-year intervals from 1500 to 1740. Specifications (3) and (4) show the results of estimating the two-stage equation system -based on a first-stage CRE estimator and a second-stage two-way FE estimator. We rely on a time-invariant IV capturing precipitation variability in the growing season over the entire pre-industrialisation period 1500-1750.

[Table 3. IV estimates of the short-run model. Notes: * p < 0.1; ** p < 0.05; *** p < 0.01; **** p < 0.001. Robust standard errors (bootstrapped via 1,000 replications, and clustered at the regional level) in parentheses. Predicted values of the institutional quality variable and its interaction terms obtained from the first-stage estimations are included in the second-stage equations, rather than the observed values. The first-stage estimates for specifications (1) and (2) are obtained through a two-way FE estimation approach, where a time-varying excluded IV is specified to capture regional precipitation variability in the growing season over 20-year intervals during the pre-industrialisation period 1500-1740. The first-stage estimates for specifications (3) and (4) are obtained through a CRE estimation approach which includes region-specific mean values of time-varying variables and interaction terms, as well as year dummies, and where the excluded IV is specified as time-invariant to capture regional precipitation variability in the growing season during the entire pre-industrialisation period 1500-1750. The interaction terms entering specifications (2) and (4) are instrumented using the interaction between the excluded IV and each of the three variables for physical capital, human capital, and innovative capacity.]

[11] It is worth noting that the identification of the institutional quality variable -as defined in Equation (6)- in the two-way FE estimations presented in specifications (5) to (8) in Table 1 exploits only time variations from the country-level component of the variable, due to the inclusion of region FEs. It still exploits cross-regional variations from the region-specific component of the variable in the two-way FE estimation presented in specification (9), where the institutional quality variable is interacted with the three labour productivity growth determinants. The robustness of the results reported in specifications (8) and (9) in Table 1 has been tested through an Ordinary Least Squares (OLS) estimator which controls for time FEs, but not for region FEs. This relaxes identification issues on the institutional quality variable related to the inclusion of region FEs. The results of this exercise are reported in the Online Appendix Tables A6 and A7. They confirm those reported in Table 1. A second potential issue affecting the two-way FE estimation of Equation (5) and its augmented version including interaction terms concerns Nickell's (1981) bias. As indicated by Islam (1995), the inclusion of region FEs in a model where the initial productivity level is added as an explanatory variable makes the panel data specification a dynamic model. This makes the FE formulation no longer consistent with a relatively small number of observational units. Following Elhorst et al. (2010), we have dealt with the potential Nickell's (1981) bias through a two-step difference Generalised Method of Moments (GMM) estimator that allows removing region FEs through first-differencing and instrumenting the explanatory variables using internally generated GMM-type instruments (Arellano and Bond, 1991). The results of this exercise are reported in the Online Appendix Tables A8 and A9. They, again, confirm those reported in Table 1. Third, we have also relied on a CRE estimator using a time-invariant institutional quality variable defined without interpolating the region-specific component with the country-level, time-varying component, to test the reliability of our approach in constructing the institutional quality variable. The results of this exercise are reported in the Online Appendix Tables A10 and A11 and confirm those presented in Table 1. Finally, we have tested the robustness of the results presented in specification (9) in Table 1 by considering the three interaction terms separately. The two-way FE estimates are reported in the Online Appendix Tables A12 and A13. They confirm those reported in Table 1.

[12] Two further analyses have been performed to provide a more complete picture of the forces driving the short-run dynamics of labour productivity. First, Equation (5) has been modified considering the four 'pillars' for government effectiveness, rule of law, voice and accountability, and control of corruption. Online Appendix Table A14 reports the results of the two-way FE estimates obtained by analysing the institutional 'pillars', both individually and together. When considered all together, voice and accountability and control of corruption show positive and significant coefficients, while, by contrast, government effectiveness and rule of law show negative but insignificant coefficients. The second additional analysis examines annual changes in -rather than levels of- institutional quality and the four 'pillars'. Growth rates are defined as simultaneous with respect to the dependent variable for labour productivity growth. Despite this change, the two-way FE estimates reported in the Online Appendix Table A15 confirm the majority of the previous findings: a) changes in institutional quality are positively associated with changes in labour productivity; b) changes in all institutional dimensions but government effectiveness are positively connected to labour productivity growth. In brief, regions in Europe that managed to improve local institutions the most experienced the greatest rises in labour productivity (Rodríguez-Pose and Ketterer, 2020).
The first-stage F statistics on the excluded IVs are higher than the conservative cut-off value of 10, suggesting that weather-related economic risk in the pre-industrialisation period represents a good predictor of current institutional quality in EU regions. The second-stage IV estimates confirm the direct positive short-run effect of institutional quality on labour productivity growth, as well as the indirect role played by institutional quality in shaping the relationship between physical and human capital and innovative capacity, on the one hand, and short-run labour productivity growth, on the other.
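The relevance check behind these first-stage F statistics can be sketched as follows, continuing from the objects built in the earlier sketches (`d`, `tv_vars`, `means`, `year_d`): regress the endogenous institutional quality variable on the excluded IV plus the included controls, then test the excluded instrument.

```python
import pandas as pd
import statsmodels.api as sm

# First-stage regression of institutional quality on the excluded IV and the
# included exogenous controls, with errors clustered at the regional level.
X = sm.add_constant(pd.concat([d[["iv_const"]], d[tv_vars], means, year_d], axis=1))
first_stage = sm.OLS(d["q_lag"], X).fit(
    cov_type="cluster", cov_kwds={"groups": d["region"]})

# F test on the excluded instrument; compare with the rule-of-thumb cut-off of 10
print(first_stage.f_test("iv_const = 0"))
```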
The estimated marginal effects of physical capital, human capital and innovative capacity at the different levels of institutional quality -presented in Table 4-generally confirm those of Table 2.
The results reveal that physical capital is a short-run labour productivity growth-enhancing factor only in those regions characterised by low-quality institutions, while its growth returns disappear in regions with high-quality institutions. The short-run returns of both human capital and innovative capacity on labour productivity growth are, in part, driven by institutional quality, such that their estimated effects are negative or negligible at low levels of institutional quality, but become positive and statistically significant at high levels of institutional quality. Overall, these results confirm that regional institutions have both a positive direct effect on labour productivity growth and a positive indirect effect by inducing positive returns of human capital and innovative capacity on productivity growth, at least in those regions which are characterised by a strong institutional environment.

[Table 4. Estimated marginal effects at selected levels of institutional quality. Notes: * p < 0.1; ** p < 0.05; *** p < 0.01; **** p < 0.001. Robust standard errors (bootstrapped via 1,000 replications, and clustered at the regional level) in parentheses. The estimated marginal effects refer to specifications (2) and (4) in Table 3.]
Long-run analysis
We complement the short-run evidence presented in the previous two sub-sections with an analysis of the long-run relationship between institutional quality and labour productivity growth. To this aim, Equation (5) has been modified within a cross-sectional framework as follows:

$$\Delta y_{i} = \beta_1 \log(y_{i,2003}) + \beta_2 \log(k_{i,2003}) + \beta_3 \log(\Delta n_{i}+g+\delta) + \beta_4 \log(d_{i,2003}) + \beta_5 \log(h_{i,2003}) + \beta_6 \log(p_{i,2003}) + \beta_7 Q_{i} + \gamma' G_{i} + \mu_c + \epsilon_{i} \qquad (7)$$

where $\Delta y_{i} = \log(y_{i,2015}) - \log(y_{i,2003})$ denotes long-run labour productivity growth. The right-hand side of Equation (7) includes the initial (2003) level of the variables for labour productivity ($y_{i,2003}$), physical capital ($k_{i,2003}$), population density ($d_{i,2003}$), human capital ($h_{i,2003}$), and innovative capacity ($p_{i,2003}$), as well as the growth rate of population between 2003 and 2015 ($\Delta n_{i}$), with technological change ($g$) and the depreciation rate ($\delta$) defined as before. The equation also includes the vector $G_{i}$ of region-specific geographic controls, namely: distance to Brussels, to capture the relative location of a region with respect to the geographic 'core' of the EU; land surface, to capture the absolute size of a region; and the latitude and longitude coordinates of the region's centroid, to capture the location of a region. The term $\mu_c$ represents a vector of country dummies, while $\epsilon_{i}$ is the error term.
Two approaches have been considered in defining the institutional quality variable ($Q_{i}$). First, Equation (7) has been estimated using the non-interpolated variable for institutional quality normalised in the interval [0, 1], i.e. the institutional quality variable constructed using data drawn from the 2013 wave of the EQGI dataset without further interpolation with the country-specific data derived from the WGI dataset. Second, it has been estimated using the year 2003 value of the interpolated institutional quality variable defined in Equation (6) and normalised in the interval [0, 1]. Equation (7) has been estimated using Two-Stage Least Squares (TSLS), with the institutional quality variable instrumented using the IV capturing precipitation variability in the growing season over the entire pre-industrialisation period 1500-1750.

Table 5 displays the results of the TSLS estimation of Equation (7) and of its augmented version, which adds the interaction terms between the institutional quality variable and the variables for physical capital, human capital, and innovative capacity. The non-interpolated institutional quality variable is considered in specifications (1) and (2), while the 2003 value of the interpolated institutional quality variable is considered in specifications (3) and (4). The results of the first-stage F statistics are higher than the cut-off value of 10, suggesting a good predictive power of the IV. The second-stage results are consistent across the two operationalisation choices concerning the institutional quality variable. Looking at specifications (1) and (3), both physical and human capital are positive determinants of long-run labour productivity growth, while the variable for innovative capacity has a positive but statistically insignificant coefficient. Overall, the results confirm the positive association between institutional quality and labour productivity growth.

[Table 5. TSLS estimates of the long-run model. Notes: * p < 0.1; ** p < 0.05; *** p < 0.01; **** p < 0.001. Robust standard errors in parentheses. The dependent variable captures the regional growth rate between the years 2003 and 2015. The institutional quality variable included in specifications (1) and (2) is defined using the survey data drawn from the 2013 wave of the EQGI dataset without further interpolation with the country-level data drawn from the WGI dataset. The institutional quality variable included in specifications (3) and (4) is the year 2003 value of the interpolated variable defined in Equation (6). All the other explanatory variables refer to the year 2003. The set of region-specific geographic controls includes: distance to Brussels; land surface; latitude and longitude of the region's centroid. The excluded IV captures regional precipitation variability in the growing season during the pre-industrialisation period 1500-1750.]

Table 6 complements Table 5 by presenting the estimated marginal effects of the variables for physical and human capital and innovative capacity on long-run labour productivity growth at selected levels of institutional quality. As a whole, the results concerning the long-run analysis confirm the short-run findings. On the one hand, physical capital seems to matter only in regions with low-quality institutions; on the other, improvements in institutional quality make human capital and innovative capacity positive determinants of long-run labour productivity growth.

[Table 6. Estimated marginal effects on long-run productivity growth. Notes: * p < 0.1; ** p < 0.05; *** p < 0.01; **** p < 0.001. Robust standard errors in parentheses. The estimated marginal effects refer to specifications (2) and (4) in Table 5.]
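A sketch of the long-run TSLS estimation with `linearmodels` is given below. `cs` is a hypothetical one-row-per-region DataFrame holding the 2003 levels, the 2003-2015 growth rate, the geographic controls, the institutional quality variable and the time-invariant IV; all column names are assumptions, not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm
from linearmodels.iv import IV2SLS

# Included exogenous regressors of Equation (7), plus country dummies.
controls = ["log_lp_2003", "log_k_2003", "log_ngd", "log_dens_2003",
            "log_h_2003", "log_pat_2003", "dist_brussels", "surface",
            "lat", "lon"]
country_fe = pd.get_dummies(cs["country"], prefix="c", drop_first=True, dtype=float)
exog = sm.add_constant(pd.concat([cs[controls], country_fe], axis=1))

# TSLS: institutional quality instrumented with historical precipitation
# variability during the growing season.
tsls = IV2SLS(dependent=cs["dlp_0315"], exog=exog,
              endog=cs["inst_quality"], instruments=cs["iv_const"])
res = tsls.fit(cov_type="robust")
print(res.summary)
print(res.first_stage)   # reports the first-stage diagnostics, including the F statistic
```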
Conclusions
Europe has been facing an important productivity challenge in recent decades. Its productivity growth has fallen below that of other areas of the world, and this slowdown is affecting its capacity to compete on the broader world stage and its position at the economic and political vanguard. This productivity challenge, however, does not affect all countries and regions in Europe in the same way. Low productivity growth has been far more pervasive in countries like Italy or Greece, for reasons that range from structural factors, such as ageing or rigid labour markets, to a greater vulnerability of many of their economic sectors to international competition. Low levels of institutional quality have, however, also possibly contributed to low labour productivity. Low productivity growth has been in evidence in many of the regions with the lowest quality of institutions in Europe. Hence, poor local institutions can stunt productivity growth and become a fundamental barrier to translating local human capital and innovation potential into greater productivity.
Yet, despite the evidence of a link between weak institutions and the productivity 'puzzle', how and to what extent local institutions shape changes in productivity has been absent from most of the empirical productivity analysis. This paper has addressed this gap by examining the direct and indirect role played by institutional quality in regional productivity change across regions of Europe during the period between 2003 and 2015.
The results of the analysis have shown that local institutions across Europe shape both short- and long-run changes in productivity to a considerable extent. In the first place, good local institutions have enhanced productivity growth in those regions with the best institutional quality. But the effect is not only direct. The returns of physical and human capital and of local innovative capacity for productivity are also greatly conditioned by local institutional quality. Good government and good local institutions can considerably enhance the impact of human capital and local innovative capacity on labour productivity growth.
Hence, institutional quality is at the heart of the productivity challenge in Europe. No solution to the low productivity growth conundrum can be achieved without a significant improvement in the quality of local and regional institutions, especially in those areas of Europe where lack of transparency and accountability, high levels of corruption, or poor governance performance drag economic activity and innovation down. As we have shown, relatively marginal improvements in institutional quality can directly lift barriers to changes in productivity, as well as eliminate many of the factors that have thwarted reaping greater returns from investments in human capital and innovation in the marketplace. Hence, addressing the productivity challenge requires, among others, tackling the institutional problems of Europe.

[Figure A3. Yearly cross-country coefficient of variation in institutional quality.]
"year": 2021,
"sha1": "5112529bf8d56d27ea87db59bce6967ebe4e8de1",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/joeg/advance-article-pdf/doi/10.1093/jeg/lbab003/40486695/lbab003.pdf",
"oa_status": "HYBRID",
"pdf_src": "ElsevierPush",
"pdf_hash": "da113cdd303c807446dc243ebe265048a8f39e29",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
The Great Brain Books, Revisited
In 1999, Cerebrum published a list of books about the brain, guiding regular readers to "the great books, past and present, that capture the unfolding story of the brain and how brain research is changing our ideas about memory and emotion, life span and language, neurological disorders and psychiatric syndromes." Eleven years later, the need for such a list is even greater - more than 30,000 brain-related books in English are in print or will soon be published, according to Bowker's Books in Print. What is a reader to do?
We decided to update our list, with help from our readers and our science experts. To draw out the best current and classic books from the crowded field, we selected 10 categories, based on our earlier list. Then we opened up a poll to Dana.org readers, asking them to nominate their favorites in as many of the categories as they wished.
More than 70 of you responded-thank you! Your choices were forwarded to Dana Alliance members, who were asked to send in their top choices. After we heard from more than 30 members, we made a final tally, which was reviewed by Cerebrum's scientific advisors.
Here are the top three or four books in each category listed in order of the number of votes they received, along with some runners-up. Many books fit in multiple categories; if a book is listed in a category that doesn't interest you, don't let that stop you from taking a look at it-it may have been pigeonholed to fit the format. Of course, there are many other books in each category worth reading; this list is just a starting point. We hope it will lead you to enjoy some great science writing.
General Books About the Brain
The Brain that Changes Itself: Stories of Personal Triumph from the Frontiers of Brain Science By Norman Doidge. Viking, 2007.
This book was far and away the most popular choice of Dana.org readers. Exploring the plasticity of the brain-something that was only recently proven to exist-Dr. Doidge uses case studies to discuss how people remember, recovery from injury, love, and learn.
Rhythms of the Brain By Gyorgy Buzsaki. Oxford University Press, 2006.
How did our brains evolve, and what makes them work? Dr. Buzsaki looks into how coordinated neuronal firing developed and hypothesizes that it plays a large role in the brain's many functions, including information processing and retrieval.

The Other Brain
By R. Douglas Fields. Simon & Schuster, 2009.

Until recently, neurons were the brain cells that got all the attention, but now it's clear that glial cells are more than just glue that holds the brain together. Dr. Fields looks at the many functions of glia that are being uncovered and the scientific breakthroughs that could come from a better understanding of these cells.

The Man Who Mistook His Wife for a Hat
By Oliver Sacks.

Dr. Sacks presents case studies of patients with neurological disorders including visual agnosia, aphasia, and Korsakoff's syndrome (an inability to form new memories).
The Number Sense: How the Mind Creates Mathematics
By Stanislas Dehaene. Oxford University Press, 1999.

Our brain is wired for mathematics from birth, writes Dr. Dehaene. Imaging technologies have allowed researchers to begin identifying the regions of the brain responsible for computation. The book also explores the invention of number systems and whether people have an innate number sense.
Phantoms in the Brain: Probing the Mysteries of the Human Mind
By V. S. Ramachandran and Sandra Blakeslee. William Morrow, 1998.

Dr. Ramachandran and Ms. Blakeslee investigate neurological oddities, from hallucinations to phantom limbs. Such strange cases lead to more-general conclusions about the brain's circuitry and plasticity.
Brave New Brain: Conquering Mental Illness in the Era of the Genome
By Nancy Andreasen. Oxford University Press, 2001, 2004.

Dr. Andreasen discusses the causes and effects of schizophrenia, manic depression, anxiety disorders, and dementia in the context of the overlapping fields of genetics and neurobiology. The intersection of these fields could improve our understanding of the mechanisms behind the disorders and lead to new methods of treatment. Also:
My Stroke of Insight: A Brain Scientist's Personal Journey
By Jill Bolte Taylor. Penguin, 2009.
Memoirs and Personal Experience
In Search of Memory: The Emergence of a New Science of Mind
By Eric R. Kandel. W. W. Norton, 2007.

In Search of Memory was the most popular selection of the responding Dana Alliance members. Dr. Kandel reflects on his five decades of research - including his Nobel Prize-winning work on the role of synapses in learning and memory function - and his family's escape from Nazi Germany.
The Diving Bell and the Butterfly: A Memoir of Life in Death
By Jean-Dominique Bauby. Vintage, 1998.
At the age of 44, Mr. Bauby, then editor in chief of Elle magazine, suffered a stroke that left him a victim of locked-in syndrome, able to move only one eyelid. In this astonishing book, painstakingly dictated one letter at a time, he looks back on his life and details the realities of being trapped inside his body.

A Primate's Memoir: A Neuroscientist's Unconventional Life Among the Baboons
By Robert M. Sapolsky. Scribner, 2001.

For more than two decades, Dr. Sapolsky studied the social behavior of baboons in Kenya. Here, he chronicles his field studies, looking not only at the lives of the baboons but also at the changing life in Africa and the challenges and personalities he encountered away from camp.
"year": 2010,
"sha1": "004c64a2bdf1e87e0873bb039e95125eb73161e6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "004c64a2bdf1e87e0873bb039e95125eb73161e6",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"History",
"Medicine"
]
} |
Impact of microclimatic conditions and resource availability on spring and autumn phenology of temperate tree seedlings
Summary

Microclimatic effects (light, temperature) are often neglected in phenological studies, and little is known about the impact of resource availability (nutrients and water) on trees' phenological cycles. Here we experimentally studied spring and autumn phenology in four temperate trees in response to changes in bud albedo (white-painted vs black-painted buds), light conditions (non-shaded vs c. 70% shaded), water availability (irrigated, control and reduced precipitation) and nutrients (low vs high availability). We found that higher bud albedo or shade delayed budburst (up to +12 d), indicating that temperature is sensed locally within each bud. Leaf senescence was delayed by high nutrient availability (up to +7 d) and shade conditions (up to +39 d) in all species, except oak. Autumn phenological responses to summer droughts depended on the species, with a delay for cherry (+7 d) and an advance for beech (-7 d). The strong phenological effects of bud albedo and light exposure reveal an important role of microclimatic variation in phenology. In addition to the temperature and photoperiod effects, our results suggest a tight interplay between source and sink processes in regulating the end of the seasonal vegetation cycle, which can be largely influenced by resource availability (light, water and nutrients).
Introduction
The phenological responses of plants to environmental cues play a prominent role in shaping species' distribution ranges (Chuine & Beaubien, 2001; Körner et al., 2016) and Earth's climate (Richardson et al., 2013). Over recent years, a profusion of studies based on ground observations or remote sensing has documented phenological shifts in response to global warming, consistently showing the earlier occurrence of spring phenophases, such as leaf-out or flowering, and, in some cases, later autumn phenophases, such as fruit maturation or leaf senescence (Garonna et al., 2016; Piao et al., 2019). Changes in phenology have a major impact on the global carbon balance: earlier leaf-out timing has been shown to compensate for the increasing carbon loss in summer due to more severe and prolonged drought (Wolf et al., 2016), but earlier phenology may also accelerate and amplify drought in early summer as plants begin to take up water earlier (Ma et al., 2016; Xu et al., 2020; Meier et al., 2021). For these reasons, increasing efforts have been made to include phenological models in global models of species distribution and forest carbon balance (Delpierre et al., 2016; Zohner et al., 2020). Phenological models are, however, still unable to accurately predict the progression of winter dormancy, which is essential for predicting the beginning of bud development in spring (Basler, 2016; Chuine et al., 2016; Wang et al., 2020), as well as the time of leaf senescence in autumn. A striking example is that simplistic spring phenology models that ignore chilling and photoperiod often perform similarly to more complex phenological models that include these cues (Fu et al., 2012; Basler, 2016). This contrasts with numerous experimental observations of temperate and boreal perennial plants, which have long shown that chilling and photoperiod play a significant role in dormancy release and bud development (e.g. Coville, 1920; Wareing, 1953; Murray et al., 1989; Heide, 1993; Rousi & Pusenius, 2005; Viherä-Aarnio et al., 2006).
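To make this contrast concrete, the toy sketch below implements the two model families discussed here: a simple thermal-time (growing-degree-day) model that ignores chilling and photoperiod, and a sequential model in which forcing only accumulates after a chilling requirement has been met. Parameter values are arbitrary illustrations, not fitted estimates.

```python
import numpy as np

def thermal_time_budburst(tmean, t_base=5.0, f_crit=150.0, start_doy=1):
    """Budburst day-of-year: first day on which accumulated degree-days
    above t_base (from start_doy onwards) reach the forcing requirement."""
    t = np.asarray(tmean, dtype=float)          # daily mean temperature, day 1 onwards
    forcing = np.cumsum(np.maximum(t[start_doy - 1:] - t_base, 0.0))
    if forcing.size == 0 or forcing[-1] < f_crit:
        return None                             # requirement never met this year
    return start_doy + int(np.argmax(forcing >= f_crit))

def sequential_budburst(tmean, t_chill=5.0, c_crit=60.0, t_base=5.0, f_crit=150.0):
    """As above, but forcing only accumulates once the chilling requirement
    (number of days below t_chill) has been satisfied."""
    t = np.asarray(tmean, dtype=float)
    chill = np.cumsum(t < t_chill)              # chilling-day count
    if chill[-1] < c_crit:
        return None                             # dormancy never released
    release_doy = int(np.argmax(chill >= c_crit)) + 1
    return thermal_time_budburst(t, t_base, f_crit, start_doy=release_doy + 1)
```

In model-comparison exercises, both functions would be calibrated on observed budburst dates; the structural difference between the two families is small in code but large in its implications under continued warming, because warmer winters reduce chilling while warmer springs increase forcing.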
A major limitation of plant physiological and phenological studies carried out under natural conditions is that they usually use the temperature recorded at standard weather stations as an approximation of the temperature perceived by the plant. The microclimate in forests or near buds can largely deviate from 2-m height air temperature (for example, buds of seedlings and saplings are close to the ground whereas buds of adult trees can be at a height of 30 m with more exposure to wind and solar radiation) and even more from standard air temperature measured outside of the forest (De Frenne et al., 2019). Yet, microclimatic conditions have a huge effect on plant performance, and changes in microclimate have even been shown to outweigh macroclimate effects on plant community composition (Zellweger et al., 2020). In fact, there is evidence that the temperature triggering cell growth in the apical meristems of the buds is directly sensed within each individual bud, likely by the meristems themselves, as experimentally shown for species used in horticulture such as Cucumis sativus L. (Savvides et al., 2016). Similarly, daylength is perceived at the individual bud level by phytochromes within the leaf primordia (Zohner & Renner, 2015). When plants are growing in an open area, meristem temperature is generally higher compared with standard air temperature during the day and lower during the night, especially during bright days and clear nights due to shortwave and longwave radiative forcing and cooling, respectively (Savvides et al., 2013). More accurate microclimatic records reflecting the actual meristem temperature are therefore necessary to improve models of plant phenology and the associated physiological processes.
The timing of autumn leaf senescence is strongly regulated by autumn temperature and photoperiod in many temperate tree species (Keskitalo et al., 2005;Vitasse et al., 2009;Fu et al., 2018). However, spring and summer photosynthesis (Zani et al., 2020), CO2 concentration (Sigurdsson, 2001), soil nutrient status (Weih & Karlsson, 1999;Sigurdsson, 2001;Estiarte & Peñuelas, 2015;Fu et al., 2019a,b) and water availability (Xie et al., 2015;Arend et al., 2016a,b) can have a large influence as well. These factors have often been studied in situ, not accounting for micro-environmental heterogeneities, but they have rarely been studied under experimental conditions (but see e.g. Arend et al., 2016b;Fu et al., 2019a,b;Zani et al., 2020), making it difficult to reach firm conclusions about their respective effects and interactions. In addition, inconsistent results have been found for the progress of leaf senescence in response to moderate drought, hot spells and nutritional status under natural conditions in different species (Estiarte & Peñuelas, 2015;Xie et al., 2018;Chen et al., 2020;Mariën et al., 2021), underscoring the importance of controlled experiments (but see Fu et al., 2019a,b;Zani et al., 2020).
Recently, leaf senescence timing of temperate trees has been proposed to be regulated by sink limitation of photosynthesis, which has been experimentally demonstrated under contrasting light conditions, temperature and CO2 levels (Zani et al., 2020). The hypothesis that photosynthesis is regulated by the strength of the carbon sink (i.e. the use of photoassimilates for growth) was first formulated by Boussingault (1868). Accordingly, at the end of the season when tree primary and secondary growth ceases, there is an increasing imbalance between the production of carbohydrates (source) and their use for growth (sink). During this period, carbohydrates generally accumulate faster in leaves and other organs, even though they can be, to some extent, actively regulated by the plant (Dietze et al., 2014;Gilson et al., 2014). This excess of carbohydrates at a time when growth demand is limited could lead to a downregulation of the photosynthetic genes and accelerate the induction of leaf senescence (Paul & Foyer, 2001). In addition, environmental stress, such as limited water, high solar radiation or extreme temperature, has been shown to accelerate leaf senescence of temperate trees (Gallé et al., 2007). By interacting with endogenous factors (e.g. hormones), these environmental stressors can induce degradation of chlorophyll and photosystems, leading to a decline in the capacity to dissipate excess excitation energy in chloroplasts and, in turn, the accumulation of reactive oxygen species (ROS) and the acceleration of leaf senescence (Juvany et al., 2013). ROS concentration increases during drought-induced leaf senescence (Munné-Bosch & Alegre, 2004), but the ability to recover after such stress depends on the species and can be high, as for example in pubescent oak (Gallé et al., 2007). As such, the sensitivity of leaf senescence to environmental stress appears to depend on species' resistance strategies and the severity of the stressor, which can lead to contrasting results among co-existing trees (e.g. delay rather than an advance of leaf senescence under moderate stress, see Xie et al., 2018). The regulation of leaf senescence therefore appears to result from a complex balance between sink and source strength and stress responses, which needs to be explored under controlled conditions to understand and forecast phenological changes under continued global warming.
Here we experimentally assessed the effects of light ('sun', 100% of photosynthetically active radiation (PAR) vs 'shade', c. 30% PAR) and bud albedo (white- vs black-painted buds) on budburst timing and the effect of light, soil water availability (irrigated, control and reduced precipitation) and soil nutrients (low vs high) on leaf senescence timing of 2-4-yr-old temperate trees (Fagus sylvatica L., Fraxinus excelsior L., Prunus avium L. and Quercus robur L.). We aimed to address the following questions: (1) To what extent is leaf-out regulated by microclimatic conditions, that is do high bud albedo and shade delay leaf-out at the individual level?
(2) Do source-sink feedbacks and/or stress responses explain the effects of nutrient availability, solar radiation and soil moisture on autumn leaf senescence? Specifically, does elevated sink strength (high nutrients) lead to delayed senescence, does increased light availability (elevated photosynthesis) advance senescence, and how does water availability interact with these patterns? (3) Are the different responses among species related to their tolerance to drought or shade?
Assuming that bud meristems are the temperature-sensitive part ('thermometer') of the plant, we expected that white-painted buds and shade would delay budburst as a result of the lower temperature experienced by the buds. We expected earlier senescence under full sun conditions because carbohydrate reserves would accumulate faster according to the sink-limitation hypothesis, and/or due to higher oxidative stress damaging the photosystems (photooxidative stress hypothesis), especially for shade-intolerant species. Furthermore, we expected delayed leaf senescence under elevated nutrient availability as a result of increased sink strength, which may compensate for the cost of maintaining leaves alive (Paul & Foyer, 2001). Finally, we expected a mixed response of the timing of leaf senescence to drought depending on species-specific sensitivity to drought.
Study species
We investigated microclimate and nutrient effects on leaf phenology of four species: Prunus avium L., Fraxinus excelsior L., Fagus sylvatica L. and Quercus robur L. For clarity and brevity, we refer from this point forwards to each species by its common name, that is cherry, ash, beech and oak, respectively. These species were selected due to their large variation in spring and autumn phenology and their differences in shade and drought tolerance. In the study area at the juvenile life-stage, cherry and ash are amongst the first tree species to flush in spring and senesce in autumn, whereas beech and oak are rather late-flushing and late-senescing species (Vitasse et al., 2013 and see Supporting Information Fig. S1). Beech is the most shade-tolerant species followed by cherry, ash and oak, whereas oak and cherry are more drought tolerant than ash and beech (see shade and drought tolerance indexes extracted from Niinemets & Valladares, 2006 in Table S1). Seedlings of each species except ash were purchased at a local nursery (Wiler, 455 m asl, 47°09′N, 7°33′E) and came from local forests (see details in Table S1). The ash seedlings were taken from a forest near the experimental site (Lenzburg, 400 m asl, 47°24′N, 8°09′E) and were directly transplanted into the experimental boxes on 15 November 2018. Seedlings were 2- to 4-yr-old and c. 47 cm tall (see Table S1 for more details).
Experimental design and treatments
The experiment took place in a common garden at the WSL Research Institute in north-eastern Switzerland (47°21′38″N, 8°27′16″E; 550 m asl; mean annual temperature 9.3°C, mean annual precipitation 1134 mm; MeteoSwiss station Fluntern, 1981-2010). The design consisted of 54 wooden containers (1 m × 1 m and 0.5 m deep) arranged in groups of three, which was the unit for climate manipulation (called from this point forwards 'plot'; see Fig. S1). The 18 plots, each containing three containers, were then arranged in three rows (six plots per row), considered as blocks in the experimental design to account for possible microclimatic heterogeneity, that is each treatment was replicated three times. Only the two outer containers were used per plot, which are from this point forwards referred to as mesocosms (n = 36). The central container was filled with soil but left without any plants (see Fig. S1). Each mesocosm was filled with a mixture of quartz sand, fibric peat, expanded schist and pumice, and the bottom of each mesocosm was covered by a permeable plastic foil to avoid water retention and ensure good drainage after rainfall. This mixture was designed to be nutrient poor and sandy to facilitate soil nutrient and moisture manipulation by adding fertiliser and water, respectively. On 15 November 2018, 20 seedlings were planted in each study mesocosm (four rows of five individuals), mixing and alternating two species per mesocosm. To ensure homogenous plant height and minimise competition for light, ash and cherry, and oak and beech, respectively, were planted together (cherry and ash were slightly taller than oak and beech, see Table S1). In total, 720 seedlings (4 species × 10 replicates × 6 treatments × 3 blocks) were planted and monitored for phenology and growth. Six treatments were used to analyse spring and autumn phenology, of which four treatments were used to test their effect on both spring and autumn phenology (Table 1). In the 'sun' treatment, trees were exposed to full sun (100% PAR). In the 'shade' treatment, trees were exposed to shade conditions, using a shading net that intercepted c. 70.3 ± 2.1% PAR (mean ± SE, PARmesocosm/PARambient × 100; measured on four different days in February and September 2019 between 13:30 and 15:30 under either sunny or cloudy conditions in all three blocks using a Li-Cor Li189 quantum PAR light sensor). In the 'drought' treatment, natural rainfall was intercepted using a roof with plastic channels that removed c. 50% of the ambient precipitation (V-shaped plastic channels mounted upwards at c. 2.5 m above the plants and covering c. 50% of the mesocosm surface; see picture in Fig. S1). The 'control-drought' treatment served as control for the drought treatment, using the same roof infrastructure as the drought treatment but allowing almost 100% precipitation throughfall (V-shaped plastic channels mounted downwards). Because soil moisture differed significantly between the drought and control-drought treatments during the summer but not before budburst, when it remained relatively high (80-100% of the field capacity; Fig. S2a), these two treatments were only used to study the effects on autumn senescence.
As additional budburst treatments, we modified the albedo of the buds by painting half of the buds of the plants either black (low albedo, called from this point forwards the 'black' treatment) or white (high albedo, called from this point forwards the 'white' treatment), using tinting dispersion paints (Schöner Wohnen Vollton- & Abtönfarbe) applied on 23 January 2019 (see photographs in Fig. S1). No deleterious impact of the paint was detected, as the leaves emerged normally and grew as much as those originating from nonpainted buds. According to the manufacturer, the paint does not contain relevant persistent, bioaccumulative or toxic substances. Plants with buds painted white or black were kept in the same mesocosm, which reduced the replicates to 5 instead of 10 per block compared with the other treatments. After leaf-out, the shade, sun, drought and control-drought treatments were maintained through the growing season. To test the effect of nutrient and water availability on leaf senescence, we added two additional treatments after leaf-out. In the 'water' treatment, the mesocosms were watered regularly, at least every week from 6 June to 24 October 2019 (25 times). Each mesocosm of this high-moisture 'water' treatment was watered manually for 2 min (two times for 1 min with a 5 min break in-between) using a spray lance, which emitted 30 l water min⁻¹. During a heatwave in June 2019, all mesocosms were watered manually for 5 s (6, 24 and 28 June 2019) to prevent mortality. In the 'nutrient' treatment, we added a substantial amount of slow-release fertiliser (30 g of Gesal Floranid slow-release lawn fertiliser in granule form; composition 20% N, 5% P2O5, 8% K2O) on 24 May and 29 July, by spreading the granules evenly on the surface of the mesocosms. Because the soil was extremely poor in nutrients, we added 5 g of this fertiliser to all the other mesocosms on 24 May 2019. These two treatments were assigned randomly to mesocosms that previously contained the black and white treatment plants (same conditions as the full sun treatment). Table 1 summarises the different treatments used during the spring and autumn phenology monitoring in 2019.
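As a quick sanity check on the replication arithmetic, the factorial layout can be enumerated in R (the language used for the study's analyses); this is our own sketch, the treatment labels follow Table 1, and the albedo sub-treatments (five white- and five black-painted plants per mesocosm) are ignored for simplicity:

# Enumerate the design to verify the reported number of seedlings
design <- expand.grid(
  species   = c("cherry", "ash", "beech", "oak"),
  treatment = c("sun", "shade", "drought", "control-drought", "water", "nutrient"),
  block     = paste0("B", 1:3),
  replicate = 1:10
)
nrow(design)  # 4 x 6 x 3 x 10 = 720, matching the text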
Microclimatic measurements
Soil moisture was recorded in every mesocosm at 30 min intervals using EC-5 soil moisture sensors (Decagon, Pullman, WA, USA) measuring volumetric soil water content. Because these sensors are rather sensitive to differences in soil compaction, we standardised the records of each sensor by the value obtained after irrigating the mesocosm at saturation on 21 November 2019 (using the mean value between 06:00 h and 10:00 h on the following day, that is c. 14 h after the irrigation). Therefore, soil moisture is given as % of full saturation (field capacity), which accounts for absolute deviation among the sensors and provides a standardised comparison among the treatments.
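The sensor standardisation described above amounts to expressing each record as a percentage of that sensor's own saturation reading. A minimal R sketch follows; the column names and toy values are ours, not from the authors' pipeline:

library(dplyr)

# Toy 30-min volumetric water content (vwc) records for two mesocosms
soil_raw <- data.frame(
  mesocosm = rep(c("M1", "M2"), each = 3),
  vwc      = c(0.21, 0.19, 0.18, 0.25, 0.22, 0.20)
)
# Sensor-specific readings at saturation (mean 06:00-10:00 h, c. 14 h after
# irrigating each mesocosm to saturation)
saturation <- data.frame(mesocosm = c("M1", "M2"), vwc_sat = c(0.30, 0.35))

soil <- soil_raw %>%
  left_join(saturation, by = "mesocosm") %>%
  mutate(pct_field_capacity = 100 * vwc / vwc_sat)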
Air temperature was recorded in each plot every hour using EL-USB-2+ sensors (Lascar Electronics, Salisbury, UK) covered by a radiation shield (TFA Dostmann GmbH, Wertheim, Germany) from 25 January 2019 until December 2019. Additionally, air temperature at a height of 2 m was also recorded outside of the plots under an aluminium radiation shield every 30 min. The second half of February 2019 was particularly warm, with daily maximum temperatures consistently above the long-term average (Fig. S3). The last frost days, with temperatures down to −1.5°C, occurred on 5-7 May (day of the year (DOY) 125-127), when all species had already leafed out (Fig. S3), but only slight frost damage was observed on beech seedlings. Two marked warm spells occurred at the end of June to the beginning of July, with daily maximum temperatures higher than 35°C for 7 consecutive days (DOY 180-186; Fig. S3), and at the end of July (DOY 209-211; Fig. S3). No frost occurred in autumn before leaf senescence reached 50% for any of the species (first autumnal frost on 13 November, DOY 317; Fig. S3).
Bud temperature was recorded in the following year (2020) from 1 January until species-specific budburst by inserting the needle (0.3 mm diameter and 13 mm long) of a thermocouple probe inside buds (Thermocouple Probe Model HYP1, OMEGA Engineering Inc., Norwalk, CT, USA; Fig. S1). Bud temperature was recorded for two individuals from different blocks for each of the four species and for each of the following treatments: black-painted buds, white-painted buds, shade, full sun. All 32 thermocouples (4 species × 2 replicates × 4 treatments) were connected to a datalogger that recorded bud temperature every 10 min. Some thermocouple probes were disconnected from the buds by a storm on 6 March 2020 and were inserted again into the respective buds 4 d later, on 10 March. We discarded all records between these two dates. We averaged the temperature of the two replicates for each species and treatment and computed the daily minimum, mean and maximum values. Additionally, air temperature at plant canopy height was recorded every 10 min with the same thermocouple probes, protected from direct solar radiation by a custom-fabricated radiation shield made of several layers of cardboard covered with aluminium foil (see details in Frei et al., 2020). This latter measurement was used as a reference to compare bud and air temperature in Fig. 1(b).
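Computing the daily statistics from the 10-min logger records is a simple aggregation; the following R sketch uses simulated data and our own column names:

library(dplyr)

# Toy 10-min bud-temperature records for one probe over two days
bud <- data.frame(
  datetime = seq(as.POSIXct("2020-02-01 00:00", tz = "UTC"),
                 by = "10 min", length.out = 2 * 144),
  temp_c   = rnorm(2 * 144, mean = 4, sd = 3)
)

# Daily minimum, mean and maximum bud temperature
daily <- bud %>%
  mutate(date = as.Date(datetime)) %>%
  group_by(date) %>%
  summarise(tmin = min(temp_c), tmean = mean(temp_c), tmax = max(temp_c))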
Phenology monitoring, growth measurements and soil inorganic nitrogen
Bud development and leaf senescence were monitored for all 720 individual seedlings in spring and autumn 2019. Bud development in spring was monitored by the same observer weekly or twice a week during warmer periods, from 15 February until 24 May, that is when the last individual unfolded its leaves. Bud development was monitored using a four-stage categorical scale (Vitasse, 2013): stage 0 (dormant bud), no bud development visible; stage 1 (bud swelling), buds swollen and/or elongating; stage 2 (budburst), bud scales open and leaves partially visible; stage 3 (leaf-out), leaves fully emerged from the buds but still folded, crinkled or pendant, depending on species; stage 4 (leaf unfolded), at least one leaf fully unfolded. For each tree, the day of year when the first bud reached the respective stage was recorded. The stages were estimated by linear interpolation when necessary (i.e. when a given stage occurred in between two monitoring dates).
For leaf senescence in autumn, we evaluated the percentage of coloured or fallen leaves for every seedling according to the method developed in Vitasse et al. (2009) on a weekly basis from 23 August to 29 November. As a proxy for the beginning, middle and end of the leaf senescence process, for each individual tree, we computed the date when 25%, 50% and 75% of leaves were either coloured or had fallen using linear interpolation between two monitoring dates.
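The linear interpolation used for both the spring stages and the autumn 25%, 50% and 75% thresholds amounts to the following calculation; this R sketch is ours and only illustrates the method described above:

# DOY at which a monotonically increasing phenological variable (bud stage,
# or % coloured/fallen leaves) crosses a threshold, interpolating linearly
# between the two bracketing monitoring dates
doy_at_threshold <- function(doy, value, threshold) {
  i <- which(value >= threshold)[1]
  if (is.na(i) || i == 1) return(doy[i])
  doy[i - 1] + (threshold - value[i - 1]) / (value[i] - value[i - 1]) *
    (doy[i] - doy[i - 1])
}

# Example: 50% senescence reached between DOY 280 (30%) and DOY 287 (60%)
doy_at_threshold(doy = c(273, 280, 287), value = c(10, 30, 60), threshold = 50)
# -> 284.67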
Measurements of seedling height and diameter were conducted at 2 cm above the plant collar on all individuals before budburst and after leaf fall in spring and autumn 2019, respectively, using a graduated pole and an electronic caliper. We estimated the above-ground biomass using the allometric equation provided by Annighöfer et al. (2016), AGB = β1 × (RCD² × H)^β2, with AGB = above-ground biomass (g); RCD = root collar diameter (cm); H = height (cm); and β1 and β2 = species-specific coefficients as provided in Table 4 of Annighöfer et al. (2016). We computed the biomass increment during the 2019 growing season by subtracting the AGB estimated in spring 2019 from the AGB estimated in autumn 2019. These biomass increment measurements were used to characterise plant responses to the different treatments and to interpret the leaf senescence observations.
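Assuming the power-law form reconstructed above, the biomass computation reads as follows in R; the coefficient values in this sketch are placeholders, not the species-specific values published by Annighöfer et al. (2016):

# AGB = b1 * (RCD^2 * H)^b2 (our reading of the garbled equation; see text)
agb <- function(rcd_cm, h_cm, b1, b2) b1 * (rcd_cm^2 * h_cm)^b2

# Biomass increment of one seedling over the 2019 growing season
agb_spring <- agb(rcd_cm = 0.45, h_cm = 42, b1 = 0.1, b2 = 0.9)
agb_autumn <- agb(rcd_cm = 0.60, h_cm = 55, b1 = 0.1, b2 = 0.9)
increment  <- agb_autumn - agb_spring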
We measured extractable inorganic N by sampling soil in each mesocosm using a soil corer at 10 cm depth (mixing three samples per mesocosm on 3 September 2019). We used ion exchange by adding a KCl solution to extract nitrate and ammonium from the soil (Table S2).
Data analysis and statistics
The progress of bud development in spring and leaf coloration in autumn was modelled using generalised additive mixed models (GAMMs) with a binomial distribution using the R package gamm4 v.0.2-5. Bud development stages (0-4) in spring were transformed to fit a 0-1 range by dividing each stage by 4 to apply the binomial distribution. The values of the stages were then back-transformed for the visualisation of the graph. For each species, the models included a smoothing spline with four degrees of freedom for the DOY, with the treatment as a factor modulating the spline and the block as a grouping variable for the random intercept, with individuals nested inside the block. The fitted GAMMs with the associated means and confidence intervals are shown. The GAMMs were used to obtain the overall time course of bud development in spring and leaf senescence in autumn depending on the treatments. We assessed the effect of the treatments on the time of budburst and leaf senescence across species and within species with linear mixed effect models using the lme function of the R package nlme v.3.1-149, focusing on the DOY corresponding to stage 2 in spring (budburst) and to 50% senescence in autumn. We used block as a random effect and treatments as fixed effects. Estimated marginal means were extracted from the model with the associated 95% confidence intervals. Post-hoc Tukey's honest significant difference (HSD) tests were performed to test for significant differences between the control plants and the corresponding treatment. Analyses of stage 3 and stage 4 of spring bud development and of 25% and 75% autumn senescence yielded similar results and are therefore not shown. All data analyses and statistics were performed using R v.4.0.2 (R Core Team, 2020).
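A compact R sketch of the models described above follows; the simulated data, variable names, and the use of the emmeans package for the marginal means and Tukey contrasts are our assumptions, not taken from the authors' code:

library(nlme); library(emmeans)

set.seed(1)
# Simulated stand-in for budburst dates (the real data are not in the paper)
d <- expand.grid(block = factor(1:3), treatment = c("control", "shade"),
                 individual = 1:10)
d$budburst_doy <- 100 + 5 * (d$treatment == "shade") +
  rep(rnorm(3, 0, 2), length.out = nrow(d)) +  # block effect
  rnorm(nrow(d), 0, 3)

# Treatment effect on budburst DOY with block as a random intercept
m <- lme(budburst_doy ~ treatment, random = ~ 1 | block, data = d)
emmeans(m, pairwise ~ treatment)  # marginal means + Tukey-adjusted contrasts

# The seasonal time course was fitted with a binomial GAMM, conceptually:
# gamm4::gamm4(stage01 ~ treatment + s(doy, k = 4, by = treatment),
#              random = ~ (1 | block / individual),
#              family = binomial, data = df)  # df: long-format monitoring data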
Treatment effects on annual growth increment
Additional irrigation significantly increased growth for oak (+43%) and only marginally for ash (+39%; Table 2). Biomass increment was lower under shade conditions compared with the control for all species, but this was significant for cherry only (−37%) and marginally significant for beech (−19%; Table 2). No significant effect was found for the drought treatment for any species (Table 2), suggesting that the reduced soil moisture did not impair growth beyond the level of the control-drought treatment. In none of the species did the nutrient-addition treatment lead to significantly increased growth compared with the control (Table 2).
Effect of bud albedo and light intensity on bud temperature
Bud temperature recorded in 2020 from January to budburst showed similar minimum temperatures among the shade, full sun and white- or black-painted buds, irrespective of species (the mean difference remained within 0.4°C for the different species from 1 January to DOY 75; Fig. S4). However, the daily maximum temperature was warmer in black-painted buds and in buds fully exposed to sun than in white-painted or shaded buds (Fig. 1a), especially during bright days. When selecting the days when solar radiation reached at least 400 W m⁻² from 1 January until the budburst of each species (from 26 to 58 d depending on the species), the daily maximum temperature recorded in black-painted buds was on average 3.1°C (ash), 3.2°C (beech), 3.3°C (cherry) and 4.6°C (oak) warmer than air temperature, whereas white-painted buds were only 0.2-1.3°C warmer than air temperature (Fig. 1b). Bud temperature measured in the shade was slightly warmer than the temperature of the white-painted buds and generally cooler than that of buds fully exposed to sun (Fig. 1).
Effect of bud albedo and light intensity on bud development
Spring phenology significantly varied among species (Table 3), with cherry generally being the first species to leaf-out (budburst DOY 75.7 ± 1.1; mean ± SE), followed by ash (DOY 102.8 ± 5.7), oak (DOY 111.6 ± 3.2) and beech (DOY 125.3 ± 7.4), irrespective of the treatment (Figs 2a, S3). Bud albedo (white-painted vs black-painted buds) consistently affected the time of budburst across all species (Table 3). Seedlings with black-painted buds started bud development significantly earlier than seedlings with white-painted buds (Fig. 2a), especially for early flushing species (budburst cherry: −10.6 d; ash: −7.5 d; oak: −4.1 d; and beech: −4.3 d; Fig. 3a). Seedlings with unpainted buds started bud development later than seedlings with black-painted buds and slightly earlier than seedlings with white-painted buds (Figs 2a, 3a).
Lower solar radiation significantly affected spring bud development, with later bud development under shaded conditions for all species (Table 3; Fig. 2b). Specifically, budburst was delayed by 4.5 d for cherry (not significant), 5.1 d for ash, 3.2 d for oak and 11.8 d for beech (Fig. 3b). These delays could be explained by lower temperatures in the shade treatments compared with the controls (Figs 3c, S5). Indeed, when accounting for this difference by accumulating growing degree hours (GDH), shaded and control plants required similar warming sums until budburst, so that the delay was largely explained for all species except beech (see the sketch below and the Discussion section).

(Table 2 note: the biomass increment was computed using an allometric equation based on height and diameter with species-specific parameters provided by Annighöfer et al. (2016) (see details in the Materials and Methods section). Asterisks indicate significant differences between a given treatment and its corresponding control (i.e. full sun for the treatments shade, water and nutrient, and control drought for the treatment drought), tested with a mixed effect ANOVA with block as random effect and treatment as a fixed effect: **, P < 0.01; *, P < 0.1.)
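Growing degree hours are simply the running sum of hourly temperatures above a base threshold. A minimal R sketch (the base temperature of 0°C and the toy values are our assumptions; the paper does not state the threshold used):

# Cumulative growing degree hours above a base temperature
gdh <- function(hourly_temp, t_base = 0) cumsum(pmax(hourly_temp - t_base, 0))

hourly <- c(2.1, 3.4, 5.0, 1.2, -0.5, 4.8)  # toy hourly temperatures (degC)
tail(gdh(hourly), 1)  # total GDH over these six hours: 16.5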
Effect of nutrients and irrigation on leaf senescence
Irrigation and nutrients had a significant effect on leaf senescence, but the effect differed among species as shown by the significant interaction between species and treatments (Table 3). Only in oak did irrigation have a significant effect on leaf senescence (Fig. 4a), with an advance of 19.8 d compared with the control based on the 50% leaf senescence stage (Fig. 5b). Except for oak, nutrients tended to delay the date of 50% leaf senescence compared with the control treatment (Fig. 4a), but this delay was significant for beech only (+6.5 d).
Effect of light intensity on leaf senescence
Leaf senescence was strongly delayed under shaded conditions for all species except oak, in which a slight, non-significant advance was found (Table 3; Figs 4b, 5c). Shade conditions delayed senescence by +39.6 d, +17.7 d and +44.8 d for cherry, ash and beech, respectively, whereas air temperature measured under a radiation shield was slightly cooler in the shade treatment (Fig. S6). For oak, 50% leaf senescence occurred 5.5 d earlier under shade conditions (not significant; Fig. 4c) and 75% leaf senescence occurred 7.0 d earlier (P = 0.011).
Effect of reduced precipitation on leaf senescence
Overall, the drought treatment had no significant effect on leaf senescence across species (Table 3). However, the species-specific analysis showed that lower soil moisture during summer significantly delayed leaf senescence of cherry by +7.0 d and significantly advanced senescence of beech by −7.1 d (Figs 4c, 5d). No effect was found for the two other species (Figs 4c, 5d).
Discussion
Our experimental study shows that the microclimate in which seedlings are growing significantly affects leaf phenology both in spring and in autumn. The albedo treatment, in which we painted buds to modify heat reflectance, demonstrates that temperature is sensed at the bud level: black-painted buds with lower albedo and higher maximum bud temperature during bright days showed earlier bud development relative to white-painted and unpainted buds. Moreover, a cooler microclimate induced by the shading nets significantly delayed budburst timing of all species. This delay was mainly explained by growing-degree-day accumulation, whereby the shaded and control plants required similar warming sums until budburst.
Regarding autumn phenology, reduced light intensity led to reduced biomass increment in all species (significant for cherry and beech only) and strongly delayed leaf senescence in all species but oak. The magnitude of this delay exceeds the interannual variability observed over several decades in Switzerland for common tree species or for beech in France (Delpierre et al., 2009;Meier et al., 2021). This suggests that the phenological cycles of understory trees are strongly affected by the shade imposed by overstory trees and that trees growing under low light compensate for the reduced photosynthetic assimilation by extending their growing season. In addition, our results showed that higher water availability can strongly advance leaf senescence when it significantly increases growth rate (as found for oak). Overall, nutrient and water availability had little effect on growth, which might also explain their limited impact on leaf senescence timing.
Bud temperature as the main driver of budburst timing
Our study showed that black-painted buds started their development earlier than nonpainted or white-painted buds, suggesting that bud albedo affects internal physiological processes by influencing the temperature of bud tissues. Temperature records within buds show that black-painted buds are up to 3.6°C warmer than white-painted buds during bright days (mean difference for oak when solar radiation was more than 400 W m⁻²). The discrepancy in the time of budburst between black and white buds was more pronounced for early (cherry, ash) than for late-flushing species (oak, beech). Because of the nonlinear response of bud development to spring warming temperatures, lower bud albedo early in spring may lead to a strongly increased accumulation of temperature relevant for bud cell growth, whereas this increase in effective heat sums might be less pronounced later in spring when days are already warm. In addition, early flushing species might be more sensitive to temperature increases than late-flushing species, for which more pronounced photoperiod and/or chilling requirements may limit responsiveness to spring warming (Fu et al., 2019a,b;Montgomery et al., 2020). Our results further demonstrate that lower radiation induced by the shading net substantially changed the microclimate of the buds, delaying bud development of beech by 12 d, which roughly corresponds to half of the interannual variability that can be observed over several decades (Meier et al., 2021). The slower accumulation of growing degree days under shaded conditions fully explained the discrepancy in budburst timing between shaded and control plots for all species except beech, which overall confirms that growing degree days/hours are a good method to predict budburst of temperate trees in regions where chilling is not limiting. However, the remaining discrepancy found for beech suggests that, in addition to temperature and photoperiod (Vitasse & Basler, 2013), light intensity may play a direct role in the regulation of bud development of European beech, as also suggested in a previous study analysing the partial correlation of leaf-out timing of European trees and insolation (Fu et al., 2015a,b). Alternatively, this species may have a different threshold above which temperature is accumulated or may respond nonlinearly to forcing temperature. Further experiments controlling both light intensity and bud temperature are needed to clarify the potential effect of solar radiation on European beech phenology. Bud temperature differs from air temperature measured by standard weather stations, that is under ventilated and shaded conditions. Under clear sky conditions, buds heat up during the day through shortwave radiation and cool down during the night through longwave radiative heat loss to the sky (radiative cooling). We therefore suggest that standard weather stations may substantially underestimate bud temperature during the day, when solar radiation is high, and overestimate minimum temperature during the night when the sky is clear. Our temperature data recorded inside buds clearly show this effect for daily maximum temperature. Because sky brightness has been shown to have substantially increased since the 1980s in Europe, especially in spring (Sanchez-Lorenzo et al., 2015;Pfeifroth et al., 2018), the discrepancy between standard air temperature and bud temperature may have substantially increased over recent decades.
This may introduce a significant bias in phenology modelling, for example in the estimation of spring phenological sensitivity to temperature, which has been suggested to have decreased since the 1980s (Fu et al., 2015a,b). Our study calls for more investigations on how bud temperature differs from standard air temperature, depending on other important climatic factors such as solar radiation or wind.
Factors affecting leaf senescence
Leaf senescence is assumed to occur when the cost of maintaining active leaves outweighs the benefits of photosynthesis and is seen as a strategy for reabsorbing nutrients from the leaves and reallocating them throughout the plant (Kikuzawa, 1991;Estiarte & Peñuelas, 2015). It has long been thought that the leaf senescence process of temperate trees is mainly triggered by the decrease in temperature and photoperiod during autumn (Delpierre et al., 2009;Liu et al., 2020). However, other factors have recently been suggested to influence leaf senescence, such as temperature during the growing season, water and nutrient availability (Weih, 2009;Fu et al., 2019a,b), summer drought stress (Schuldt et al., 2020) and light conditions (Wingler et al., 2006). These factors can have a direct effect on leaf senescence by inducing a stress response, or affect leaf senescence indirectly by modulating the plant source-sink relationship (Paul & Foyer, 2001), which plays a prominent role in the senescence process (Zani et al., 2020). The relative effects of these drivers often depend on the species. For example, severe drought has been shown to hasten leaf senescence of temperate trees at low elevations (Hwang et al., 2014;Xie et al., 2015), which might be a strategy to reduce transpiration (Munné-Bosch & Alegre, 2004) and avoid xylem embolism (Bréda et al., 2006), or the direct consequence of vessel cavitation under extremely severe drought, as observed for European beech in central Europe during the summer of 2018 (Schuldt et al., 2020;Wohlgemuth et al., 2020). Our results also showed earlier senescence under drought compared with the control-drought plots for beech and delayed senescence in the irrigated plots. By contrast, we found the opposite pattern for oak, with significantly earlier senescence and elevated growth in the irrigated plots and slightly delayed senescence in the drought treatments. We expected drought to affect carbon sink and source strength and therefore senescence timing. However, the drought treatment led to significantly lower soil moisture content during late summer only, whereas the difference with the control was negligible at the beginning of the growing season until mid-July, that is when most of the growth may have already occurred. Accordingly, no differences in the estimated biomass increment were found between the drought and control treatments after the growing season. It is widely recognised that severe droughts in spring strongly affect growth rate (Vitasse et al., 2019;Bose et al., 2021) and may therefore affect leaf senescence timing due to reduced carbon supply and growth. More investigations should be conducted to determine the seasonal effects of drought on leaf senescence. Opposite responses of leaf senescence to moderate drought stress were also found for temperate trees in the north-eastern United States, with a delay in ash, maples and birches and an advance in oak and beech (Xie et al., 2018). The heterogeneity among microenvironments could partly explain this pattern when leaf senescence is studied in situ. Here, we rule out the possibility of different microenvironments and attribute these opposite responses to species-specific physiological characteristics (e.g. tolerance to drought or shade).
For instance, beech is a shade-tolerant species with rather low tolerance to drought, whereas pedunculate, sessile and pubescent oaks are more tolerant to drought (Gallé et al., 2007;Rubio-Cuadrado et al., 2018;Vitasse et al., 2019) and capable of maintaining photosynthesis when leaf water potential is low (Raftoyannis & Radoglou, 2002). The species-level variation in the responses to drought might be the result of differences in the relative importance of stress responses vs sink limitation. Reduced water availability can decrease photosynthetic activity, at least in drought-intolerant species, which in turn delays the senescence process by delaying saturation of a tree's annual carbon sink (Zani et al., 2020). Conversely, drought might have a direct effect on leaf senescence, whereby intense droughts cause a stress reaction, increasing ROS concentrations in leaves or even causing hydraulic failure, leading to precocious leaf senescence. Increased nutrient availability has generally been shown to delay leaf senescence. For instance, high nutrient availability was found to delay leaf senescence in Populus trichocarpa in Iceland (Sigurdsson, 2001) and in seedlings of Aesculus hippocastanum and beech (Fu et al., 2019a,b). By contrast, under low nutrient availability, elevated CO2 concentration was found to accelerate growth cessation in Populus trichocarpa (Sigurdsson, 2001). Sigurdsson (2001) suggested that under elevated CO2 or under low nutrient availability, there is an imbalance between carbon and nitrogen sources which alters autumn phenology. Other studies have suggested that a potassium deficiency may lead to earlier leaf senescence (Wang et al., 2012;Pan et al., 2017), probably because, in addition to the negative impacts on photosynthesis, K deficiency hinders the export of sucrose from the leaves through the phloem (Cakmak, 2005). Senescence, therefore, appears to be largely driven by the interaction between nutrient, particularly nitrogen, and carbon supply (Paul & Foyer, 2001). High nutrient availability should therefore allow trees to maintain source activity (photosynthesis) for a longer time and shed their leaves later in the year. However, no significant increase in biomass was found in our experiment for the fertilised treatments, which may explain the non-significant delays in leaf senescence observed for beech, cherry and ash in this treatment. It is possible that the additional fertiliser was not yet absorbed by the trees, or only partially absorbed, as suggested by the high nitrate concentration remaining in the soil at the end of the season (25 times higher in the fertiliser treatment than in the control for oak/beech and c. nine times higher for cherry/ash, see Table S2).
We found that reduced light availability (reduction of PAR by c. 70%) strongly delayed leaf senescence of ash, cherry and beech by 18, 39 and 42 d, along with a reduction of biomass increment of 15%, 37% and 19%, respectively. This can also be explained by the sink-limitation hypothesis (Wingler et al., 2006;Wingler & Roitsch, 2008;Dox et al., 2020). Shaded conditions over summer led to reduced carbon uptake (i.e. lower biomass increment) due to lower photosynthetic activity (Sevillano et al., 2016), and subsequently delayed the senescence process. This result was also found for common sunflower and beans (Ono et al., 2001) and for European beech and the Japanese spiraea (Zani et al., 2020). It remains an open question to what degree sink limitation operates at the leaf, branch or whole-plant level. Given that the source/sink control of leaf senescence appears to be largely driven by leaf-level nitrogen-to-carbon ratios (Paul & Foyer, 2001), localised effects on leaf senescence can be expected. This agrees with observations that, under natural conditions, the upper part of the canopy, which is more exposed to full light, shows earlier senescence than more shaded parts of the tree (Gressler et al., 2015).
In addition, photo-oxidative stress might drive early senescence under high light. Photo-oxidative stress occurs when light-energy absorption exceeds the capacity for light utilisation: an excess of photons may lead to nonphotochemical quenching and oxidative stress by an accumulation of ROS (Müller et al., 2001). This may lead to photoinhibition (Long et al., 1994) and can accelerate the process of senescence (Munné-Bosch & Alegre, 2004;Juvany et al., 2013;Pintó-Marijuan & Munné-Bosch, 2014). Photo-oxidative stress might, therefore, play a role, especially in species adapted to grow under canopy shade at the juvenile age, such as beech. The absence of a response to light or drought in oak could be related to its tolerance to high solar radiation (Valladares et al., 2002) and its ability to efficiently dissipate an excess of energy and degrade ROS under photo-oxidative stress, as demonstrated for pubescent oak (Gallé et al., 2007).
Overall, in addition to the well-known effects of temperature and photoperiod on the regulation of leaf senescence timing, our results suggest a tight interplay between source and sink processes in regulating the end of the seasonal vegetation cycle, which can be largely influenced by light, water and nutrient availability. More experiments will be necessary to fully untangle the relative contribution of direct effects of solar radiation on leaf senescence in relation to the indirect effects mediated through sugar and nutrient availability.
Conclusion
This study demonstrates the importance of microclimatic conditions, especially solar radiation, in regulating the timing of budburst in spring and leaf senescence in autumn. While our experiment shows that light availability mainly affects spring budburst through modification of bud temperatures, in European beech, light intensity and/or quality may directly affect budburst as the delayed budburst under shaded conditions could not be fully explained by the local temperature recorded beneath the shading net. A potential avenue to improve phenological predictions will therefore be to quantify how bud and leaf temperatures differ from standard air temperature depending on other meteorological factors, such as solar radiation and wind. Light availability also had a large effect on autumn senescence, with a considerable delay of leaf senescence under shaded conditions during the growing season found for all species except oak, along with a reduction of growth. This delay under low light can be explained by the sink-limitation hypothesis, whereby leaf senescence is tightly linked to photosynthate and nutrient supply. Oxidative stress under high light conditions may further drive this trend in late successional and shade-tolerant species sensitive to heat and drought, such as European beech. The results provide important insights into the roles of sink limitation and drought stress in mediating autumn phenology and call for a more accurate representation of microclimate to improve phenological predictions.

(Fig. 5 caption: Date of 50% leaf senescence estimated from mixed effect models with block as random effect for the different treatments (a, nutrient; b, irrigation; c, shade; d, drought) in comparison with their corresponding control. Values correspond to the marginal mean estimates of the mixed effect ANOVA with blocks as random factor; error bars correspond to 95% confidence intervals. The models were performed separately for each species and pair of treatments shown in each panel. For each species, different letters among treatments indicate significant differences (post-hoc Tukey's tests at α = 0.05).)
Author contributions
YV, FB and BM planned and designed the experiment. YV conducted the experiment with the field assistance of RK and MGW. CMZ and YHF helped in the interpretation of the results. YV analysed the data and drafted the manuscript with substantial inputs from all co-authors.
Data availability
The data that support the findings of this study are available from the corresponding author upon request.
Supporting Information
Additional Supporting Information may be found online in the Supporting Information section at the end of the article. Fig. S4 Daily minimum temperatures recorded within the bud from January 2020 until species-specific budburst in white-painted, black-painted, shaded or fully sun-exposed buds. | 2021-07-09T06:16:59.819Z | 2021-07-08T00:00:00.000 | {
"year": 2021,
"sha1": "74491df462fc4e6653aa25da9963c5aafdb5f55e",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/nph.17606",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "0428026a7a56a53b0b906b56a168518d1e54daa6",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235681615 | pes2o/s2orc | v3-fos-license | H2B Type 1-K Accumulates in Senescent Fibroblasts with Persistent DNA Damage along with Methylated and Phosphorylated Forms of HMGA1
Cellular senescence is a state of terminal proliferative arrest that plays key roles in aging by preventing stem cell renewal and by inducing the expression of a series of inflammatory factors including many secreted proteins with paracrine effects. The in vivo identification of senescent cells is difficult due to the absence of universal biomarkers. Chromatin modifications are key aspects of the senescence transition and may provide novel biomarkers. We used a combined protein profiling and bottom-up mass spectrometry approach to characterize the isoforms and post-translational modifications of chromatin proteins over time in post-mitotic human fibroblasts in vitro. We show that the H2B type 1-K variant is specifically enriched in deep senescent cells with persistent DNA damage. This accumulation was not observed in quiescent cells or in cells induced into senescence without DNA damage by expression of the RAF kinase. Similarly, HMGA1a di-methylated and HMGA1b tri-phosphorylated forms accumulated exclusively in the chromatin of cells in deep senescent conditions with persistent DNA damage. H2B type 1-K and modified HMGA1 may thus represent novel biomarkers of senescent cells containing persistent DNA damage.
Introduction
Cellular senescence is a stress response of mammalian cells characterized by a stable cell cycle arrest despite remaining metabolically active. Senescence can be induced by diverse stimuli including telomere loss (which results from repeated cell divisions), oncogene activation, and genotoxic agents [1]. Regardless of the stress, senescent cell cycle arrest is mediated by p53 and/or Rb tumor suppressor pathways. Moreover, they often display an enlarged and flattened morphology with increased expression of SA-β-galactosidase, secretion of some cytokines and metalloproteases, and profound chromatin reorganization (i.e., heterochromatin assembly) that may include the formation of highly compacted DNA in the form of senescent-associated heterochromatic foci (SAHFs) [1,2]. Accumulating evidence shows that cellular senescence plays critical roles in tumor suppression, wound healing, and aging in vivo [2].
Chromatin is the heritable material in eukaryotes and is composed of DNA, histone, and non-histone proteins. The building block of chromatin is the nucleosome that is composed of pairs of core histones H2A, H2B, H3, and H4. As the primary component of chromatin, histone post-translational modifications have been implicated in virtually all cellular processes requiring access to the genome by modulating local chromatin organization (i.e., transcription, DNA replication, and repair) [3][4][5][6]. Histone functions are regulated by a myriad of chemical modifications including acetylation, methylation, and phosphorylation. For example, H3-K4Me3 has been linked to transcriptional activation by rendering chromatin permissive to the transcriptional machinery (i.e., euchromatin). On the contrary, H3-K9Me2/3 marks are associated with transcriptional silencing by heterochromatin assembly. Histones H2A and H2B are less extensively modified than H3, but instead, are present as multiple variants that slightly differ in amino acid sequence. In humans, there are about 12 H2A variants [7], 16 H2B variants [8], and 5 H3 variants, according to the HISTome2 database [9]. H4 was thought to be a unique species until the recent discovery of a second isoform [10]. Several of these variants have been extensively studied for their crucial roles in transcriptional silencing (macro-H2A), activation (H2A.Z), and DNA repair (H2A.X) [11]. However, nuclear functions of other H2A and H2B variants remain to be determined.
Histones are mainly encoded by a family of replication-dependent genes located at two genomic clusters (cluster 1 on chromosome 6p22 and cluster 2 on chromosome 1q21). Histone mRNAs are the only known cellular mRNAs that end in a stem-loop instead of a polyadenylated tail [12]. This uncommon structure is necessary to confer a short half-life to these mRNAs when compared to polyA mRNAs. In fact, during the relatively short replication phase, newly synthesized DNA needs to be rapidly packaged with new and old histones. Hence, the S-phase is accompanied by an approximately 30-fold induction of histone mRNAs. When replication is completed, histone mRNA levels need to quickly diminish in order to avoid a toxic accumulation of histone proteins. Several histone variants are polyadenylated and expressed throughout the cell cycle (replication-independent genes), such as H3.3, H2A.J, and H2A.Z [11].
The stem-loop consists of a 6-base stem and a 4-nucleotide loop recognized by the stem-loop binding protein (SLBP). This protein is highly expressed during the S-phase and is crucial for mRNA 3′ processing, nuclear export, and translation [12]. In contrast, SLBP is present at very low levels in non-proliferating cells (differentiated, quiescent, senescent cells). Canonical histone mRNAs are rapidly degraded in the absence of SLBP unless there is a polyadenylation site downstream of the stem-loop sequence to stabilize the transcript [13][14][15][16].
In proliferating cells, chromatin maintenance is intrinsically linked to DNA replication by activation of histone mRNA transcription and facilitated deposition of histones by histone chaperones such as CAF1 [4,17]. However, non-proliferating senescent cells do not undergo S-phase chromatin assembly but utilize replication-independent chromatin assembly pathways to maintain chromatin structure and dynamics (for example, histone chaperones HIRA [18], DAXX/ATRX [19], and the DEK complex [20]). The functional consequences of these pathways in chromatin composition and dynamics during long-term cell cycle arrest remain to be explored. Senescent cells can persist in the body for decades (i.e., benign human nevi), but the state of senescent chromatin over long periods is only partially understood.
In this study, we characterized chromatin composition in various senescent and quiescent states. We used a combined protein-profiling (top-down) and bottom-up approach using mass spectrometry [21] to analyze all main core histone and high mobility group A (HMGA) post-translational modifications (PTMs) and variants in these conditions. Using this methodology, we previously described the accumulation of the H2A.J variant in fibroblasts induced into senescence by DNA damage, and the accumulation of H2A type 1-C in both quiescent and senescent cells [22]. Here, we investigated the dynamics of other core histones and associated variants as well as some specific abundant PTMs in relation to the senescence phenotype. In particular, we examined H2B isoforms and showed that the H2B type 1-K histone variant markedly accumulated in fibroblasts induced into senescence by DNA damage, through post-transcriptional regulation following accumulation of the polyA mRNA forms.
The HMGA1/2 proteins accumulate in senescent fibroblasts and contribute to chromatin compaction [27,28]. We analyzed HMGA1 PTMs in long-term cell cycle arrest states including multiple early (5 days) and deep (20 days) senescent (oncogene activation, genotoxic stress, telomere attrition) and quiescent conditions. We found that HMGA1 proteins are overexpressed in senescent cells and undergo senescence-specific modifications: HMGA1a di-methylation and HMGA1b tri-phosphorylation increased in deep senescent conditions. Our results reveal characteristic chromatin modifications that are shared among several non-proliferative states or are specific to senescent cells.
Cell Lines and Retroviruses
WI-38hTERT human embryonic fibroblasts expressing a conditionally activated form of the RAF1 kinase (GFP-RAF-ER) were cultured as described [29]. WI-38 cells were passaged in ambient 20% oxygen and 5% CO2 to obtain an early replicatively senescent population at population doubling (PD) 65. These cells were further maintained in culture for an additional month to obtain a deep replicatively senescent population at PD 66. MRC-5 human lung fibroblasts were cultured in a similar fashion to obtain a deep replicatively senescent population. Retroviral preparations of pBabe and pBabe-TRF2∆ (a dominant-negative form lacking the basic and Myb domains) were prepared as described [27,30].
Preparation of Histones and HMGA Proteins, and Mass Spectrometry Analyses
Histones and HMGA proteins were acid-extracted and analyzed by MS and MS/MS both at the intact protein and tryptic peptide levels as previously described [21]. Profiling was performed by UHPLC-MS using an LTQ-Orbitrap mass spectrometer (ThermoFisher Scientific, Les Ulis, France) operating in the positive ion mode at a 30,000 resolution. Proteins were identified by their accurate mass measurement after deconvolution using the Xtract software (ThermoFisher Scientific). For tryptic peptide analyses, histones were first propionylated on lysine residues, then digested with trypsin, and finally subjected to a second round of propionylation to block the newly formed N-terminal residues [31]. Analyses were then performed on an LTQ-Orbitrap Discovery mass spectrometer that was operated in the data-dependent acquisition mode, allowing automatic switching between MS and MS/MS. The MS survey scan was performed from m/z 300-2000 in the Orbitrap, using a resolution set at 30,000 (at m/z 400). The five most abundant ions (threshold 500 counts, charge states higher than +1) were further selected for collision-induced dissociation (CID) experiments. The CID mass spectra were collected in the linear ion trap.
Relative quantification of deconvoluted intact protein modified forms/variants was performed by dividing the intensity of a given deconvoluted MS peak by the sum of the intensities of the different deconvoluted MS peaks composing the spectrum of a considered protein. Modified peptide sequences were first manually searched in the MS trace and then confirmed by visual inspection and interpretation of the corresponding MS/MS spectra. Relative quantification of PTMs was performed by measuring the area of the extracted ion chromatogram peak corresponding to a specific modified peptide normalized to the sum of the peak areas corresponding to all observed modified and non-modified forms of this peptide. In addition, relative quantification of histone variants was realized by measuring the area of the extracted ion chromatogram peak corresponding to a variant-specific peptide normalized to a peptide found in all corresponding variants.
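The relative quantification described above reduces to normalizing each peak by the summed signal of all observed forms of the same protein (or peptide). A toy R sketch of the calculation:

# Toy deconvoluted MS peak intensities for three forms of one protein
intensities <- c(unmod = 5.2e6, me2 = 1.1e6, ph3 = 0.7e6)
rel_abundance <- intensities / sum(intensities)
round(100 * rel_abundance, 1)
# unmod: 74.3, me2: 15.7, ph3: 10.0 (% of total)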
Flow Cytometry Analyses of DNA Content
DNA content analysis was performed with a FACS Calibur flow cytometer (BD Biosciences, le Pont de Claix, France) essentially as described [29].
BrdU Incorporation and Immunostaining
Cells were seeded in 24-well plates at a density of 50,000 cells/well on collagen-treated coverslips. BrdU was added to the media at a final concentration of 50 µM for 24 h. Immunofluorescence to visualize incorporated BrdU and γH2AX foci was performed as described [28]. DAPI CV (coefficient of variation) measurements were performed using an ImageJ plug-in for semi-automatic quantification of DNA compaction as previously described [28].
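The underlying CV calculation is straightforward; since the authors used an ImageJ plug-in, the following R sketch with toy pixel values only illustrates the formula (CV = standard deviation / mean of DAPI intensities within a nucleus):

# Coefficient of variation of per-pixel DAPI intensities for one nucleus,
# used as a proxy for DNA compaction
dapi_cv <- function(pixels) sd(pixels) / mean(pixels)

nucleus <- c(120, 180, 95, 210, 160, 140)  # toy per-pixel DAPI intensities
dapi_cv(nucleus)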
qRT-PCR
Histone variant mRNA quantities were assessed by qRT-PCR. For each condition, 500,000 cells were harvested prior to total RNA isolation using a NucleoSpin RNA XS kit (Macherey-Nagel). Reverse transcription was performed using random hexamer and oligo(dT)18 primers. Variant-specific histone primers were designed to quantify the total mRNA level and the polyA subpopulation (listed in Table S1). Quantitative real-time PCR was performed on a Bio-Rad iQ5 instrument. The reactions were prepared using Platinum SYBR Green qPCR SuperMix-UDG (Invitrogen 11733-046). GAPDH was used as a control gene for normalization.
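The paper states GAPDH normalization but not the quantification model; assuming the standard 2^(-ΔΔCt) method, a minimal R sketch would be:

# Relative expression normalized to GAPDH and to a reference condition
# (e.g., proliferating cells); all Ct values below are toy numbers
rel_expr <- function(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref) {
  dct     <- ct_target - ct_gapdh          # normalize to GAPDH
  dct_ref <- ct_target_ref - ct_gapdh_ref  # reference condition
  2^-(dct - dct_ref)
}

rel_expr(ct_target = 24.1, ct_gapdh = 18.0,
         ct_target_ref = 26.5, ct_gapdh_ref = 18.2)  # c. 4.6-fold up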
Data Analysis and Statistics
The percentage of BrdU-positive cells was determined by counting at least 200 cells. γH2AX foci were counted in at least 100 nuclei in each condition. The DAPI CV was calculated for the indicated number of nuclei. DAPI CV results are presented as boxplots. A box represents 50% of the data and the median. Whiskers correspond to the minimum and maximum values. Histone PTM and variant relative abundances were presented on stacked histograms as the average values and standard deviations for the indicated number of biological replicates.
Results
In this study, we were interested in analyzing major core histone and HMGA PTMs and variants in early (5 days after treatment) and deep (20 days after treatment) senescent states. Our reference population was WI-38hTERT human embryonic lung fibroblasts grown in 5% oxygen. These cells were immortalized by expression of the telomerase to prevent stress engendered by telomere attrition or growth under hyper-physiological 20% ambient oxygen. They also expressed a fusion protein composed of a constitutively active form of the RAF1 kinase fused to GFP and the estrogen receptor domain (GFP-RAF-ER). The estrogen receptor domain is sequestered in an inactive form that can be activated by the addition of the ER ligand 4-hydroxy-tamoxifen (4-HT). Activation of the RAF1 kinase leads to a rapid hyper-stimulation of the MAP kinase pathway that induces senescence within 3 days [29]. Senescence was induced by (i) oncogene RAF activation for 5 days (eSenRAF) and 20 days (dSenRAF), (ii) genotoxic stress by treating cells with etoposide (a topoisomerase 2 inhibitor) (5 days: eSenETO and 20 days: dSenETO), and (iii) telomere erosion (i.e., replicative senescence) (eSenRep, dSenRep). These conditions were compared to proliferating and quiescent serum-starved WI-38 fibroblasts (5 days: eQuiescent, 20 days: dQuiescent) (Figure 1).
Figure 1. Overview of the study. WI-38 human fibroblasts were induced into quiescence by serum starvation or into senescence following Raf activation (SenRaf), etoposide treatment (SenETO), and passaging (replicative senescence, SenRep). These various states of cell arrest were maintained for 5 days (early state) or 20 days (late state), and histones and HMGA were analyzed by mass spectrometry and RNA by RT-qPCR. Each condition was analyzed in triplicate except replicative senescence, in duplicate. PD: population doubling.

All senescent conditions presented some level of chromatin compaction, as observed by quantifying the coefficient of variance of the DAPI staining of DNA (Figure S1A), and cell cycle arrest was confirmed by BrdU incorporation and flow cytometry (Figure S1B). Histone and HMGA proteins were analyzed in each condition by mass spectrometry using a combined protein-profiling and bottom-up approach to characterize the main PTMs and variants at the protein and peptide levels, respectively [21]. The protein profiling performed by UHPLC-MS using a high-resolution, high-mass-accuracy Orbitrap instrument allowed the convenient detection and identification of the main histone post-translational modifications and variants thanks to a mass accuracy better than 10 ppm [21]. Distinction of post-translational modifications with the same nominal mass, such as acetylation and trimethylation (42.01 and 42.05 Da, respectively), cannot be accurately performed at the intact protein level. However, the bottom-up approach makes it possible, first thanks to high-mass-accuracy measurement and high resolution, and then by their respective MS/MS spectra. Moreover, acetylated and trimethylated peptides elute at different retention times, thus providing an additional identification criterion [32,33]. Hence, post-translationally modified residues and variant-specific peptides from core histones were identified by bottom-up proteomics with trypsin digestion and including 2 rounds of propionylation (pre- and post-digestion) [21]. RNAs were also analyzed to gain regulatory insights.
The early senescent samples (5 days) show very few differences in the relative abundance of histone PTMs and variants despite cell cycle arrest and heterochromatin assembly ( Figure S1). However, striking differences were observed in deep senescent conditions when cells were kept in culture for 20 days (see below).
H2B Type 1-K Is Specifically Enriched in Deep Senescent Conditions with Persistent DNA Damage by an Active Post-Transcriptional Regulation
Histone H2B is predominantly unmodified and is present in 6 main peaks corresponding to potentially 8 variants, based on accurate measurement of intact protein masses (Table S2). In early senescent conditions, no significant difference in the relative abundance of histone H2B variants was observed (Figure 2A,B). However, a reproducible relative increase of the "blue" peak, corresponding to H2B type 1-K alone or in combination with H2B type 1-H (since both isotope massifs might overlap to some extent, based on intact protein masses, Table S2), was observed in conditions of deep senescence with persistent DNA damage, i.e., dSenETO and dSenRep, as indicated by the large number of γH2AX foci (Figure 2 and Figure S1C). The relative abundance of the corresponding "blue" peak increased by 2-fold in these conditions to reach almost 40% when compared to other conditions. An increase of this peak was also visible in WI-38 fibroblasts expressing a dominant negative form of TRF2 that induces chromosome deprotection and senescence (dSenTRF2D), as well as in replicatively senescent human lung MRC-5 fibroblasts (MRC-5 dSenRep) (Figure S2). This observation seems to indicate that an active phenomenon is responsible for the relative enrichment of specific H2B variants in deep senescent populations induced by DNA damage. Despite the high sequence similarity of H2B variants (Table S3), H2B type 1-K differs from the others by a Ser124Ala substitution, which results in a 2 Da or >16 Da mass difference when compared to H2B type 1-H or other H2B variants, respectively (Table S2).
The presence of H2B type 1-K was further ascertained by monitoring the corresponding specific peptide (Figure S3). Relative quantification of the abundance of the tryptic C-terminal peptides Leu100-Lys125 unambiguously identified H2B type 1-K as the specific H2B variant that increased in deep senescence with DNA damage (Figure 2C). Altogether, these data confirmed attribution of the "blue" peak to H2B type 1-K, while also showing the specific accumulation of this variant in deep senescent conditions with persistent DNA damage. This increase can be due either to a transcriptional regulation (increased mRNA levels) or a post-transcriptional regulation (increased translation or deposition, and/or decreased degradation or eviction).
To investigate the mechanisms involved in this regulation, we performed qPCR experiments on randomly primed reverse-transcribed cDNA using 2 pairs of primers that hybridize to: (i) the mRNA 5′ region upstream of the stem-loop, which gives the total mRNA level, and (ii) the mRNA 3′ region downstream of the stem-loop and before the polyadenylation site, which gives the polyA mRNA level (Figure 2D). We used the HIST1H2BD gene as an internal control that codes for an H2B variant that remained globally constant at the protein level in all conditions. As anticipated, the total mRNA of all tested H2B variants was decreased by 3-10 fold in quiescent and senescent proliferative arrest conditions, reflecting the instability of the predominant stem-loop RNAs in these conditions [13][14][15][16] (Figure 2E). However, we found that HIST1H2BK (encoding H2B type 1-K) and HIST1H2BD (encoding H2B type 1-D) polyA mRNAs were both enriched 5-20 fold in all arrested conditions compared to the proliferating control (Figure 2F). Thus, a basal expression of H2B genes with polyadenylation sites explains the enrichment of polyA H2B mRNAs when SLBP is absent in conditions of long-term cell cycle arrest. Strikingly, H2B type 1-K protein is enriched to a greater extent than H2B type 1-D (Figure 2A,B), even though polyadenylated HIST1H2BD RNA is equal to or greater than HIST1H2BK polyadenylated RNA in senescence (Figure 2E). This observation suggests that the enrichment of H2B type 1-K protein must involve some post-transcriptional mechanism (i.e., increased export, translation, or deposition efficiencies, or decreased eviction or degradation).
H4 Mono-Acetylation Remained Low in Deep Senescent States and H4-K20Me3 Increased Progressively with Time in Conditions of Cell Cycle Arrests
In a previous study [28], we described a specific decrease of 25% of H4 monoacetylation localized at K16 by the NAD+-dependent deacetylase SIRT2 in early senescent conditions (eSenRAF, eSenETO, eSenRep) compared to proliferating (Prolif.) and quiescent (eQuiescent) cells. This decrease of H4-K16Ac contributed to heterochromatin assembly.
When cells were maintained senescent for 20 days (deep senescence), the levels of H4 mono- and di-acetylation remained low, as observed in early senescent samples (Figure 3A,B), and heterochromatin persisted (Figure S1A). Bottom-up proteomics was used to localize acetylated residues, and it demonstrated that K16 and then K12 were the most prominently acetylated residues (Figure S4). DNA compaction was even more marked when cells were treated with etoposide for a longer time (dSenETO) (Figure S1). However, the H4 mono-acetylation level remained higher when cells were quiescent for 20 days (dQuiescent) (Figure 3A,B).

Figure 2. Blue peak = H2B type 1-K and/or 1-H; Red peak = H2B type 1; Green peak = H2B type 2-E and/or 2-F; Purple peak = H2B type 1-D; Sky blue peak = H2B type 1-B; Orange peak = H2B type 1-M. Relative abundance of H2B isoforms quantified at the (B) protein and (C) peptide levels. Intact proteins were identified by accurate measurement of monoisotopic masses (Table S2). Peptide Leu100-Lys125 is specific to H2B type 1-K and corresponding MS/MS spectra are given in Figure S3. Experiments were performed on three independent biological replicates. (D) Processing of histone mRNA when SLBP is present (cell proliferation) or not (cell cycle arrest). To quantify total and polyA mRNA levels, primers were designed upstream of the stem-loop (1) and downstream of the stem-loop (2), respectively. qPCR quantification on randomly reverse-transcribed cDNA of (E) total and (F) polyA mRNA levels of H2B variants. HIST1H2BK, HIST1H2BH, and HIST1H2BD genes encode the H2B type 1-K, H2B type 1-H, and H2B type 1-D proteins, respectively.

Figure 3. (C) Relative abundance of H4-K20Me3 quantified on the Lys20-Arg23 peptide; the rest is di-methylated. Experiments were performed on three independent biological replicates. MS/MS spectra corresponding to those H4 peptides are given in Figure S4.
H4-K20Me3 is a repressive histone mark that was reported to increase during the RAS-induced senescence of IMR90 fibroblasts and to participate in transcriptional repression and chromatin compaction [23,24]. H4-K20Me3 was also reported to increase in quiescent fibroblasts [34].
Our analysis showed a progressive increase of H4-K20Me3 (at the peptide level) during both quiescent and senescent proliferative arrests for 5 and 20 days (Figure 3C). H4-K20Me3 increased about 2-fold in cells arrested for 5 days and about 4-6 fold in cells arrested for 20 days compared to proliferating cells. H4-K20Me3 was also increased in senescent cells induced by expression of a dominant negative TRF2 protein for 18 (eSenTRF2D) and 28 days (dSenTRF2D) (Figure S5). In this condition, cells are enlarged and flattened, SA-β-galactosidase activity is increased, and DNA is condensed, as for replicatively senescent WI-38 (Figure S1). In addition, replicatively senescent human lung MRC-5 fibroblasts presented the same features (Figure S5). Our results highlight a progressive increase in H4-K20Me3 levels during quiescence and during several types of induced senescence in fibroblasts.
H3.1/2-K27Me2/Me3 and K36Me2 Accumulate with Time in Conditions of Cell Cycle Arrest
H3 histones are highly modified at many distinct sites. The main covalent modifications are lysine methylation (mono-, di-, or tri-methylation) and acetylation [35].
At the protein level, we noticed that H3.1 (the most abundant H3 variant in WI-38 fibroblasts) was more highly modified by methylation and/or acetylation in non-proliferating cells (early senescence and quiescence) compared to cycling cells (Figure 4A). These modifications seem to accumulate with time, as this phenomenon was most pronounced in long-term cell cycle arrest conditions. Analysis at the peptide level revealed that highly methylated forms accumulated predominantly on K27 and K36. The level of H3-K9Me3 remained the same in all conditions, except that it was slightly but reproducibly decreased in deep senescent conditions induced by DNA damage to the benefit of H3-K9Me2 (Figure 5B and Figure S7). We observed a strong and reproducible accumulation of K27Me2/Me3 and K36Me2 with time of cell cycle arrest that could explain what we observed at the protein level (Figure 4C). H3-K27Me2 and H3-K27Me3 accumulated approximately 2-fold and 2-4 fold in long-term cell cycle arrest conditions, respectively. While H3-K36Me1 levels remained globally the same in all conditions, H3-K36Me2 increased in long-term cell cycle arrest by 1.5-2 fold (Figure 4D). Table S4 summarizes the relative abundances of the different modified forms of those 2 peptides, while the most representative MS/MS spectra are given in Figure S6. Of note, H3-K36Me3 was not detected under our conditions, probably due to insufficient analytical sensitivity.
HMGA1a Di-Methylation and HMGA1b Tri-Phosphorylation Accumulated in Deep Senescent Conditions
Although this study was focused on core histones, we also examined HMGA1 proteins that were extracted alongside the histone proteins. The HMGA1 gene encodes 2 isoforms, HMGA1a and HMGA1b, generated by alternative splicing [36]. HMGA1b (95 aa) contains an internal 11 aa deletion relative to HMGA1a (106 aa). Interestingly, we observed potential senescence-associated methylation and phosphorylation modifications of HMGA1. Observed mass differences between HMGA1 isoforms were consistent with methylation and phosphorylation modifications (14 and 80 Da, respectively). Similar HMGA1 modifications were described for B16F10 and H1299 cancer cells induced into senescence by DNA damage [37]. HMGA1a non- and mono-methylated forms are equally present in proliferating samples (Figure 5A,B). However, a massive enrichment of the di-methylated forms occurs in deep senescent conditions (i.e., dSenRAF, dSenETO, dSenRep). The same observations were made for dSenTRF2D and MRC-5 RepSen cells (Figure S8). In early senescent samples, the mono-methylated forms accumulated in a similar fashion as in eQuiescent cells (Figure 5A,B). The di-methylated form accumulated by 2-3 fold in deep senescent conditions compared to proliferating cells. In dQuiescent conditions, the di-methylated forms of HMGA1a also accumulated, but to a lesser extent. Previous studies indicated that HMGA1a can be mono- and di-methylated essentially at the Arg25 residue, while Ser98, Ser101, and Ser102 are preferentially phosphorylated [37][38][39][40]. The bottom-up strategy used in the present paper, involving propionylation and trypsin digestion, is not ideally suited to studying Arg25 methylation and potential phosphorylation sites [40]. Therefore, the in-depth study of HMGA1 PTMs would warrant further methodological development and investigation not in the scope of the present study. In contrast, HMGA1b does not seem methylated despite the presence of the Arg25 residue, but is present mainly as di- and tri-phosphorylated forms, presumably at Ser87, Ser90, and Ser91 [39]. We noticed a progressive increase of HMGA1b tri-phosphorylated forms in senescent conditions compared to cycling and quiescent samples. HMGA1b tri-phosphorylation relative abundance reached 70% to 90% depending on the deep senescent sample (Figure 5C). This was also the case for dSenTRF2D and MRC-5 RepSen conditions (Figure S8). In addition to the modification state, HMGA isoforms were present at higher levels in deep senescent conditions as compared to proliferating and quiescent cells. Altogether, we demonstrated that di-methylated HMGA1a strongly accumulates in deep senescent conditions and the HMGA1b tri-phosphorylated form specifically increased in deep senescent conditions. These modifications, along with increased HMGA1 protein levels in deep senescent conditions, could play an important role in the stable cell cycle arrest and heterochromatin assembly compared to proliferating and dQuiescent cells [27,37].
Discussion
H2B exists as multiple variants in mammals that differ by a small number of amino acids, but very little is known about their potential physiological and functional specificities. The small number of amino acid differences makes it difficult to distinguish them by chromatographic or electrophoretic methods and no antibodies distinguishing them have yet been described to our knowledge. However, they are distinguishable by their intact mass, so that mass spectrometry is a technique of choice to distinguish them [15,21].
Here, we identified an enrichment of H2B type 1-K at the protein and peptide levels in deep senescent conditions with persistent DNA damage (dSenETO, dSenRep, dSenTRF2D, MRC-5 dSenRep). RT-qPCR experiments showed that the total mRNA level of all tested canonical H2B variants decreased strongly in early (5 days) and deep (20 days) conditions of cell cycle arrest compared to cycling cells. This is due to the fact that, during DNA replication, canonical histone mRNAs are induced about 30 fold to package newly synthesized DNA [12]. However, we noticed an increase of the polyA mRNA level for all tested H2B variants in cell cycle arrested conditions. This can be explained by a basal expression of histone genes even when cells do not proliferate and by the absence of SLBP, leading to the accumulation of stable histone polyA mRNAs for those H2B genes that contain a polyadenylation site downstream of their stem-loop sequence [13][14][15][16]. We identified an increase of H2B type 1-K specifically in deep senescent conditions with persistent DNA damage despite an increase of all H2B polyA mRNA levels. Hence, under these conditions, H2B type 1-K must be regulated post-transcriptionally, such as by: (i) increased mRNA export to the cytoplasm, (ii) increased export of the gene product to the nucleus, (iii) increased deposition, or (iv) decreased eviction/degradation. This regulation might be linked to the ATM/ATR kinase pathways that are activated by DNA damage. H2B type 1-K enrichment in senescence parallels the increase in the H2A variant H2A.J that we previously described [22]. Since H2A forms heterodimers with H2B, it is likely that H2A.J-H2B type 1-K heterodimers increase specifically in senescent cells. Several histone chaperones for H2A/H2B have been described, including the NAP family chaperones and the FACT complex. Intriguingly, however, the HIST1H2BK gene encoding H2B type 1-K was first identified in a 2-hybrid screen with HIRA, a subunit of a histone chaperone complex for the H3.3-H4 histones. In this study, HIRA was shown to interact with both H2B type 1-K and H4 [41]. HIRA has important roles in the chromatin dynamics of senescent cells [14]. The functional importance of the HIRA-H2B type 1-K interaction merits further study, as it is not rare for histone chaperones to interact with multiple histone types whilst facilitating chromatin dynamics [42].
H2B type 1-K is discernible from most other H2B variants by containing an Ala instead of a Ser at position 124 near the C-terminus, in a region that should be accessible outside of the nucleosome. This substitution of a phosphorylatable residue by a non-phosphorylatable one could have important functional effects, although Ser-124 phosphorylation of H2B has not yet been reported. Further genetic experiments will be necessary to determine whether H2B type 1-K has specific functional roles in senescence.
In a previous paper, we identified a senescent-specific deacetylation of H4-K16Ac that occurs early in the establishment of senescence and that contributes to heterochromatin assembly. Here, we noticed that the decreased acetylation on H4 was maintained in deep senescent conditions concordant with a maintained heterochromatin assembly [28].
In this study, we also highlighted an increase with time of H4-K20Me3, H3.1/2-K27Me2/Me3, and H3.1/2-K36Me2 in all types of post-mitotic fibroblasts in vitro. Hence, these histone marks accumulated independently of the senescent state and DNA damage. Interestingly, H4-K20Me3 levels were found to increase progressively with age in the rat liver and kidney [43]. Likewise, increases of H3-K27Me3 were observed in quiescent muscle stem cells during aging [44]. Furthermore, H3.3-K27Me2 and H3.3-K36Me2 were found to accumulate during aging in mouse liver, kidney, brain, and heart [45]. The molecular basis for the aging-associated accumulation of these specific H3 and H4 methylation marks has not been determined, but they are likely to affect both gene expression and chromatin compaction during the aging of post-mitotic tissues. The recapitulation of their accumulation in post-mitotic fibroblasts in vitro should facilitate the study of their regulation.
In addition to histone variants and PTMs, we examined HMGA1 isoforms. Similarly to what has been observed by Tran et al. for cancer cells (B16F10 and H1299) induced into senescence by DNA damage [37], we noticed a strong increase of di-methylated HMGA1a, and total HMGA1a/b protein levels, with time in all deep senescent states compared to cycling cells. These modifications are thus characteristic of senescence for both normal and cancer cells. The di-methylated forms also accumulate to a lesser extent in quiescent cells, with no visible increase in HMGA1 protein levels in quiescence. In senescent WI-38 cells, HMGA1b is less overexpressed than HMGA1a. We also noticed a modest but significant increase of the HMGA1b tri-phosphorylated form in deep senescent states. Altogether, the quantity and modification state of HMGA1 proteins could be important for senescence maintenance. Antibodies to specific methylated and phosphorylated sites of histones have proven their effectiveness for detecting and characterizing these modifications. We propose that the development of antibodies to di-methylated HMGA1a (dimethylation presumably occurs at Arg25 [37,39]) and tri-phosphorylated HMGA1b will be useful in specifically identifying senescent cells and in studying the regulation and the function of these modifications.
"year": 2021,
"sha1": "017d25753fda4a8300985b9dc335d2c866280d43",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-7382/9/2/30/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "017d25753fda4a8300985b9dc335d2c866280d43",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244182666 | pes2o/s2orc | v3-fos-license | Algorithms for Optimal Power Flow Extended to Controllable Renewable Systems and Loads
In an effort to quantify and manage uncertainties inside power systems with penetration of renewable energy, uncertainty costs have been defined and different uncertainty cost functions have been calculated for different types of generators and electric vehicles. This article seeks to use the uncertainty cost formulation to propose algorithms for, and solve, the problem of optimal power flow extended to controllable renewable systems and controllable loads. In a previous study, the first and second derivatives of the uncertainty cost functions were calculated; here, analytical and heuristic optimal power flow algorithms are used. To corroborate the analytical solution, the optimal power flow was also solved by means of metaheuristic algorithms. Finally, it was found that analytical algorithms have a much higher performance than metaheuristic methods, especially as the number of decision variables in an optimization problem grows.
Introduction
In recent years, owing to advances in technology, reductions in manufacturing costs and growing concern for the environment, non-conventional renewable energy sources have achieved significant penetration in electrical power systems. For example, in Colombia, it is expected that 1050 MW of wind energy will come into operation by 2023, according to information published by the Mining Energy Planning Unit (UPME) [1]. The start-up of these new non-conventional renewable energy sources poses a challenge to power systems with regard to the dispatch of generation resources. One of the biggest drawbacks of energy sources such as solar or wind is their high variation over time and their stochastic nature. In addition, advances in technology such as large-scale energy storage, the massification of electric vehicles and the increase in distributed generation must be taken into account when making economic dispatches.
According to the review of the state of the art, it is possible to address the problem of uncertainties in power systems with different approaches, whether probabilistic, possibilistic or probabilistic-possibilistic hybrids [2]. Economic dispatch problems are approached mainly by probabilistic methods, while possibilistic and probabilistic-possibilistic methods are especially useful for generating generation forecasts [3]. In an effort to quantify the impact of uncertainties in power systems, from the calculation of expected values it has been possible to determine the cost of uncertainty in non-conventional renewable energy sources, starting from the probability distributions associated with the random variables that affect the behavior of the system. Arévalo et al. [4] present the calculation of the cost of uncertainty associated with solar and wind plants and the behavior of electric vehicles; for this case, it was assumed that the irradiance for solar plants was associated with a lognormal function, the wind speed with a Rayleigh distribution, and the power delivered by electric vehicles had a Gaussian behavior. Complementing the work on the cost of uncertainty, in the article written by Molina et al. [5], the cost of uncertainty was calculated for a run-of-river hydroelectric power station, whose generation depends on the flow of the river from which the station feeds; it is shown that the behavior of river flows is associated with a Gumbel distribution.
In many cases, historical measurements of the resource to be exploited may not be available (for example, in remote, non-interconnected areas). For these cases, a uniform distribution of the resource could be assumed. Bernal et al. [6] propose the calculation of the cost of uncertainty for uniform distributions. The uncertainty in the loads can be modeled by a normal distribution [2]. Vargas et al. [7] describe the calculation of the uncertainty cost for controllable loads. The uncertainty cost calculation for controllable loads is also valid for any other load, since controllable loads have normal behavior. It is necessary to clarify that the results of the uncertainty cost functions in the previously mentioned works [4][5][6][7] were validated using the Monte Carlo method. Now, to achieve reliable operation, and so that the entry into operation of non-conventional renewable energy sources can be reflected in an economic benefit, it is necessary to perform Optimal Power Flows (OPFs) which consider costs of uncertainty. In the literature, there are models of optimal power flow that consider different costs of uncertainty and are solved analytically [3,8] or through heuristic techniques [9][10][11][12][13][14].
The OPFs have different applications and are generally differentiated by objective functions, restrictions, the type of network to be optimized, and so forth. In relation to OPF in Hetzer et al. [3] an optimal power dispatch is presented considering two wind generators and their associated uncertainty cost in a system with two conventional generators. The work from Hetzer et al. was supplemented by Zhao et al. [11] when solving an OPF using particle swarm optimization (PSO), considering the uncertainty costs of electric vehicles and wind generators in a 118 node IEEE power system. From the previously mentioned works [3,4,11], Arévalo et al. [9] show the calculation of an OPF that takes into account the cost of uncertainty in wind and solar generators and that of the large-scale entry of electric vehicles. In this work, the 118-node IEEE system was used and the PSO algorithm was used to solve the optimization problem.
Torres and Rivera [15] propose the calculation of an OPF in various operating scenarios of the system. In their work, a simplified model of the Colombian network was considered, as well as the future agglomeration of solar and wind resources on the Caribbean coast [16]. These same authors, in a different work [12], showed the calculation of an OPF using the DEEPSO method (a combination of particle swarm optimization and differential evolution) for the IEEE system of 118 nodes with costs of uncertainty from wind generators, solar plants and electric vehicles. The optimal power flows mentioned so far have been formulated for large systems, and their sources of uncertainty have been solar radiation, the wind speed and/or the delivery of energy to the grid by electric vehicles. However, with the penetration of distributed generation, the possibility arises that some clients of the system satisfy part or all of their demand for a certain time, becoming a controllable load. Guzmán et al. [13] describe controllable loads and their types of contract with the energy marketer. In addition to this, they calculate an OPF solved by the DEEPSO algorithm where they take into account controllable loads, the position of the transformer taps and compensation in the IEEE system of 118 nodes.
The uncertainty cost functions, the basis for calculating the uncertainty costs, are usually complex, since they involve non-elementary integrals [4,5,7]. One possible way to deal with this difficulty is the one proposed by Martinez and Rivera [8], where a quadratic approximation of the uncertainty cost of electric vehicles, solar, wind and run-of-river plants is made to carry out an analytical calculation of the OPF using MATPOWER. Due to the reduction in the costs of equipment for the generation of energy from non-conventional renewable sources, it is now possible for small consumers of energy at medium and low voltages to also generate energy on a small scale, giving rise to the appearance of microgrids. Peña et al. [14] performed the calculation of an OPF for a microgrid with different sources of uncertainty where there was also energy storage. Using a genetic algorithm (NSGA-II), they managed to minimize the cost of power generation and maximize the useful life of the batteries. Similarly, Li et al. [10] propose an OPF where the operating cost of a microgrid with batteries is minimized. Unlike the other works mentioned so far, Li et al. did not use the Monte Carlo method for the validation of their results, but instead used the point estimate method. In this way, the novelty of this paper lies in including the marginal uncertainty cost functions (MUCFs) in the algorithms for optimal power flow extended to controllable renewable systems and controllable loads, since MUCFs have been tested with Monte Carlo simulations.
Optimization Problem Statement
Optimal power flow is, as its name implies, an optimization problem, defined as follows [17,18]:

$$\min_{u} \; F(x, u) \quad (1)$$

subject to:

$$h(x, u) = 0 \quad (2)$$
$$g(x, u) \le 0 \quad (3)$$

where u is a set of decision variables and x is a set of dependent variables. F is the scalar objective function, h represents a set of equality constraints and g a set of inequality constraints.
Target Function
The objective function evaluated in this proposal is shown below, in the expression (4):

$$F = \sum_{j=1}^{N_{gen,C}} f_j(P_j) + \sum_{k=1}^{N_{gen,B}} f_k(P_k) + \sum_{m=1}^{N_{gen,S}} f_m(P_m) + \sum_{n=1}^{N_{gen,E}} f_n(P_n) + \sum_{r=1}^{N_{gen,H}} f_r(P_r) + \sum_{t=1}^{N_{gen,CL}} f_t(P_t) \quad (4)$$

The variables presented in the objective function are described below:
• $N_{gen,C}$: number of conventional generators;
• $N_{gen,B}$: number of batteries;
• $N_{gen,S}$: number of solar generators;
• $N_{gen,E}$: number of wind generators;
• $N_{gen,H}$: number of run-of-river hydraulic generators;
• $N_{gen,CL}$: number of controllable loads or nodes with electric vehicle connection;
• $P_j$: active power delivered by the conventional generator j;
• $P_k$: active power delivered by battery k;
• $P_m$: active power delivered by the solar generator m;
• $P_n$: active power delivered by wind generator n;
• $P_r$: active power delivered by the hydraulic generator r;
• $P_t$: active power dispatched by controllable load or electric vehicle station t.
The functions f j and f k refer to the cost functions for conventional generators and batteries, respectively. On the other hand, the functions f m , f n , f r and f t refer to uncertainty cost functions for solar, wind, run-of-the-river hydro generators and electric vehicles/controllable loads, respectively.
The cost functions for conventional generators correspond to quadratic polynomial functions of the form [19]:

$$f_j(P_j) = a_j P_j^2 + b_j P_j + c_j \quad (5)$$

On the other hand, the cost functions for batteries correspond to linear functions. In this work, batteries are only taken into account in their discharge cycle, that is, working as generators [20,21]:

$$f_k(P_D) = cs_m \, P_D \quad (6)$$

In the expression (6), $cs_m$ is the battery operation and maintenance cost and $P_D$ is the power delivered by the battery.
The functions f m , f n , f r and f t refer to cost functions of uncertainty due to the variable behavior of the resources from which energy is extracted, such as solar radiation, wind speed, water flow or the number of connected electric vehicles.
The aforementioned functions are the sum of the costs of uncertainty due to underestimation and the costs of uncertainty due to overestimation, as shown below:

$$f_m(P_m) = C_{u,m}(P_m) + C_{o,m}(P_m) \quad (7)$$
$$f_n(P_n) = C_{u,n}(P_n) + C_{o,n}(P_n) \quad (8)$$
$$f_r(P_r) = C_{u,r}(P_r) + C_{o,r}(P_r) \quad (9)$$
$$f_t(P_t) = C_{u,t}(P_t) + C_{o,t}(P_t) \quad (10)$$

In the expressions from (7) to (10), the subscript u refers to the cost function of uncertainty due to underestimation, while the subscript o refers to the cost function of uncertainty due to overestimation.
Concept of Uncertainty Cost Functions from Previous Studies
In order to calculate Uncertainty Cost Functions (UCFs), it is necessary to define the underestimation and overestimation costs developed in previous studies [4,11]:

Uncertainty Cost Due to Underestimation

Costs due to underestimation refer to the power that a renewable generation unit cannot deliver to the grid when the scheduled power value of the plant is smaller than the available generation power:

$$P_{Sch} < P_{Av} \quad (11)$$

where $P_{Sch}$ and $P_{Av}$ are the scheduled power and the available power, respectively. In this case, the penalty cost due to underestimation is given by:

$$C_{sub} = c_u \left( P_{Av} - P_{Sch} \right), \quad P_{Sch} \le P_{Av} \le P_{max} \quad (12)$$

where $c_u$ is the penalty cost coefficient due to underestimation and $P_{max}$ is the generator maximum output power. Now, because of the variability of the renewable power sources, the power generated by these sources has an associated Probability Density Function (PDF) $f_n(P)$. The uncertainty cost due to underestimation is defined as the expected value of $C_{sub}$ (developed from Expression (12)):

$$C_u(P_{Sch}) = E[C_{sub}] = \int_{P_{Sch}}^{P_{max}} c_u \left( P - P_{Sch} \right) f_n(P) \, dP \quad (13)$$

Uncertainty Cost Due to Overestimation

Costs due to overestimation refer to the power that cannot be supplied by a renewable generator because the available power is smaller than the previously scheduled power:

$$P_{Av} < P_{Sch} \quad (14)$$

In this case, the penalty cost due to overestimation is given by:

$$C_{so} = c_o \left( P_{Sch} - P_{Av} \right), \quad P_{min} \le P_{Av} \le P_{Sch} \quad (15)$$

where $c_o$ is the penalty cost coefficient due to overestimation and $P_{min}$ is the generator minimum output power.

In the same way as the underestimation condition, and based on the stochastic nature of renewable sources, the uncertainty cost due to overestimation is given by the expected value of $C_{so}$ (developed from Expression (15)):

$$C_o(P_{Sch}) = E[C_{so}] = \int_{P_{min}}^{P_{Sch}} c_o \left( P_{Sch} - P \right) f_n(P) \, dP \quad (16)$$

Finally, the Uncertainty Cost Function (UCF) for a given renewable source is equal to the sum of underestimation and overestimation costs (developed from Expressions (13) and (16), respectively):

$$UCF(P_{Sch}) = C_u(P_{Sch}) + C_o(P_{Sch}) \quad (17)$$

In [11], the authors of this paper presented the development of the uncertainty cost functions of PVG, WEG, PEV and RHG, and the formulation and application of their marginal cost functions. These are useful as target functions in this paper, and the marginal cost functions are used in the analytical algorithm applied here.
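As an illustration of how expressions (13), (16) and (17) can be evaluated in practice, the following Python sketch computes the UCF for a scheduled power by numerical integration, assuming a generic available-power PDF; the normal distribution, power limits and penalty coefficients used here are illustrative assumptions, not parameters from the cited studies.

```python
# Minimal sketch of evaluating the UCF of expression (17) by numerical
# integration, for a generic available-power PDF f_n(P). All distribution
# parameters, limits and cost coefficients below are illustrative.

from scipy.integrate import quad
from scipy.stats import norm

def ucf(p_sch, pdf, c_u, c_o, p_min, p_max):
    under = quad(lambda p: c_u * (p - p_sch) * pdf(p), p_sch, p_max)[0]  # Eq. (13)
    over = quad(lambda p: c_o * (p_sch - p) * pdf(p), p_min, p_sch)[0]   # Eq. (16)
    return under + over                                                 # Eq. (17)

# Hypothetical normally distributed available power (MW)
pdf = norm(loc=20.0, scale=4.0).pdf
for p_sch in (10.0, 20.0, 30.0):
    cost = ucf(p_sch, pdf, c_u=30.0, c_o=70.0, p_min=0.0, p_max=40.0)
    print(p_sch, round(cost, 2))  # UCF as a function of the scheduled power
```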
Constraints
Below are the constraints that are taken into account for the optimal power flow [18]. The active power limits of each generator must be respected:

$$P_{min,i} \le P_i \le P_{max,i}, \quad i = 1 \ldots n_{gen} \quad (18)$$

The reactive power limits of each generator must be respected:

$$Q_{min,i} \le Q_i \le Q_{max,i}, \quad i = 1 \ldots n_{gen} \quad (19)$$

The voltage limits at each system bus must be respected:

$$V_{min,k} \le V_k \le V_{max,k}, \quad k = 1 \ldots n_{bus} \quad (20)$$

The loadability of any line or transformer should not be exceeded. For this restriction, the maximum apparent power of each element, the input power and the output power are taken into account:

$$S_{p,in} \le S_{max,p}, \quad p = 1 \ldots n_{lines} \quad (21)$$
$$S_{p,out} \le S_{max,p}, \quad p = 1 \ldots n_{lines} \quad (22)$$
$$S_{t,in} \le S_{max,t}, \quad t = 1 \ldots n_{transformers} \quad (23)$$
$$S_{t,out} \le S_{max,t}, \quad t = 1 \ldots n_{transformers} \quad (24)$$
Finally, for each node k of the system, the balance of active and reactive power must be maintained, that is, the load flow [19]:

$$P_{G,k} - P_{D,k} = V_k \sum_{j=1}^{n_{bus}} V_j \left( G_{kj} \cos\theta_{kj} + B_{kj} \sin\theta_{kj} \right) \quad (25)$$

$$Q_{G,k} - Q_{D,k} = V_k \sum_{j=1}^{n_{bus}} V_j \left( G_{kj} \sin\theta_{kj} - B_{kj} \cos\theta_{kj} \right) \quad (26)$$

where $P_{G,k}$ and $Q_{G,k}$ are the active and reactive powers generated at node k, $P_{D,k}$ and $Q_{D,k}$ are the powers demanded at node k, $G_{kj}$ and $B_{kj}$ are the real and imaginary parts of the bus admittance matrix, and $\theta_{kj}$ is the angle difference between nodes k and j.
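The following Python sketch evaluates the right-hand sides of the balance equations (25) and (26) from a bus admittance matrix, which is the computation an OPF solver must satisfy at every node; the two-bus network and all its values are illustrative.

```python
# Sketch of the nodal power balance of expressions (25)-(26): given a bus
# admittance matrix Ybus = G + jB, voltage magnitudes V and angles theta,
# compute the injected P and Q at every bus. Inputs are illustrative.

import numpy as np

def power_injections(ybus: np.ndarray, v: np.ndarray, theta: np.ndarray):
    g, b = ybus.real, ybus.imag
    n = len(v)
    p, q = np.zeros(n), np.zeros(n)
    for k in range(n):
        for j in range(n):
            dkj = theta[k] - theta[j]
            p[k] += v[k] * v[j] * (g[k, j] * np.cos(dkj) + b[k, j] * np.sin(dkj))
            q[k] += v[k] * v[j] * (g[k, j] * np.sin(dkj) - b[k, j] * np.cos(dkj))
    return p, q  # must equal P_G - P_D and Q_G - Q_D at each bus

# Tiny 2-bus example (per-unit): a single line of admittance 1 - 10j
y_line = 1 - 10j
ybus = np.array([[y_line, -y_line], [-y_line, y_line]])
p, q = power_injections(ybus, v=np.array([1.0, 0.98]), theta=np.array([0.0, -0.05]))
print(p, q)
```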
DEEPSO Algorithm
In this paper, two metaheuristic algorithms were used for the validation of the results obtained by the analytical method. The first one is the DEEPSO (Differential Evolutionary Particle Swarm Optimization) algorithm and will be studied in this subsection. The other algorithm used in the validation of results is the genetic algorithm, which will be described in the next subsection.
The DEEPSO algorithm is an improvement of the PSO (Particle Swarm Optimization) algorithm, to which evolutionary and differential computing techniques are added [22]; it is for this reason that, for a better understanding of the algorithm implemented in this work, we will begin by describing the PSO algorithm.
PSO Algorithm
The Particle Swarm Optimization (PSO) algorithm is an algorithm based on the behavior of fish or bird populations, which either seek to avoid a predator or search for food [23]. There is an initial population of particles of size (number of elements) N and dimension (number of variables) D, which is denoted as follows:

$$X = [X_1, X_2, \ldots, X_N] \quad (27)$$
$$X_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,D}] \quad (28)$$

A particle $X^{(1)}$ moves to a new position $X^{(2)}$ according to the following rule [22][23][24]:

$$X^{(2)} = X^{(1)} + V^{(2)} \quad (29)$$

where V is known as the velocity of the particle and is defined as:

$$V^{(2)} = A V^{(1)} + B \left( P_{best} - X^{(1)} \right) + C \left( G_{best} - X^{(1)} \right) \quad (30)$$

The expressions (29) and (30) can be rewritten in vector form as follows for each element i of the population [24]:

$$X_i^{(k+1)} = X_i^{(k)} + V_i^{(k+1)} \quad (31)$$
$$V_i^{(k+1)} = w V_i^{(k)} + c_1 r_1 \left( P_{best,i}^{(k)} - X_i^{(k)} \right) + c_2 r_2 \left( G_{best}^{(k)} - X_i^{(k)} \right) \quad (32)$$

In the expression (32), w represents a scalar value that descends from $w_{max} = 0.9$ to $w_{min} = 0.4$ as the number of iterations of the algorithm progresses [24], $c_1$ and $c_2$ are constants inherent to the method, and $r_1$ and $r_2$ are random values uniformly distributed between 0 and 1.
The operation of the PSO algorithm consists of each particle searching for new optimal values using the expressions (31) and (32), where a new position of particle i is computed taking into account the best historical position of each particle, $P_{best,i}^{(k)}$ (that is, the value of $X_i$ that has given the best value of the objective function), and the best historical position of the total set of particles, $G_{best}^{(k)}$. This is illustrated in Figure 1.
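A compact Python sketch of this scheme, following expressions (31) and (32) with a linearly decreasing inertia weight, is shown below; the sphere function stands in for the much more involved OPF objective with penalties, and all parameter values are illustrative.

```python
# A compact PSO sketch following Eqs. (31)-(32), with the inertia weight w
# decreasing linearly from 0.9 to 0.4. The sphere function is a stand-in
# for the OPF fitness function; bounds and constants are illustrative.

import numpy as np

def pso(f, dim, n_particles=30, iters=200, c1=2.0, c2=2.0, bounds=(-5, 5)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    p_best = x.copy()
    p_best_val = np.apply_along_axis(f, 1, x)
    g_best = p_best[p_best_val.argmin()].copy()
    for k in range(iters):
        w = 0.9 - (0.9 - 0.4) * k / iters                  # decreasing inertia
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # Eq. (32)
        x = np.clip(x + v, lo, hi)                         # Eq. (31) + limits
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < p_best_val
        p_best[improved], p_best_val[improved] = x[improved], vals[improved]
        g_best = p_best[p_best_val.argmin()].copy()
    return g_best, p_best_val.min()

best_x, best_val = pso(lambda z: float(np.sum(z**2)), dim=5)
print(best_val)  # should approach 0 for the sphere function
```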
EPSO Algorithm
The EPSO (Evolutionary Particle Swarm Optimization) algorithm is a hybrid of the PSO algorithm and evolutionary techniques. According to what was proposed by Vladimiro Miranda, the algorithm consists of the following steps [22,25]:
1. Each particle is replicated r times; usually r is equal to 1;
2. Parameters A, B, C of expression (30) for the r replicas are mutated;
3. Each of the r + 1 particles creates a new generation from the particle motion rules (29) and (30);
4. The value of the objective function is evaluated for each particle of the new generation;
5. Through some selection procedure, the best descendants of each ancestor are selected to form a new generation.
DEEPSO Algorithm
The DEEPSO algorithm is a variation of the EPSO algorithm in which a modified form of the velocity equation (30), shown below, is used [22]:

$$V^{(k+1)} = A V^{(k)} + B \left( X_{r1}^{(k)} - X_{r2}^{(k)} \right) + C \, P \left( b_G^* - X^{(k)} \right) \quad (33)$$

In the expression (33), $b_G^*$ is given by:

$$b_G^* = b_G \left( 1 + w_G N(0, 1) \right) \quad (34)$$

That is to say, $b_G$ is affected by normal noise with average 0 and standard deviation $w_G$. P is a random matrix of ones and zeros, where each element has a 75% probability of being 1 and a 25% probability of being 0 [22]. $X_{r1}$ and $X_{r2}$ are any two points of the population that define four types of implementation of the DEEPSO algorithm, as defined in [22]. For the case of this study, the implementation of the algorithm used was called DEEPSO Pb-rnd, in which $X_{r1}^{(k)}$ and each row of the matrix $b^{(k)}$ are chosen from $P_b$, the particle vector with the best historical values; on the other hand, $X_{r2}^{(k)} = X^{(k)}$, and for the case of minimization, the following must be fulfilled [22]:

$$F(X_{r1}^{(k)}) < F(X_{r2}^{(k)}) \quad (35)$$

The flow chart for the DEEPSO algorithm implemented to solve the OPF is shown in Figure 2.
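The following Python sketch illustrates one DEEPSO-style velocity and position update following expressions (33) and (34); the weight-mutation scheme, shapes and parameter values are simplified assumptions for illustration and do not reproduce the full Pb-rnd implementation of [22].

```python
# One DEEPSO-style velocity/position update following Eqs. (33)-(34):
# mutated weights A, B, C, a 0/1 communication matrix P with 75% ones,
# and a noisy global best b*_G. The mutation scheme and all values are
# simplified, illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def deepso_step(x, v, x_r1, b_g, weights, w_g=0.1, tau=0.2):
    a, b, c = (w * (1 + tau * rng.standard_normal()) for w in weights)  # mutate A, B, C
    p = (rng.random(x.shape) < 0.75).astype(float)        # communication matrix P
    b_g_star = b_g * (1 + w_g * rng.standard_normal())    # Eq. (34)
    v_new = a * v + b * (x_r1 - x) + c * p * (b_g_star - x)  # Eq. (33), X_r2 = X
    return x + v_new, v_new

# Toy 3-variable particle
x = np.array([1.0, -2.0, 0.5])
v = np.zeros(3)
x_r1 = np.array([0.8, -1.5, 0.2])   # drawn from the historical bests P_b
b_g = np.array([0.0, 0.0, 0.0])     # current global best
x, v = deepso_step(x, v, x_r1, b_g, weights=(0.7, 1.5, 1.5))
print(x)
```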
Genetic Algorithm
The genetic algorithm is a metaheuristic technique initially conceived for solving discrete variable problems. A modification of the genetic algorithm designed to solve continuous variable problems was used for this work [26]. In the following subsections, the implemented genetic algorithm will be described.
Genetic Algorithm Pseudocode
The pseudocode of the genetic algorithm implemented in order to solve the optimal power flow is presented below:
1. Generate an initial random population;
2. Evaluate the objective function for each element of the initial population;
3. Carry out a selection scheme to choose the individuals with the greatest probability of reproduction;
4. Carry out the reproduction and recombination of the selected individuals;
5. Mutate a percentage of the individuals of the new generation;
6. Evaluate the objective function for each element of the new population;
7. Select the individuals with the best objective function values.
Generation of the Initial Population
To generate the initial population, we proceeded in a similar way to the case of the DEEPSO algorithm, that is, values of variables that met the proposed restrictions were randomly selected.
Evaluation of the Objective Function of the Initial Population
For the genetic algorithm, the implemented objective function, which from now on will be known as the fitness function, is shown in the expression (36):

$$fitness(X) = \begin{cases} F(X), & \text{if all constraints are satisfied} \\ \infty, & \text{if any constraint is violated} \end{cases} \quad (36)$$

Thus, if a restriction is violated, the fitness function will assign an infinite value to the cost function, which is known as a penalty.
Selection
For the selection, the tournament scheme described in references [27,28] was used, in which k groups of m elements were established, where each element m is the value of the objective function associated with a given individual of the population. Since this work seeks to solve a minimization problem, in each group k the m elements are ordered from highest to lowest, and a probability of recombination and reproduction is assigned to each element depending on the position it occupies in the array, according to the expression (37):

$$P(pos) = \frac{2 \, pos}{n(n+1)}; \quad pos = 1, 2, 3, \ldots, m \quad (37)$$
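A small Python sketch of this ranking-based assignment of probabilities, following expression (37), is given below; the fitness values are hypothetical and the group is treated as a minimization tournament.

```python
# Sketch of the ranking-based selection probability of expression (37):
# within a tournament group, individuals are sorted from worst to best and
# the pos-th individual receives probability 2*pos / (n*(n+1)).

import numpy as np

def ranking_probabilities(fitness_group):
    n = len(fitness_group)
    # For minimization, the worst (highest) fitness gets pos = 1
    order = np.argsort(fitness_group)[::-1]
    probs = np.empty(n)
    probs[order] = [2 * pos / (n * (n + 1)) for pos in range(1, n + 1)]
    return probs

group = np.array([120.0, 95.0, 300.0, 101.0])  # hypothetical fitness values
print(ranking_probabilities(group))            # sums to 1; best individual gets the most
```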
Reproduction and Recombination
For the reproduction phase, two different parents are randomly selected from the entire population according to the reproductive probabilities given in the selection phase. The recombination strategy actually used, taken from [26], is shown below. There is a vector α, shown in the following expression:

$$\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_n] \quad (38)$$

where n is the number of decision variables of the optimization problem. Each value $\alpha_i$ is randomly and uniformly distributed in the interval $[-\gamma, 1 + \gamma]$ with $\gamma = 0.1$. Now, we have the parent vectors $x_1$ and $x_2$ that are shown in the expressions (39) and (40):

$$x_1 = [x_{1,1}, x_{1,2}, \ldots, x_{1,n}] \quad (39)$$
$$x_2 = [x_{2,1}, x_{2,2}, \ldots, x_{2,n}] \quad (40)$$

which produce the offspring vectors $y_1$ and $y_2$ of expressions (41) and (42):

$$y_1 = [y_{1,1}, y_{1,2}, \ldots, y_{1,n}] \quad (41)$$
$$y_2 = [y_{2,1}, y_{2,2}, \ldots, y_{2,n}] \quad (42)$$

The recombination procedure for each element of $y_1$ and $y_2$ is presented in expressions (43) and (44):

$$y_{1,i} = \alpha_i x_{1,i} + (1 - \alpha_i) x_{2,i} \quad (43)$$
$$y_{2,i} = \alpha_i x_{2,i} + (1 - \alpha_i) x_{1,i} \quad (44)$$

Finally, it is clarified that as many parents as elements in the initial population are selected and each pair of parents has a pair of children, so that the new generation (children) has the same size as the initial population and the total population is twice the initial population size.
Mutation
The mutation takes place for a percentage of individuals of the new generation, that is, not all individuals of the new generation are mutated. For mutated individuals, each variable is modified by adding a normally distributed value with mean 0 and standard deviation σ.
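The following Python sketch combines the recombination of expressions (43) and (44) with the Gaussian mutation just described; γ = 0.1 follows the text, while the mutation probability, σ and the parent vectors are illustrative assumptions.

```python
# Sketch of the blend recombination of Eqs. (43)-(44) and the Gaussian
# mutation described above; gamma = 0.1 follows the text, while p_mut,
# sigma and the parent vectors are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

def recombine(x1, x2, gamma=0.1):
    alpha = rng.uniform(-gamma, 1 + gamma, size=x1.shape)  # Eq. (38)
    y1 = alpha * x1 + (1 - alpha) * x2                     # Eq. (43)
    y2 = alpha * x2 + (1 - alpha) * x1                     # Eq. (44)
    return y1, y2

def mutate(x, p_mut=0.2, sigma=0.05):
    if rng.random() < p_mut:            # only a fraction of individuals mutate
        x = x + rng.normal(0.0, sigma, size=x.shape)
    return x

parent1 = np.array([1.00, 0.95, 30.0])  # hypothetical decision variables
parent2 = np.array([1.02, 1.01, 45.0])
child1, child2 = recombine(parent1, parent2)
print(mutate(child1), mutate(child2))
```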
Evaluation and Selection
For the new generation, the fitness function is assessed. Finally, all the existing individuals, that is, the first and second generations, are ordered from highest to lowest, and the 50% of the population with the lowest objective function values is selected. This reduces the size of the population to its initial value.
Genetic Algorithm Flow Diagram
The flow chart of the genetic algorithm is shown in Figure 3.
Analytical Algorithm: Interior Point Method
In this subsection, a description of the interior point method will be presented, since this method was the analytical technique used to solve the optimal power flow in view of its ability to solve non-linear optimization problems. The description presented here is illustrative of the Matpower MIPS solver method [18]. It should be clarified that the solvers used to solve the optimal power flow can use different variations of the method [18,29,30].
The choice of the interior point method is due to a limitation on the part of the software that was used to solve the optimal power flow, since solvers that use Matlab code must be used to implement the non-linear cost functions, their gradients (marginal costs) and their Hessians (derivatives of marginal costs) [18].
Karush-Kuhn-Tucker Conditions
Karush-Kuhn-Tucker conditions are necessary conditions for a point to be a restricted local optimum [31].
If $x^*$ is a local maximum for the following problem [31]:

$$\max f(x) \quad (45)$$
$$\text{subject to} \quad g_i(x) \le b_i, \quad i = 1, \ldots, m \quad (46)$$

and the gradients of the constraints at the optimum, $\nabla g_i(x^*)$, are independent, then there is a vector $\lambda^* = [\lambda_1, \lambda_2, \ldots, \lambda_m]^T$ such that [31]:

$$\nabla f(x^*) - \sum_{i=1}^{m} \lambda_i^* \nabla g_i(x^*) = 0 \quad (47)$$
$$\lambda_i^* \left( b_i - g_i(x^*) \right) = 0, \quad i = 1, \ldots, m \quad (48)$$
$$\lambda_i^* \ge 0, \quad i = 1, \ldots, m \quad (49)$$
Lagrangian Function
The Lagrangian function is defined as shown below [31]:

$$\mathcal{L}(x, \lambda) = f(x) - \sum_{i=1}^{m} \lambda_i g_i(x) \quad (53)$$

Thus, if we only have equality restrictions, it is possible to rewrite the Karush-Kuhn-Tucker conditions in terms of the Lagrangian function (53), as shown in the expressions (54) and (55) [31]:

$$\nabla_x \mathcal{L}(x^*, \lambda^*) = 0 \quad (54)$$
$$\nabla_\lambda \mathcal{L}(x^*, \lambda^*) = 0 \quad (55)$$

This fact is the basis of the interior point method.
MIPS based Optimization
The optimization problem under study is redefined below [18]:

$$\min_{X} f(X) \quad (56)$$

Subject to:

$$G(X) = 0 \quad (57)$$
$$H(X) \le 0 \quad (58)$$

This optimization problem then becomes a problem where there are only equality restrictions, through the inclusion of slack variables Z and the addition of a barrier function to the objective function:

$$\min_{X, Z} \; f(X) - \gamma \sum_{m=1}^{n_i} \ln(Z_m) \quad (59)$$

Subject to:

$$G(X) = 0 \quad (60)$$
$$H(X) + Z = 0 \quad (61)$$
$$Z > 0 \quad (62)$$

As the perturbation parameter γ approaches zero, the solution of the previous problem approaches the solution of the original problem. The Lagrangian for a given value of γ for the previous optimization problem is then:

$$\mathcal{L}^{\gamma}(X, Z, \lambda, \mu) = f(X) + \lambda^T G(X) + \mu^T \left( H(X) + Z \right) - \gamma \sum_{m=1}^{n_i} \ln(Z_m) \quad (63)$$

From expression (63), the gradient with respect to each of the variables is obtained:

$$\mathcal{L}^{\gamma}_X = f_X + \lambda^T G_X + \mu^T H_X \quad (64)$$
$$\mathcal{L}^{\gamma}_Z = \mu^T - \gamma e^T [Z]^{-1} \quad (65)$$
$$\mathcal{L}^{\gamma}_\lambda = G^T(X) \quad (66)$$
$$\mathcal{L}^{\gamma}_\mu = H^T(X) + Z^T \quad (67)$$

It should be noted that e is a vector of ones and [A] represents a diagonal matrix with vector A on the diagonal.
To satisfy the Karush-Kuhn-Tucker conditions, the values of X, Z, λ and µ must be found which make expressions (64) to (67) equal to zero as γ approaches zero. To determine these values, the Newton-Raphson method is used, which produces the following matrix equation:

$$\begin{bmatrix} \mathcal{L}^{\gamma}_{XX} & 0 & G_X^T & H_X^T \\ 0 & [\mu] & 0 & [Z] \\ G_X & 0 & 0 & 0 \\ H_X & I & 0 & 0 \end{bmatrix} \begin{bmatrix} \Delta X \\ \Delta Z \\ \Delta \lambda \\ \Delta \mu \end{bmatrix} = - \begin{bmatrix} (\mathcal{L}^{\gamma}_X)^T \\ [\mu] Z - \gamma e \\ G(X) \\ H(X) + Z \end{bmatrix} \quad (68)$$

Since it is possible that the variations ΔX, ΔZ, Δλ and Δµ take the variables to a new infeasible point, they are usually truncated by α coefficients as shown below:

$$X \leftarrow X + \alpha_p \Delta X, \quad Z \leftarrow Z + \alpha_p \Delta Z \quad (69)$$
$$\lambda \leftarrow \lambda + \alpha_d \Delta \lambda, \quad \mu \leftarrow \mu + \alpha_d \Delta \mu \quad (70)$$

There are different ways to determine the values of α, such as merit functions, trust regions, and filtering methods [31]. Finally, the value of the perturbation parameter γ must be updated; as mentioned, it should decrease as the number of iterations increases.
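To make the barrier idea tangible, the following Python sketch minimizes a one-variable quadratic subject to an inequality constraint, using a logarithmic barrier and damped Newton steps while shrinking γ toward zero; this is a didactic illustration of the method under stated assumptions, not the MIPS implementation.

```python
# A minimal log-barrier sketch of the interior point idea: minimize
# f(x) = (x - 3)^2 subject to x <= 1 by minimizing the barrier objective
# f(x) - gamma * ln(1 - x) with Newton steps while shrinking gamma.

def barrier_solve(gamma0=1.0, shrink=0.2, tol=1e-8):
    x = 0.0                                    # strictly feasible start (x < 1)
    gamma = gamma0
    while gamma > tol:
        for _ in range(50):                    # Newton iterations for this gamma
            grad = 2 * (x - 3) + gamma / (1 - x)
            hess = 2 + gamma / (1 - x) ** 2
            step = grad / hess
            # Damp the step so the iterate stays strictly inside x < 1
            alpha = 1.0
            while x - alpha * step >= 1:
                alpha *= 0.5
            x -= alpha * step
            if abs(grad) < tol:
                break
        gamma *= shrink                        # gamma -> 0, as in the text
    return x

print(barrier_solve())  # approaches the constrained optimum x* = 1
```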
Results: Optimal Power Flow Solution Considering Heuristic and Analytical Algorithms
For the solution of the optimal power flow, two test systems were considered: the nine-node IEEE system and the 57-node IEEE system. This section shows the results of the optimal power flow solution in each of the aforementioned systems and a validation and comparison of the results obtained by analytical and heuristic methods.
Scenarios Analyzed

Scenario 1
Scenario 1 consists of keeping the generators connected to nodes 1 and 2 as they are established in the original case, that is, with quadratic cost functions; however, the generator connected to node 3 is assumed to be a hydraulic generator with the cost functions defined in [32].
Scenario 2
Scenario 2 now contemplates the connection of a solar plant to node 3, replacing the existing conventional generator. The other generators are not modified.
Scenario 3
In the same way as in scenarios 1 and 2, the conventional generator connected to node 3 is replaced by a wind generator.
Scenario 4
In this scenario, a new controllable load is added to node 5 with a maximum value equal to −30 MW with an average of −19.54 MW and a standard deviation of 0.54 MW.
Tables 1-4 show the parameters of the renewable generators and the controllable load. The results obtained for all the scenarios in the nine-node system are observed in Tables 5-8 where the active and reactive power values, the voltage at the slack node, the execution time and the value of the objective function are shown for each optimization technique or solver with which the optimal power flow was solved. It is noteworthy that, for scenario 1, as seen in Table 5, the DEEPSO algorithm yielded a slightly lower value than the analytical solver FMINCON. It is also observed that the analytical solver FMINCON outperforms heuristic algorithms in terms of computation times. On the other hand, it is also observed that the IPOPT and MIPS solvers did not converge for this scenario.
In scenarios 2 and 3 shown in Tables 6 and 7, the three solvers analyzed reached the local optimum value. The MIPS solver had the best performance in terms of execution time. For the evaluation of the objective function the three solvers reached very similar answers.
In scenario 4, shown in Table 8, the IPOPT solver did not converge, while FMINCON had an excellent performance in terms of computational time. From the analyzed scenarios, the superiority in terms of computation times of the analytical methods with respect to the metaheuristic algorithms is clearly observed.
Although the genetic algorithm converged much faster than the DEEPSO algorithm, the results of the DEEPSO algorithm are much closer to the results obtained by the analytical methods; this is mainly observed in the reactive power values obtained for each generator. It is also observed in the analytical methods that, although the active power values in generators produced by each method are too close to each other, there are small differences in the reactive power values.
Fifty-Seven-Node IEEE System Description
The IEEE 57-bus test system is used to evaluate the optimization problem of this study. Based on details given in [4] for system buses and branches, the data of the system have been structured in the MATPOWER [6] data format. Branch thermal limits were defined based on reference values given in [5]. A summary of the characteristics of the test system is: seven generators, 42 loads, 63 lines/cables, 15 step-wise transformers, two fixed-tap transformers and three binary on/off shunt compensators. Figure 5 below shows the 57-node system on which solar, wind, and run-of-the-river generators, controllable loads and batteries were mounted.
Scenarios Analyzed
For the 57-node system, only one scenario was analyzed, which took into account all types of generation. We begin by clarifying that, for this system, the slack node is node 1. The nodes whose generators were replaced by generators with uncertain costs are listed below, together with the nodes to which some type of generation was added that previously did not have it.
• The generator connected to node 3 is replaced by a wind generator with uncertain costs;
• The generator connected to node 6 is replaced by a solar generator with uncertain costs;
• The generator connected to node 9 is replaced by a run-of-river hydraulic generator with uncertain costs;
• A battery bank is connected to node 57;
• A controllable load, an electric vehicle charging station, is connected to node 47.
The parameters of the cost functions of the generators with uncertainty costs and of the batteries are shown in Tables 9-12 (data from previous research: [32]). The results obtained are presented in Table 13, where it is observed that the optimal power flow solution could be found using all optimization techniques. It is also observed that, for the heuristic methods, 17 decision variables are used, namely, the active and reactive powers of all generators except the slack, plus the voltage of the slack node. Again, the interior point method is superior with respect to the speed of solving the problem, since the metaheuristic techniques took more than 10 min to solve it. From the results shown in Table 13, it is possible to see that, as the number of variables in a problem grows, analytical methods gain an even greater advantage over the heuristics in terms of the speed with which they solve the optimal power flow, both because they are not subject to the random initialization of the decision variables in the first iteration and because they avoid the high computational cost that heuristics incur for large systems.
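The following is a minimal sketch of how such a decision vector and its fitness function might be assembled for a population-based solver. The count of eight controllable units besides the slack is one plausible reading of the 17-variable figure above (8 P + 8 Q + slack voltage); the limits, cost coefficients, and the `power_balance_mismatch` placeholder are all invented, since a real implementation would run a full AC power flow for each candidate.

```python
import numpy as np

N_UNITS = 8        # controllable units other than the slack (assumed count)
rng = np.random.default_rng(0)

# Decision vector x = [P_1..P_n, Q_1..Q_n, V_slack]: 2 * 8 + 1 = 17 variables.
P_MIN, P_MAX = 0.0, 100.0      # MW, placeholder limits
Q_MIN, Q_MAX = -60.0, 60.0     # MVAr, placeholder limits
V_MIN, V_MAX = 0.94, 1.06      # p.u.

lower = np.r_[np.full(N_UNITS, P_MIN), np.full(N_UNITS, Q_MIN), V_MIN]
upper = np.r_[np.full(N_UNITS, P_MAX), np.full(N_UNITS, Q_MAX), V_MAX]

def generation_cost(P):
    # Placeholder quadratic costs; the paper adds uncertainty cost terms
    # from [32] on top of the conventional costs.
    return np.sum(100 + 20 * P + 0.05 * P**2)

def power_balance_mismatch(x):
    # Stand-in for a full power flow evaluation of the candidate solution.
    P = x[:N_UNITS]
    return abs(P.sum() - 450.0)   # 450 MW total demand, illustrative

def fitness(x):
    # Objective plus a quadratic penalty for constraint violation.
    return generation_cost(x[:N_UNITS]) + 1e4 * power_balance_mismatch(x) ** 2

# Random initial population, as used by GA/DEEPSO-style methods; this random
# start is the source of the run-to-run variation noted above.
pop = rng.uniform(lower, upper, size=(50, 2 * N_UNITS + 1))
scores = np.apply_along_axis(fitness, 1, pop)
print("best initial fitness:", scores.min())
```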
It is also observed that the analytical optimum presents a value of the objective function that is lower than that reached by the heuristic methods. Furthermore, in this case the DEEPSO algorithm does not behave as well, yielding the highest value of the objective function in the 57-node system. Additionally, the genetic algorithm reaches a lower value of the objective function and takes less time than the DEEPSO algorithm. In this way, the appropriate optimization technique for solving the OPF was determined, with the cost functions of the renewable systems also taken into account.
Conclusions
In this paper, an optimal power flow was developed that takes into account the uncertainties produced by the penetration of renewable energy sources and controllable loads. It was observed that including the Jacobian (first derivatives) and the Hessian (second derivatives) of the uncertainty cost functions in the optimal power flow model brings great benefits, since it allows the use of analytical techniques for the determination of active and reactive power dispatches. The main benefit of analytical techniques over the metaheuristic techniques used in the state of the art for the solution of the optimal power flow is the reduction of computation times, which was evidenced in all the cases analyzed, as shown in Table 13. Additionally, it was observed that the IPOPT solver did not converge to a solution in several of the systems and scenarios proposed. On the other hand, the Matlab FMINCON solver converged to a solution in all cases. The MIPS solver did not converge in scenario 1 of the nine-node system.
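As a sketch of why supplying derivatives helps, the snippet below passes an analytic gradient and Hessian to SciPy's trust-region interior-point-style solver (`trust-constr`), in the same spirit in which the Jacobian and Hessian of the uncertainty cost functions are supplied here. The quadratic cost is a placeholder, not the uncertainty cost functions of [32], and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint, Bounds

# Placeholder dispatch cost f(P) = sum(a + b*P + c*P^2). Its first and
# second derivatives are supplied analytically so the solver never has to
# approximate them by finite differences.
a = np.array([100.0, 120.0, 90.0])
b = np.array([20.0, 15.0, 25.0])
c = np.array([0.05, 0.08, 0.06])

def f(P):
    return float(np.sum(a + b * P + c * P**2))

def jac(P):                        # first derivatives: b + 2cP
    return b + 2 * c * P

def hess(P):                       # second derivatives: diag(2c)
    return np.diag(2 * c)

balance = LinearConstraint(np.ones((1, 3)), 300.0, 300.0)  # sum(P) = 300 MW
res = minimize(f, x0=np.full(3, 100.0), method="trust-constr",
               jac=jac, hess=hess,
               bounds=Bounds(np.zeros(3), np.full(3, 200.0)),
               constraints=[balance])
print(res.x, res.fun)
```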
On the other hand, heuristic methods were used in order to validate the results obtained through analytical methods, and very close responses were achieved for the active power dispatches in the IEEE nine- and 57-node systems, as shown in Tables 5-8 and 13. However, for the 118-node system, the heuristic methods yielded significantly different dispatch values with respect to the analytical methods; in addition, the dispatches produced by the analytical methods yielded a lower value of the objective function. Finally, the presented formulation and its respective solution allow network operators to have a tool to manage the controllable renewable resources found in the network, satisfying the different physical restrictions of the system.
In this way, in this paper the marginal costs of uncertainty are used to determine the active power values that a particular generator with uncertainty must deliver to minimize the uncertainty costs. The field of optimization in electrical power systems is in continuous development. In the case of controllable renewable sources and controllable loads, it is possible to extend this analysis to optimal power flows in DC. It is also possible to extend the optimal power flow formulation for controllable renewable systems and controllable loads to problems such as Unit Commitment or Security-Constrained Optimal Power Flow.
To solve the proposed OPF when controllable renewable systems and loads are considered, it is also necessary to take into account all the constraints that can affect the power system. The solution would allow the system to operate within the specified constraints while considering uncertainty cost functions, which makes the problem very complex. Including all constraints in the traditional interior-point OPF solution method could prevent convergence, and tackling the problem with metaheuristic strategies makes it hard to solve within short periods of time (e.g., the critical clearing times of a contingency in the case of the security-constrained optimal power flow) without high computational power.
In this study, the developed research relies on two aspects. The first is analytical methods, using marginal uncertainty cost functions, which allow us to use the interior point method for the optimization problem. The second is the solution of the problem using metaheuristic methods, in cases where it is not possible to determine the marginal cost functions. Marginal uncertainty cost function (MUCF) strategies can be employed in small power systems, using strong assumptions in the analytical modeling of the cost functions, which makes some of the applications infeasible in real operations or very costly in terms of hardware implementation for real systems. As a solution to those limitations, future research will address the problem using the potential of Parallel and Heterogeneous Computing (PHC).
"year": 2021,
"sha1": "ff0eae912790c2c36796c969579e1305bcd1dbdb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4893/14/10/276/pdf?version=1634288943",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f9dc97bb4d56db7c146ab4481bb14ab7bd6e0e04",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Exploring teacher self-efficacy in human evolution instruction following a dynamic hands-on professional development workshop
Background: Human evolution is a topic that is largely excluded from K-12 classrooms for a variety of reasons, including the inability, unwillingness, or lack of preparedness of educators to teach a topic that has been seen as controversial. This study explored how engagement in professional development infused with 3D printing and ways-of-knowing discussion influenced science teachers' self-efficacy for teaching human evolution. The professional development opportunity was designed to empower teachers and provide them with the tools necessary to incorporate human evolution into their curriculum. During this workshop, participants learned about paleontology and human origins, spoke with professional paleoanthropologists, discussed implementation strategies with evolution educators, and developed lesson plans centered around human evolution. To explore the role of this professional development in teachers' self-efficacy and perceptions of the teaching of evolution, we used a previously validated survey, employed in a pre-test and post-test format, and semi-structured focus group interviews.

Results: The results of this study indicate that the workshop positively impacted teacher perceptions of the teaching of evolution, with significant improvements on two of the three tested factors and the third factor approaching significance.

Conclusions: Our data demonstrate that a three-day workshop can successfully impact teachers' perceptions of the teaching of evolution and, in turn, increase the implementation of human evolution in K-12 classrooms. By specifically structuring the workshop content in a way that addressed many of the previously indicated obstacles in teaching evolution, we were able to positively impact educators and provide them with the information and tools necessary to add human evolution into their curricula.
Introduction
Perceived obstacles in the teaching of evolution in the K-12 classroom are well-documented (Alters and Nelson 2002; Geher et al. 2019; Kruger et al. 2012; Lerner 2000; Nelson 2008; Rohrbacher 2013; Scharmann 2005; Ziadie and Andrews 2018); however, assessing and addressing barriers against the implementation of human evolution as a specific approach has been largely unstudied. Previously addressed obstacles in the teaching of evolution include a lack of scientific literacy and distrust of the scientific community (Geher et al. 2019), a dearth of educator knowledge about the ever-changing field of human origins (Pobiner 2016), a deficiency of easily implemented teaching materials (Selba 2019), a lack of access to the actual fossils on which our understanding of evolution is based (Ziadie and Andrews 2018), as well as the perceived controversial nature of the teaching of evolution (Hermann 2008). Even within the implementation of human evolution curriculum, there is controversy regarding the real or perceived interdisciplinarity and scope of the material (Hanisch and Eirdosh 2020). The Next Generation Science Standards (NGSS), for example, reference key concepts of selection, common ancestry, and evidence for evolution writ large, but do not specifically address human evolution or encourage its inclusion in classroom teaching (NGSS 2013). Furthermore, although they are a set of national-level standards, they are not mandated and have been adopted by only 22 states, with states holding the authority to determine their own standards for teaching science. As a result, evolution has only recently been added to the statewide teaching standards in many states (for example, the word 'evolution' was not included in the teaching standards for Florida until 2008) (Fowler and Meisels 2010). Combining these factors leaves teachers interested in teaching evolution without the resources and support to do so. It can also discourage disinterested or skeptical teachers from incorporating it into their curriculum in the first place.
This study aimed to better understand how to help educators increase the inclusion of human evolution into existing K-12 science curricula with accuracy and confidence. The study was designed to address the following research question:

• How does human evolution teacher professional development integrating 3D printing and discussions of "ways of knowing" (i.e., the ways in which humans acquire knowledge and process experiences to make sense of the world) impact teacher perceptions and self-efficacy for teaching human evolution?
Review of literature
Research on the teaching and learning of evolution has expanded widely in the twenty-first century. However, national polls indicate that public perceptions of evolution have remained primarily unchanged over three decades, and further efforts are still required (Evolution, Creationism, Intelligent Design, 2019). Not only does the minimal change in public perceptions represent a challenge for evolution education, it is a critical blow to scientific literacy as a whole, suggesting that the by-and-large evidence-only approach to teaching scientific concepts seen by the public as "controversial" fails to reach goals of building a scientifically literate society (Rankey 2003; Robbins and Roy 2007; Schilders et al. 2009; Smith and Seigel 2016).
In response to the polls, researchers have focused on the foundations of knowledge, understanding, belief, and acceptance of evolution (Matthews 2001; Rutledge and Sadler 2011) to understand the interactions that drive public thinking (Miller et al. 2006). Similar work has looked toward university and K-12 classroom experiences, standards, and teaching approaches to foster change on a broader scale (Glaze and Goldston 2019; Ha et al. 2012; Hermann et al. 2020). Guided by our growing understanding of how students learn, how teachers engage their autonomy, what practices are effective in science, and how to actively mitigate conflict, it is now possible to put theory into practice, utilizing understandings of what does and does not work to build practical approaches that translate and transfer with fidelity into the classroom.
Evolution teaching, learning, and perceptions are complicated
Foundational explorations in evolution education center around differentiation and interactions among knowledge/understanding and acceptance of evolution (Kim and Nehm 2011; Matthews 2001; Nehm and Reilly 2007; Nehm and Schonfeld 2007). A common theme in research is whether knowledge or acceptance of evolution should be the goal of education, with differences noted between goals for K-12 education and post-secondary education (Barnes and Brownell 2016; Glaze 2017; Meadows 2009; Smith and Seigel 2016). While the literature finds little agreement on whether and to what extent knowledge impacts acceptance, it is clear that there is a disconnect between the two that does not follow the logical pattern shown in other topics (Bertka et al. 2019; Sinatra et al. 2003). As a result, subsequent studies often begin with at least a cursory exploration of evolution content knowledge or acceptance levels of students and teachers to establish baselines or explore groups compared to others. It has been concluded that increasing content alone is not enough to instigate conceptual change that leads to greater acceptance of evolution (Bertka et al. 2019; Barnes et al. 2017; Glaze and Goldston 2015; Glaze et al. 2015; Hermann 2012). Whether the goal is to ensure acceptance or increase understanding, students should learn about evolution from a scientific perspective in their science classes (Bertka et al. 2019).
Understanding and accepting evolution requires acknowledging barriers
When teaching evolution, we are combating challenges arising from content knowledge disparities, worldviews, culture, and negative perceptions (Bertka et al. 2019). In addition to content barriers, cultural objections play a role in the teaching and learning of evolution in the classroom, including but not limited to religious beliefs and historical contexts surrounding race (Bertka et al. 2019; Brem et al. 2003; Goldston and Kyzer 2009; Meadows et al. 2000). Failing to acknowledge worldview elements in a considerate and not suppressive way creates an environment of exclusion and discomfort, preventing conceptual change (Barnes et al. 2017; Bertka et al. 2019; Hermann 2012). Additionally, there can be conceptual challenges such as misconceptions, the semantics of the language of science, and a need for more modeling of how to cope with conflict (see reviews in Glaze and Goldston 2015; Glaze et al. 2015; Pobiner 2016). Conflict exists long before formal experience with the concepts in schools and persists long after those experiences where nothing is done to address concerns and support the navigation of the conflict (Bertka et al. 2019; Glaze and Goldston 2015; Glaze et al. 2015; Griffith and Brem 2004; Long 2012). Therefore, approaches meant to increase knowledge or acceptance, and thereby teachers' autonomous choices in the instruction of evolution, must address diverse elements, from content and conflict mitigation to the coping skills and pedagogical strategies that are desperately needed (Bertka et al. 2019).
Teacher autonomy impacts what and how evolution is taught in the classroom
While there are national Next Generation Science Standards (2013) for science education in the United States, each state can adopt or craft its own standards. In the Southeastern United States, where this study occurred, none of the states (TX, LA, MS, AL, GA, FL, SC, TN) adopted the national standards, although several elected to craft standards similar to the NGSS. As a result, states maintain autonomy in selecting topics covered in a given school year in science classes. At the same time, there is still a great deal of local control and very little oversight to ensure all standards are taught outside of standardized testing in most states. One by-product of the lack of national standards and assurance of coverage is that there is a great deal of autonomy on a classroom-by-classroom basis. Teachers are often responsible for selecting their curriculum either entirely or on a supplementary level and have an ultimate say in what and whether they teach evolution (Rutledge and Mitchell 2002). Evolution instruction notably impacts teacher persona and approach in the classroom. In a study of established classroom teachers, Goldston and Kyzer (2009) observed marked changes in how teachers spoke to their students, modeled thinking, and responded to questions when teaching evolution, all involving being less confident and engaged than those same teachers were when teaching other topics. Not only do many teachers demonstrate a limited understanding of the basic concepts of evolution, but even in the presence of advanced certifications and experience, they often struggle with the processes and practices of science, grouped as the Nature of Science (NOS) (Bartos and Lederman 2014). Limited understandings of fundamentals and NOS are critical failings, as this is where most of the misconceptions surrounding evolution and other sciences are grounded (e.g., law vs. theory, social constructs of science, even what science is and is not) (McComas 1997).
Science teachers' self-efficacy for teaching evolution
Teachers' self-efficacy beliefs also influence their instructional practices for teaching evolution. Self-efficacy beliefs impact "how teachers think, feel and teach" (Gibbs 2003, p. 1). Self-efficacy mediates various cognitive, affective, and volitional factors that define how we plan, organize, implement, and reflect on our activities (Bandura 1997). High self-efficacy beliefs support intrinsic motivation and increased engagement. On the other hand, low self-efficacy results in feelings of incompetence, diminished potential for higher-order cognition, and, consequently, decreased task performance (Bandura 1993).
Self-efficacy for teaching evolution specifically is known to be impacted by a host of factors, including religious views (Alters and Nelson 2002; Asghar et al. 2007), misconceptions about evolution (Gregory 2009; Meir et al. 2007), an inadequate level of acceptance of evolution (e.g., Kim and Nehm 2011; Peker et al. 2010), and a lack of understanding of the nature of science (Dagher and Bou-Jaoude 2005; Kim and Nehm 2011; Rutledge and Warden 2000). The ongoing perception in society that there is an evolution versus creationism "controversy" further hinders a teacher's ability to develop knowledge and self-efficacy for teaching evolution (Hawley and Sinatra 2019). Teachers must balance their sense of duty to their profession, the demands and concerns of their community, their response to the greater political climate, and their beliefs. Secondary science textbooks, a primary instructional resource for most teachers, caution new teachers to refrain from allowing students to debate the issue and to distinguish between a theory and a fact when teaching evolution (e.g., Chiappetta and Koballa 2002).
Teachers are encouraged to "plan ahead to determine how to deal with objections from students and parents who oppose instruction, including evolution. [Make] provisions to give students alternative work in science, if they wish to leave the classroom during instruction on evolution" (Chiappetta and Koballa 2002, p. 144). However, more support is needed for teachers on how to communicate with students and parents regarding teaching evolution. In addition, teachers often struggle with the issue of disclosure versus neutrality, that is, whether to share their personal views and opinions with students or to adopt the role of an impartial facilitator during the deliberation of evolution (Hermann 2008; Miller-Lane et al. 2006). It is no wonder that many teachers have low self-efficacy beliefs for teaching human evolution and feel anxious and stressed about the need to protect themselves from the potential consequences of conflict over teaching evolution.
Several studies have demonstrated that biology teachers often lack confidence in their knowledge of evolution (Glaze and Goldston 2015; Glaze et al. 2015; Griffith and Brem 2004). A path analysis study examining the relative effects of teacher self-efficacy, understanding and acceptance of evolution, and views on the nature of science revealed that higher levels of both understanding and acceptance of the theory and naive views on NOS were found to be associated with stronger self-efficacy beliefs for teaching evolution effectively (Akyol et al. 2012). Another relevant study examined the sources of pressure, resulting stresses, and coping strategies Arizona biology teachers devised for teaching evolution (Griffith and Brem 2004). Based on the results of focus groups, interviews, and surveys, teachers were clustered into three groups: "Conflicted," who struggled with their beliefs and the possible impact of their teaching; "Selective," who carefully avoided complex topics and situations; and "Scientists," who saw no place for controversial social issues in their science classroom. Teachers from each group felt that they could be more effective in teaching evolution if they possessed: (a) the most up-to-date information about evolution (interdisciplinary content knowledge), (b) a safe space in which to reflect on the possible social and personal implications with their peers, and (c) access to more rigorous and rich lesson plans for teaching evolution that include not only science but personal stories regarding how the lessons arose, and what problems and opportunities they created. The authors emphasized that workshops on the most recent information about evolution and how to teach it may enhance science teachers' confidence in teaching evolution.
Professional development impacts teacher perceptions and actions in the classroom
Teachers' attitudes directly impact their choices of what and how they teach in their classrooms (Rutledge and Mitchell 2002). Therefore, to impact curriculum and pedagogical selections, interventions are needed that positively impact teacher attitudes and confidence surrounding topics perceived as contentious. Effective professional development is a fundamental supporting tool in science education (Loucks-Horsley 2003) that has become more necessary since the implementation of the Next Generation Science Standards (2013). Although there are limited studies on evolution-specific professional learning opportunities for K-12 teachers, the research suggests that lasting, large-scale effects on teaching practices and confidence are possible through effective professional learning for teachers (Ha et al. 2015; Schrein et al. 2009).
In their study on evolution education interventions, Ha et al. (2015) identified only seven existing studies, most of which had yet to be replicated, all of which varied widely in focus and approach. Despite the variety, each of the studies demonstrated self-reported or measured impacts on teacher confidence (Firenze 1997), knowledge (Crawford et al. 2005; Firenze 1997; Nehm and Schonfeld 2007), enthusiasm (Firenze 1997), acceptance (Nadelson and Sinatra 2023; Southerland and Nadelson 2012), or willingness to teach evolution (Nehm and Schonfeld 2007), although there was no consistency or homogeneity across the studies in those outcomes. What the studies do agree on is that professional development can positively influence what and how teachers teach in their classrooms while providing ongoing content and pedagogical support for effective instruction, feelings of self-efficacy, and culturally relevant practices (Ha et al. 2015; Pobiner 2016).
Effective professional development for evolution requires a multi-dimensional approach
Teachers and students face internal and external pressure regarding the teaching and learning of evolution (Dotger et al. 2010; Glaze and Goldston 2015; Glaze et al. 2015; Pobiner 2016; Smith 2010a, b). One tool recommended by a wide range of studies in evolution education is ongoing professional learning that focuses on confidence, cultural considerations, and content knowledge (Berkman and Plutzer 2015; Glaze and Goldston 2015; Glaze et al. 2015; Hermann 2011; Pobiner 2016; Rutledge and Mitchell 2002). According to Bertka et al. (2019), classrooms must be structured in a way that approaches content and allows teachers and students to acknowledge controversy while including pedagogy to navigate the conflict between elements of worldview and evolution. The call for classroom experiences, teacher preparation, and professional development that meets those needs is strong (Crawford et al. 2005; Meadows 2009; Reiss 2009; Barnes et al. 2017; Wiles 2014; Yasri and Mancy 2016). However, evolution education researchers are only beginning to make headway in applying these approaches to work with students and teachers (Barnes et al. 2017; Bertka et al. 2019). In a study by Berkman and Plutzer (2012), more than half of surveyed teachers actively taught evolution using techniques they knew were not robust enough for addressing evolution with their students. Similarly, other studies have shown that teachers often approach evolution with misconceptions, inaccurate content, or alternatives to evolution, or avoid teaching evolution altogether for various reasons (Glaze and Goldston 2015; Glaze et al. 2015; Pobiner 2016).
Teachers face difficulty looking beyond their perceptions of conflict, confidence in content, pedagogical knowledge, incompatibility of beliefs, and concerns. However, these concerns must be identified and addressed to create meaningful learning experiences (Sanders and Ngxola 2009). With challenges coming from so many angles (teacher discomfort, low confidence in content, feelings of non-support, lack of pedagogy to address cultural/worldview conflict, and more), professional development that embraces multiple elements is crucial.
Human evolution presents a unique approach to personalizing evolution education in classrooms
While research on evolution education is steadily increasing, the connection between humans and the science, specifically our evolutionary history as a species, is most often missing. Admittedly, human evolution represents one of the most substantial hurdles for many when it comes to evolution. As recently as 2019, 40% of polled individuals believed that God created humans in their present form, and 33% believed that humans evolved with God guiding their evolution (Evolution, Creationism, Intelligent Design, 2019). Studies show that responses from confidence in teaching to acceptance of evolution can demonstrate a downward trend based on whether they discuss human evolution or evolution among non-human organisms (Pobiner 2016). While human evolution can represent a barrier to acceptance, it also represents a missed opportunity to frame evolutionary study in a manner that is both personal and connects us to science on a deeper level.
In this study, professional development targeted a variety of teacher concerns and areas of need, building from existing studies on teaching human evolution in K-12 settings. Included in the approach were content features on teaching human evolution in K-12 settings (Pobiner et al. 2018), using cladistics in support of evolutionary relationships and tree-thinking (Catley 2006; Walter et al. 2013), and addressing misconceptions and the nature of science (Schilders et al. 2009; Martin-Hansen 2010). Open discourse space was created, and specific attention was paid to exploring and discussing cultural barriers, including religiosity, to focus on elements of conflict and context (Bertka et al. 2019; Barnes et al. 2017; Oliveira et al. 2011) as well as approaches to mitigating and acknowledging conflict without teaching the controversy (Bertka et al. 2019). Finally, the use of 3D models produced using 3D printing technology, viewed as a new tool for scientific discovery when materials are hard to come by or unavailable for a specific setting, was employed to provide specific supports from which teachers could approach human evolution through mediums that strongly mimic the practices and processes of scientists making discoveries in the field (Bayer and Luberda 2016; Drake and Pawlina 2014).
The Human Evolution Summer Teacher Workshop (HESTW)
Evolution is considered a unifying concept in science. It is a required component of the middle school curriculum in the Next Generation Science Standards (NGSS Lead States 2013). Unfortunately, teachers often fail to effectively or accurately teach human evolution due to barriers such as a lack of curricular support, lack of content knowledge, not understanding the nature of science, or a lack of confidence in teaching what they perceive to be a controversial subject (Glaze and Goldston 2015; Glaze et al. 2015). To prepare science teachers to integrate human evolution into their instruction more effectively, the authors of this study designed and implemented a Human Evolution Professional Development Institute hosted at a large research university in the Southeastern United States. The institute was designed to address these specific issues and obstacles as part of the workshop design. Nineteen K-12 educators from the Southeastern United States were chosen to attend the workshop. The first of the three days was dedicated to providing the educators with a thorough background on paleontology and paleoanthropology, both through a discussion of significant discoveries made in both fields and a conversation about the primary research in both fields, past and present. The participants then had the opportunity to ask questions of paleoanthropologists, paleontologists, and anthropologists in person and virtually over Skype. By the end of the first day, participants were provided with the information that would inform the teaching of evolution, specifically human evolution, in their classrooms.
Additionally, during the first and second days of the workshop, participants were introduced to 3D printing as a potential strategy for teaching a concept as morphologically heavy as human evolution. The educators were provided with a better understanding of what free digital resources are available (Morphosource, AfricanFossils, the Human Evolution Teaching Materials Project, and more) and their potential applications. They received guidance on structuring lesson plans (aligned to state-specific science standards) around open-source data. They could experience firsthand the power of using 3D prints in the classroom.
During this time, the educators were given a tour of a maker space, provided with the use of 3D printers to make test prints, and given access to over 45 full-size hominin crania. They participated in a demonstration lesson that utilized 3D-printed hominin mandibles. The participants used all this information to develop their lesson plans, which will be used by those educators in their classrooms in the coming school year but will also be made available to the public. By using open-source 3D files as part of their lesson plans, teachers not only incorporated materials that were very up-to-date (with the most recent of the 3D files being made available by the Max Planck Institute in the Spring of that year), but they were also able to present their students with the tangible evidence of human evolution. Sharing evidence allows students to conclude that the theory of evolution is supported by observing fossilized evidence of adaptation and natural selection, instead of only being asked to believe evolutionary theory as presented in their textbook.
During days two and three of the workshop, the issues inherent to evolution education were addressed in various ways. Direct instruction addressing the public evolution-religion "controversy" in the public arena and a panel on the obstacles in the teaching of evolution allowed the workshop participants to ask many questions and derive real-world solutions to the problems that might arise when they go to implement their newly developed lesson plans. The process focused both on theory and practical implementation. Educators were introduced to the concept of religion and science as two different ways of knowing (Gould 1997; Hermann 2012) that can both be a part of a student's worldview without existing in opposition to one another; viewing evolutionary theory and religion as two equally valid "ways of knowing" provided educators with a way to teach evolution that does not risk dismantling any student's way of seeing the world.
By directly addressing many previously identified obstacles in teaching evolution, we hoped to directly impact the teachers' perceptions of the teaching of evolution and provide educators with the tools needed to successfully implement human evolution into their existing science curricula. With this workshop being the first of several evolution-focused teacher workshops in the United States, we conducted this study to better address educator needs and successfully provide the resources and support required to overcome many obstacles in teaching human evolution.
Methodology
This mixed-methods study employed a quantitative pre-test and post-test as well as qualitative focus-group interviews with a selection of the workshop participants.
Sampling
The sample utilized in this study is a self-selected convenience sample. The HESTW professional development opportunity was marketed via Twitter, Facebook, and the University of Florida Thompson Earth Systems Institute website, and teachers interested in attending the event were asked to submit applications for consideration. From the total group of applicants (81), a cohort of 19 teachers was selected, and all were able to attend. The teachers were chosen based on their application responses, with a goal of having a cohort that was as geographically diverse as possible, with diversity across demographics including gender, race/ethnicity, years of teaching experience, and school type. The teachers whose applications were selected for participation were extended an invitation and provided with informed consent documents and a request to participate in the study.
Participants
Participants were nineteen K-12 science educators from the Southeastern United States, representing Florida, Georgia, Alabama, Louisiana, and Mississippi. These teachers ranged in age from 25 to 57 with an average age of 38.5 years (median = 38, mode = 32), and were primarily white and female (17/19 white, 2/19 black/African American, with no other race/ethnicity represented; 16/19 female, 3/19 male, with no other gender identified). Of the nineteen participants, nine taught solely in secondary education (9th-12th grades), six taught solely in middle grades (4th-8th), and four taught across levels. One of the latter four also taught teacher educators in an accredited program. Eighteen of the teachers were public school teachers, with one teaching in a private Catholic school. The public schools represented included one magnet school, one rural Title 1 school, three developmental research schools, and one primarily minority school. The educators also reported a range of prior experiences with the implementation of human evolution into their curriculum, with some teachers having no previous experience and others having previously implemented some elements of human evolution into their science curricula.
Teachers' Perceptions of Teaching Evolution Scale (TPTES)
The survey instrument consisted of 18 questions adapted from the Teachers' Perceptions of Teaching Evolution Scale (TPTES) by Tekkaya et al. (2012) and one free-response question added to allow participants to share additional thoughts, concerns, or perceptions they felt inclined to share. In this measure, the first 18 questions were Likert-scale questions with response options of 'strongly disagree,' 'disagree,' 'agree,' and 'strongly agree.' Those questions addressed three domains: teachers' perceptions of the necessity of addressing evolution in their classrooms (seven items), teachers' perceptions of the factors that impede addressing evolution in their classrooms (six items), and personal science teaching efficacy beliefs regarding evolution (five items). The Teachers' Perceptions of Teaching Evolution Scale was examined by domain and shown to have a Cronbach's alpha of 0.84, 0.63, and 0.68, aligned to the domains noted above (Tekkaya et al. 2012). The authors note that two of these values are in the low range; however, having fewer than ten questions per domain likely impacts the measure, so they computed a mean inter-item correlation, which resulted in scores of 0.22 for "perceptions of the factors that impede addressing evolution" and 0.31 for "personal science teaching efficacy beliefs regarding evolution" (Tekkaya et al. 2012). Since the acceptable range for that analysis is between 0.2 and 0.4, the authors suggest the measure is reliable (Pallant 2011). This measure can be found in its entirety as deployed in this study in Appendix A.
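As a concrete illustration of the reliability statistics reported above, the following minimal Python sketch computes Cronbach's alpha and the mean inter-item correlation for a single domain's item-response matrix. The example responses are invented for illustration; this does not reproduce the study's data or the software the original authors used.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)         # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def mean_inter_item_corr(items):
    """Average of the off-diagonal entries of the item correlation matrix."""
    r = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    off_diag = r[~np.eye(r.shape[0], dtype=bool)]
    return off_diag.mean()

# Invented responses: 6 respondents x 5 items scored 1-4.
demo = np.array([
    [3, 4, 3, 4, 3],
    [2, 2, 3, 2, 2],
    [4, 4, 4, 3, 4],
    [1, 2, 1, 2, 2],
    [3, 3, 4, 3, 3],
    [2, 3, 2, 2, 3],
])
print(f"alpha = {cronbach_alpha(demo):.2f}, "
      f"mean inter-item r = {mean_inter_item_corr(demo):.2f}")
```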
Human evolution summer teacher workshop focus group interview protocol
The questions for the qualitative exploration were derived from discussions around the body of research on evolution education amongst the research team. The team consists of two education professors with more than three decades of combined experience in teacher education, evolution education, measurement design, technology, and curriculum, and two doctoral students in anthropology with extensive human evolution content knowledge plus several years of teaching experience at the university level. While open-ended interviews lack construct validity, it is posited that interviews are reliable and valid when they are credible, authentic, critical, and uphold integrity (Whittemore et al. 2001). Based on a critical dissection of potential questions, the selected questions were deemed relevant to the issue of teaching evolution, provided space for the participants to voice free-flowing, thoughtful responses, and were aligned to critical areas of exploration in the literature. Both co-investigators utilized the questions in the focus group sessions following the workshop to provide structure, focus on critical issues from the existing body of knowledge, and increase the credibility of the interview process. The protocol for the focus group interviews is provided in Appendix B.
Procedures
A week before the workshop, participants completed a pre-test of the Teachers' Perceptions of Teaching Evolution Scale adapted from Tekkaya et al. (2012). The results of this pre-test were used to place a selection of the participants into two focus groups (each consisting of six participants) to achieve the most diversity in previous knowledge of human evolution, previous experience implementing evolution, and diversity in age and sex. Upon the workshop's conclusion, the nineteen participants were asked to complete the same survey again as a post-test. The participants were then divided into two focus groups and interviewed by different co-investigators using the HESTW interview protocol. Each of the semi-structured interviews lasted approximately one hour.
Data analysis
For analysis of quantitative data, a series of repeated measures ANOVA tests were conducted to explore the changes in teachers' perceptions of teaching evolution between the pre-test and post-test of the Teachers' Perceptions of Teaching Evolution Scale. Data met independence, normality, and sphericity assumptions, indicating reliable analysis.
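Because each factor was measured at only two time points (pre and post), the repeated measures ANOVA here is mathematically equivalent to a paired t-test, with F(1, n−1) = t². The sketch below illustrates this equivalence in Python with invented scores; scipy's `ttest_rel` is an assumed stand-in for whatever statistical software the analysis actually used, and partial eta squared is recovered from the identity η²p = F / (F + df_error).

```python
import numpy as np
from scipy.stats import ttest_rel

# Invented pre/post scale totals for 19 teachers on one TPTES domain.
rng = np.random.default_rng(7)
pre = rng.normal(32.4, 2.0, size=19).round()
post = pre + rng.normal(1.5, 1.2, size=19).round()

t, p = ttest_rel(post, pre)
n = len(pre)
F = t**2                             # two-level RM-ANOVA: F(1, n-1) = t^2
partial_eta_sq = F / (F + (n - 1))   # SS_effect / (SS_effect + SS_error)

print(f"F(1,{n-1}) = {F:.2f}, p = {p:.4f}, partial eta^2 = {partial_eta_sq:.2f}")
```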
Qualitative coding techniques were then used to analyze HESTW participants' personal experiences with human evolution before and after participating in the workshop. Before coding, twelve HESTW participants were split into two groups of six educators based on their diverse experience with human evolution in the classroom, K-12 teaching grade level, and school type (i.e., private, public, Title 1) and systematically interviewed. All interview responses were transcribed verbatim from audio recordings and made accessible to researchers, with the names of HESTW interviewees replaced to establish anonymity and reduce potential biases. Data were analyzed in three rounds of coding as an iterative process to reveal strong themes amongst the responses. All transcribed data were stored and examined in the online qualitative data analysis and scientific research software ATLAS.ti Cloud.
To qualitatively analyze these results, two researchers coded both sets of response data independently to aggregate a set of codes using both descriptive and in vivo coding (Saldaña 2009). An initial coding round created 782 code tags with 374 associated comments added to support the generation of distinguishable codes and help reduce ambiguity. During a second coding round, the saturation of coded data was scrutinized, and emerging commonalities in both sets of coded data were identified. Detailed notes were recorded in the memo manager of ATLAS.ti Cloud by both researchers for comparison of updated codes, categories, and emergent themes. From that point, all researchers met to discuss their independently coded data and reveal thematic trends by reconciling, verifying, and distilling codes until a consensus was reached.
Quantitative study
A repeated measures ANOVA with Measure 1, "teachers' perceptions of the necessity of addressing evolution in their classrooms," as the dependent variable revealed that teachers' perceptions regarding the necessity of addressing evolution in their classrooms significantly improved from pre-test to post-test (F(1,18) = 9.94, p < .006, partial η² = 0.36). Specifically, these perceptions improved from Mpre = 32.37 (SDpre = 1.98) to Mpost = 33.84 (SDpost = 1.68). The partial eta squared value indicates a large effect size.
A repeated measures ANOVA with Measure 2, "teachers' perceptions of the factors that impede addressing evolution in their classrooms," as the dependent variable demonstrated that these perceptions did not significantly change from pre-test to post-test, although the differences approached significance (F(1,18) = 3.91, p = .06, partial η² = 0.18). Specifically, these perceptions changed from Mpre = 24.84 (SDpre = 3.23) to Mpost = 26.26 (SDpost = 2.13).
A repeated measures ANOVA with Measure 3, "personal science teaching efficacy beliefs regarding evolution," as the dependent variable revealed that teachers' personal science teaching efficacy beliefs regarding evolution significantly improved from pre-test to post-test (F(1,18) = 16.01, p < .001, partial η² = 0.47). Specifically, these beliefs improved from Mpre = 18.21 (SDpre = 2.90) to Mpost = 20.68 (SDpost = 1.49). The partial eta squared value indicates a large effect size.
Qualitative study
When analyzing the focus group interviews, the teacher responses could be broken down into three main categories: student-centered responses, teacher-centered responses, and content-centered responses. These are the three areas of primary concern when considering the implementation of human evolution education in the science classroom.
The teachers in our workshop had responses that focused primarily on what previous knowledge students would come to class with and a desire for students to leave their classes being scientifically literate. Student-centered responses included considerations about student interactions in the classroom, specific concerns about what background knowledge they would bring into conversations about evolution, and the best ways to engage with them on potentially polarizing topics. The teachers acknowledged that their students might come in with varying amounts of background knowledge on the topic of evolution (or more specifically, human evolution), both because it is a topic that is often brought into the realm of popular culture and also because there are no consistent standards addressing evolution in schools across the United States. Regardless of what background information the students came in with, one unifying concept echoed by many teachers was a desire for their students to leave school as scientifically literate members of society. One teacher remarked: "Our ultimate goal is to have them be citizens that are educated enough to vote one way or another on certain items, and we want them to be able to have that literacy to walk in and make the conscious choice that is educated and not just what they see on TV [or] that is thrown at them by way of the media. We want them to have their thought process. Be literate enough to determine what that might be."
Another concept that several teachers echoed was the desire to have the classroom be a 'safe space,' especially when discussing potentially charged topics such as evolution and human origins. This was deemed essential to having an open and honest dialogue and addressing topics that might polarize some communities. One teacher emphasized, "If they want to share, and they want to have a class discussion, then I would always facilitate it, but never shut it down, like, ever. If they are not going to have it in that safe space that I created in my classroom, where are they going to have it, you know?"
The next category of responses was teacher-centered responses. Teacher-centered responses included reflections about the demands associated with a career in education, the relationship between teachers and their administrators, obstacles in teaching human evolution, issues surrounding accessibility, and teacher autonomy in the classroom. The teachers in this professional development workshop acknowledged that evolution needs to be taught consistently in K-12 science classrooms. Although evolution is part of the NGSS standards (e.g., HS-LS4-1 through HS-LS4-5), only 20 states have adopted NGSS standards (NGSS 2013), and none of those states are in the Southeastern United States. Evolution may still be included in the standards utilized by those other states; however, this leads to major inconsistencies in how evolution is incorporated into the science curriculum.
Furthermore, teachers from the focus group also mentioned that administrator support was one major consideration in incorporating human evolution into the existing science curriculum. Teachers reflected that the degree to which they are given autonomy to create and structure their lesson plans, pursue professional development workshops in more niche topics such as human evolution, and incorporate non-traditional learning materials like 3D prints into their lesson plans impacts their ability to teach topics that may not necessarily be covered in depth by their state standards. To many teachers, administrator support is a looming factor in incorporating human evolution lesson plans into existing science curricula.
Many teachers reflected as well on their comfort level in teaching evolution. It was a common trend in the focus group interviews to hear a teacher mention that they had previously felt uncomfortable addressing a subject area like evolution. One teacher remarked, "It is not a topic that we see at professional development workshops… I think that lack of knowledge prevents us from wanting… to get out of your comfort zone, not wanting to have your kids ask questions that you will not know the answer to, so I usually went the other route." Teachers who already taught evolution mentioned that they stuck to classic examples of adaptation and change over time, such as Darwin's finches and salt-and-pepper moths, and purposely avoided incorporating examples from human evolution to avoid the risk of making students or their parents uncomfortable. Additionally, teachers reported feeling that they lacked mastery of the topic of human evolution, and for that reason, they decided to rely on more common examples of evolutionary theory.
It is not just their discomfort with the material that the teachers identified as a significant limitation surrounding the implementation of teaching human evolution in K-12 classrooms. Other significant limitations mentioned in the focus-group interviews included limited materials, outdated resources, limited support from schools and administrators for learning about human evolution, as well as the need for more professional development opportunities that include information on human evolution. One teacher remarked, "Materials are the problem. I did not have the materials." Another major disconnect noted by the teachers was the lack of accessibility between teachers who teach science and scientists actively doing scientific research. Especially in a field like paleoanthropology, where every discovery changes our understanding of our human origin story, teachers want more direct access to the scientists to be able to adapt their curriculum in real time as these changes are made. One teacher noted, "I liked being able to interact with people in the ivory tower, doing real science. The fact that we have been able to talk with you guys, see what you are doing, and consult with your expertise really grounds everything." Additionally, since so much of human origins is reliant on the analysis of fossil materials, teachers in this professional development workshop voiced their desire to have these fossil discoveries made accessible to them, either through open-source 3D files or publications that are not stuck behind a paywall.
The final category represented the content-centered responses. Content-centered responses included frustration at the common misconceptions about our human origin story and a discussion of the many benefits of including human evolution in the science curriculum. Many teachers acknowledged the misconceptions students commonly have surrounding evolution/human evolution. One teacher remarked, "I will always address the misconception that chimps turned into monkeys." Many teachers noted that correcting these misconceptions is integral to K-12 science education. Additionally, many teachers remarked that after getting to experience some lesson plans that addressed human evolution, they found the curricula to be interactive, fun, personal, and inclusive. The fact that human evolution is our collective origin story made many teachers perceive lessons that included this material to be particularly inclusive and unifying in an increasingly divisive time. One teacher remarked, "When things are so polarized, this brings us together as one species." It was agreed, however, that there were many different ways in which this specific content could be presented and that this would allow teachers to approach the topic with variations in the style and manner in which it is addressed. These variations included using various media such as 3D prints, 3D PDFs, podcasts, videos, journal articles, interviews with scientists, and museum visits.
Summary
As demonstrated by both the qualitative and quantitative data, the Human Evolution Summer Teacher Workshop had a significant impact on the teachers in attendance. Of the three target factors included in the pre-test/post-test measure, two measures (Measures 1 and 3) showed significant improvements by the end of the teacher workshop: teachers' perceptions of the necessity of addressing evolution in their classrooms (seven items) and personal science teaching efficacy beliefs regarding evolution (five items). The remaining measure (Measure 2, teachers' perceptions of the factors that impede addressing evolution in their classrooms, six items) approached significance.
A significant improvement in Measure 1 indicates that teachers left the workshop feeling that human evolution was a more important topic to cover in their curriculum than they had previously considered. They were more willing to incorporate materials on evolution in their curricula and to attend evolution-centered professional development after the HESTW PD. They reported stronger beliefs that the inadequacy of students' backgrounds regarding evolution needs to be addressed and that the incorporation of evolution into science/biology classes will increase students' interest in science. They also identified that teaching evolution was worth their time and effort.
There was also a significant improvement in Measure 3: personal science teaching efficacy beliefs regarding evolution. This measure assessed educator confidence in teaching topics pertaining to evolution. By the end of the workshop, teachers felt more confident in their ability to teach evolution and more knowledgeable about various teaching strategies to deal with evolution in science/biology classes. They also felt as though they had the knowledge necessary to effectively teach about human evolution to their students. This came from feeling, by the end of the workshop, that they sufficiently understood what evolution is and had confidence in developing teaching and learning materials about evolution. Interestingly, the variability in the responses decreased and the means significantly improved from the pre-test to the post-test.
Although Measure 2 only approached significance, it still yielded some interesting results. This measure assessed teachers' perceptions of the factors that impede addressing evolution in their classrooms. It asked teachers to consider whether or not students are mature enough to be interested in and understand evolution and whether or not classes dealing with evolution are most likely to be aimed at high-achieving students only. It also asked them to gauge perceived student interest and consider if evolution could confuse students about their own values. Although teachers' views on these matters did not change significantly from the pre-test to the post-test, one interesting pattern did emerge. For this measure, the standard deviations are very high for the pre-test and much lower for the post-test. This indicates that there was much more variability in teachers' responses on the pre-test than on the post-test. Although the teacher responses did not change significantly from the pre-test to the post-test, their responses were more consistent within the entire group after the conclusion of the workshop.
The focus group interviews allowed us to hear more in-depth feedback from the teachers regarding their experience at the workshop. When discussing evolution education, teachers expressed their concerns and opinions in three main categories: student-centered responses, teacher-centered responses, and content-centered responses. In order for a teacher to feel confident incorporating evolution into their curriculum, they need to feel comfortable with their own background knowledge and abilities in teaching the subject material, be able to meet the needs of their students (not just from an educational standpoint but also in terms of their wellbeing, spiritual life, home life, etc.), and have sufficient support in creating their lesson materials. It is clear through their responses that evolution education that does not address all three of these needs will fall short. Providing educators with an evolution curriculum without offering them support (as well as a way to support their students) is ineffective.
One unexpected observation made by many participating educators is their belief that the teaching of human evolution specifically has the capacity to bring students together in an increasingly divided world. The idea that our human origin story is inherently more personal than other anecdotes more traditionally used to teach evolution makes it an attractive element to add to the existing science curriculum. When combined with the storytelling nature of many of the paleoanthropological discovery stories and the occasional opportunity to connect directly with the scientists and researchers responsible for these fantastic finds, the teachers agreed that human evolution can be an extremely exciting and engaging subject area for students of all ages.
Discussion
The findings of this study suggest that there is much that can be done to impact teacher confidence, efficacy, and approaches to teaching evolution. Regarding changes in perceptions from before to after participating in the HESTW program, we found, similar to Rutledge and Mitchell (2002), that engaging more with the concepts positively impacts teacher perceptions of the importance of teaching evolution, suggesting more willingness to include this material in classrooms. Efficacy was also impacted by the experience, demonstrating that professional learning positively affects teachers' perceptions of their ability to accurately and confidently teach evolution (Alters and Nelson 2002; Asghar et al. 2007).
While improvements occurred in teacher thinking regarding their efficacy and the need for evolution to be taught, the factors teachers perceived as impeding their teaching remained relatively consistent and closely mirrored those identified in other evolution education studies. In this study, common themes emerged from the teachers surrounding their expectations of the varying backgrounds, knowledge, and beliefs their students would bring to the classroom (Bertka et al. 2019). Despite concerns about the variability of their students' knowledge and expectations, teacher perceptions in this study aligned closely with those reported by both Bertka et al. (2019) and Barnes et al. (2017): they also perceived that open dialogue and acknowledgment of those variations are key to teaching evolution in general, and human evolution especially (Hermann 2008; Miller-Lane et al. 2006). Teachers in this study expressed feeling external pressures surrounding their teaching of evolution much like those found in other studies. Those perceptions included the importance of being given autonomy by their administrators to make choices about their classrooms (Rutledge and Mitchell 2002), reflection on negative pressures from administration (Glaze and Goldston 2015; Glaze et al. 2015; Pobiner 2016), and a lack of support for teaching evolution in the state standards, as none of the states in the Southeastern United States adopted the NGSS standards (2013).
Content knowledge is certainly important, despite not always having a direct impact on acceptance of evolution (Glaze and Goldston 2019), and plays a critical role in building confidence for teachers to approach evolution in their classrooms (Glaze and Goldston 2015; Glaze et al. 2015; Griffith and Brem 2004). At the same time, research shows that content knowledge alone is not enough (Bertka et al. 2019) and that teachers are in dire need of a range of supports to enable them to teach human evolution in the classroom (Barnes et al. 2017; Glaze and Goldston 2015; Glaze et al. 2015). Teachers in this study focused on critical areas of need that have been expressed in other seminal studies on evolution teaching and learning, including frustration with the breadth of misconceptions about evolution and human evolution (McComas 1997; Gregory 2009) and a desire to have more than just traditional examples to address concepts in the classroom (Glaze and Goldston 2015; Glaze et al. 2015, 2019). Along that line, while many resources are available, teachers felt they had minimal support to actually implement what they do have, or were left to supplement outdated resources provided in their school settings, confirming Chiappetta and Koballa's (2002) concern that textbooks, although seen as an authority in the classroom setting, are not as robust and up-to-date as they need to be to adequately teach topics such as evolution.
Teachers in the HESTW program left with the view that teaching evolution through the lens of human evolution offers a more inclusive and personal perspective for students (Berkman and Plutzer 2012) and that addressing misconceptions about human evolution is critically important (Schilders et al. 2009; Martin-Hansen 2010). Furthermore, they felt connected to the scientists who engaged in the program in such a way that they specifically noted the need for a deeper connection between what is happening in science and the teaching of that science in the classroom. The connection between field science and classroom learning ties directly to the nature of science (NOS), which also circles back to addressing misconceptions and engaging students in scientific thinking and process skills (Glaze and Goldston 2015; Glaze et al. 2015; Bartos and Lederman 2014; Bayer and Luberda 2016; NGSS 2013). Finally, teachers are strongly aware of the importance of professional development in their growth and efficacy as educators (Loucks-Horsley 2003) and benefit strongly from modeling, whether of the processes of science, acknowledging controversy, or engaging with new information (Bertka et al. 2019; Ha et al. 2015; Schrein et al. 2009; Pobiner 2016).
Conclusion
The teaching and learning of evolution continues to be a strong focus of science education research due to the robust nature of evolution as the unifying theory in biology and the perceptions of controversy that persist in the public. Professional learning targeted to meet teachers where they are, while also building confidence, content knowledge, and pedagogy for their unique teaching contexts, is one approach impacting whether and how evolution is taught in K-12 classrooms. Integrating elements of human evolutionary studies engages students in their placement in the tree of life. Additionally, human evolution is a topic about which many have questions that often need to be addressed, due to the absence of a human focus from most evolution-based teaching standards and the cultural considerations that enter from outside the classroom. The Human Evolution Teaching Materials Project and subsequent Human Evolution Summer Teacher Workshop combined various approaches to address content understanding, the nature of science, ways of knowing, and hands-on learning to support and empower teachers to teach evolution in their classrooms with accuracy and confidence.
Researchers must continue exploring ways to approach teacher preparation relative to evolution for several reasons. First, teachers have the autonomy and authority in their classrooms to select, within reason, what and how to teach. Those teachers who are comfortable with their ability to mitigate conflict their students might perceive are more likely to teach evolution. Second, teacher content training is critical to ensure that when evolution is taught, the information is accurate. Many programs do not specifically have courses on evolution even though it is so ingrained in the life sciences; targeting this content area ensures that those connections are made throughout life science education at the K-12 levels. Third, evolution is a topic that defies traditional relationships between knowing and accepting: research exploring correlations between knowledge and acceptance demonstrates that a person can have high knowledge with low acceptance or low knowledge with high acceptance, as well as every interaction in between. As such, teachers must be trained in specific pedagogical approaches that not only represent the nature and practice of science, but that support them-and, by extension, their students-in navigating the social, cultural, and other elements that give rise to confusion and conflict. Finally, integrating human evolution brings the conversation to a more personal level for students by looking at how we fit as a species in the larger picture of biodiversity. By studying human evolution specifically, students can learn that humans are not an exception to the rules of evolution, but are governed by them in the same way as all other animals.
Limitations
There were several limitations of this study, including the small number of teachers involved in the professional development and the inability to represent the teacher population regarding gender, diversity, and background. Additionally, the teachers involved in this professional development workshop each applied to attend. Hence, it was inherently a group of teachers that self-selected to be involved and thus would not be representative of the range of perspectives and levels of acceptance found across classroom teachers around the nation. Had this professional development involved a larger and more representative group of teachers (especially those who did not have a pre-existing desire to incorporate human evolution into their existing science curriculum), the results of the pre/post-survey and focus group interviews may have been vastly different.
Implications for science teacher education & professional learning
Science teacher education has a broad range of topics to address to ensure that pre-service teachers have effective content and pedagogical skills, making it difficult to adequately address subject-specific strategies during pre-certification training. The broad range of topics covered by science teachers also lends itself well to the incorporation of human origins in the curriculum, since the study of human evolution is inherently multidisciplinary. However, our study demonstrates that professional development provides the ability to target teachers who specifically address evolution in their classrooms while creating a community for shared learning. The focus of professional development on specific content areas where teachers need support, and the ability to discuss shared experiences and concerns, mean a greater opportunity to positively impact teaching practice and student outcomes.
Suggestions for further study
History demonstrates that there are no one-size-fits-all approaches to the successful teaching of evolution, due to the nature of the topic and the wide range of divisive angles that arise when the topic is mentioned. We know that more than just sharing evidence is needed to enact conceptual change and to understand where that perception of conflict is present. We also recognize that at times, controversy can be inherent to the scientific process, such as among various naturalistic causal models of gene-culture coevolution. Therefore, ongoing research is needed to develop and assess the impact of approaches that target the needs of teachers as they learn to navigate a diverse array of student belief systems and cultural practices, as well as the historical conflicts that surround evolution in different parts of the United States and around the world. To do so, we must continue to strive to understand the nature of the issue in different places and among different groups of people. We must also do so with unified measures that allow more substantial generalizations and comparisons among and across these groups than what we have been able to do in the past. As we establish those baseline understandings, a wide range of approaches that integrate the needs of these groups must be developed and studied.
The field of evolution education research now hosts an array of data supporting the use of approaches such as culturally responsive strategies, targeting the nature of science, and modeling ways of knowing, encouraging us to keep exploring. We must build on those foundations to determine whether such approaches support teachers across place-based and other boundaries, and whether those changes, once implemented in the classroom, positively impact student outcomes. The authors hope to see the application of this and similar projects to a more diverse representation of teachers, including those outside of the region, those teaching a variety of levels of students (both grade and rigor), those teaching across the many disciplines and subject areas that inform modern human evolutionary sciences, and those who do not immediately have an interest in teaching evolution, much less human evolution. Each of these areas represents gaps where there is still much to learn in order to positively change the teaching and learning of evolution and public perceptions of evolution and science.
3. What are some challenges associated with teaching evolution in K-12?
4. What was the most relevant or interesting aspect of this workshop to you?
5. To what extent do you discuss your students' personal views on evolution with them?
6. Does your school require parental consent to discuss evolution in the science classroom? Are there any formal or informal mechanisms for this?
7. What do you think students will gain, if anything, from the inclusion of human evolution into the K-12 science curriculum?
8. What, if any, supports does your school/district provide to support the implementation of evolution (or more specifically human evolution) in your science curriculum? | 2024-05-29T15:04:19.169Z | 2024-05-27T00:00:00.000 | {
"year": 2024,
"sha1": "bfa906f54272c1e50ae16fad896fa372ab3d73dc",
"oa_license": "CCBY",
"oa_url": "https://evolution-outreach.biomedcentral.com/counter/pdf/10.1186/s12052-024-00197-x",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e2646840a25ad3402de10f51de3e568a70feb375",
"s2fieldsofstudy": [
"Education",
"Biology"
],
"extfieldsofstudy": []
} |
10775193 | pes2o/s2orc | v3-fos-license | Mycotic Aneurysm Caused by Bacteroides fragilis in an Elderly Immunosuppressed Patient.
An 82-year-old Japanese man, who presented with a fever and abdominal pain, was admitted to our hospital. Based on enhanced computed tomography images, the probable diagnosis was abdominal aortic mycotic aneurysm. Eight sets of blood cultures obtained from the patient were negative. Despite treatment with vancomycin and ceftriaxone, the aneurysm progressively enlarged. He underwent open debridement surgery and in situ replacement because of an aneurysmal rupture. Bacteroides fragilis was isolated from the tissue culture of the aortic wall. Metronidazole was administered and later discontinued without any infection relapse. In similar cases, clinicians should consider rare pathogens as possible causes of mycotic aneurysms.
Introduction
Mycotic aneurysm can be a fatal infectious disease if not appropriately managed. Previous reports indicate that most of the causative pathogens are Gram-positive cocci, such as Staphylococcus aureus or Streptococcus pneumoniae, and aerobic Gram-negative bacteria, such as Salmonella spp. (1).
An 82-year-old man who had been diagnosed with rheumatoid arthritis and treated long-term with methotrexate (during the past 2 years) developed a mycotic aneurysm caused by an anaerobic bacterial infection with Bacteroides (B.) fragilis. As this finding is extremely rare among mycotic aneurysms, we herein describe the assessment, clinical course, and management of this case.
Case Report
An 82-year-old Japanese man was admitted to our hospital because of a 3-day history of chills and abdominal and back pain. He was diagnosed with rheumatoid arthritis, for which he had been prescribed methotrexate 4 mg/week 2 years earlier. Additionally, he had a history of hypertension and angina pectoris. He had no history of tuberculosis, syphilis, or previous abdominal surgery. He denied having any digestive symptoms such as nausea, diarrhea, or melena. There was no evidence that he had a colonic diverticulum.
On admission, he was alert; his vital signs were a blood pressure of 144/84 mmHg, heart rate of 115 beats/min, body temperature of 38.3°C, respiration rate of 16 breaths/min, and SpO2 of 96% on ambient air. His physical examination revealed pain and tenderness in his right lower abdominal quadrant, but no pulsatile masses.
Laboratory findings showed an elevated white blood cell count of 16,400/μL (neutrophils, 92%; lymphocytes, 2.1%) and C-reactive protein of 22.39 mg/dL. His serological tests for syphilis were negative. Enhanced computed tomography displayed an abdominal aortic aneurysm (Figure).
Figure. Abdominal aortic aneurysm on enhanced computed tomography taken on the admission day.
Based on a clinical suspicion of a mycotic aneurysm, three sets of blood cultures were obtained on admission, followed by a series of blood cultures taken on 3 consecutive days. Ceftriaxone 2 g intravenously (IV) every 12 h and vancomycin 1 g IV every 24 h were immediately administered for coverage of Gram-positive cocci and Gram-negative bacilli, such as Salmonella spp. We consulted a cardiovascular surgeon, who recommended medical therapy as the primary therapy, with continuous monitoring. The six sets of blood cultures obtained from the patient were negative. Vancomycin was then discontinued because it was less likely that the infection was caused by Gram-positive bacteria.
On the seventh hospitalization day, he developed a high-grade (40°C) fever. Abdominal ultrasound revealed acalculous cholecystitis. At that time, two additional sets of blood cultures were obtained, and ceftriaxone was replaced with cefepime 2 g IV every 12 h.
On the ninth hospitalization day, the abdominal aneurysm became enlarged and finally ruptured; therefore, urgent open debridement and in situ replacement were performed. There was no evidence of intestinal or colonic comorbidity, nor any abnormal laparoscopic findings. No serious complications occurred during the surgery. The Gram stain of the necrotic tissue of the aortic wall revealed Gram-negative bacteria suspected to be an anaerobe; thus, piperacillin/tazobactam 4.5 g IV every 6 h and metronidazole 500 mg PO (per os) three times daily were administered according to the general antibiogram findings. On the 11th hospitalization day, piperacillin/tazobactam was replaced by sulbactam/ampicillin 1.5 g IV every 6 h because B. fragilis was isolated from the tissue culture. The bacterial profile indicated that the strain was sensitive to sulbactam/ampicillin and metronidazole, but resistant to clindamycin. While sulbactam/ampicillin was discontinued on the 22nd hospitalization day because of a skin eruption, metronidazole was maintained as suppressive therapy. The patient was discharged after 32 days of hospitalization without any complications. Metronidazole was discontinued after 6 months of treatment because the patient complained of dizziness. The dizziness disappeared, and the patient remained free of infection relapse for 3 years after the cessation of metronidazole. He ultimately died of an unrelated case of pneumonia.
Discussion
We experienced a case of mycotic aneurysm caused by an obligate anaerobic bacterium, B. fragilis, which was successfully treated with surgical intervention and long-term antimicrobial treatment.
Physicians should be alert when using empiric therapy directed against common pathogens. We selected vancomycin and ceftriaxone to provide coverage for the most common pathogens involved in mycotic aneurysms, such as staphylococci (60%) and Salmonella (20-25%) (1). Although the eight sets of blood cultures obtained during hospitalization were negative, B. fragilis was isolated from the tissue culture of the necrotic aortic wall obtained during surgery. This causative pathogen was not covered by the empirical therapy initially administered; however, it is practically impossible to provide antimicrobial coverage for all the microorganisms that could potentially cause disease in a given patient. Nevertheless, if the clinical course is not progressing as expected after the beginning of antimicrobial therapy, physicians should consider that the causative pathogen may be a rare or unusual one. While antimicrobial coverage for anaerobes (e.g., carbapenems) was not initially included in the empirical therapy, no data are available regarding the effect of medical therapy alone (2). Therefore, whether the patient's clinical course was altered by the management provided remains equivocal.
Additionally, the duration of antimicrobial treatment was debated. Some experts mention that adverse drug reactions to metronidazole therapy are rare, but include central nervous system toxicity (3). A review article reported that metronidazole treatment over 6 months led to encephalopathy; however, this adverse event resolved after the discontinuation of metronidazole (4). In the present case, we decided to continue the treatment for as long as possible until the onset of any adverse events. It is extremely rare for an anaerobic microorganism to cause a mycotic aneurysm, and it is also difficult for such microorganisms to be cultured successfully from blood samples. To the best of our knowledge, only 22 cases of mycotic aneurysm caused by the B. fragilis group have been previously reported in the English and Japanese literature (2, 5-16). Of these, only nine reports confirmed bacteremia secondary to B. fragilis infection (Table). Likewise, in our case, we failed to isolate B. fragilis from a number of blood cultures. In addition, six patients presented with conditions associated with an immunosuppressive state, such as diabetes mellitus, liver cirrhosis, active malignancy, or the use of immunosuppressive agents, similar to our patient. We therefore hypothesized that this patient's unusual infection was probably related to his immunosuppressive status or some colonic comorbidity (17). However, there was no available evidence to confirm such a hypothesis. Furthermore, as we generally recommend to our patients, we advised the patient to undergo colonoscopy, which he refused. If there had been a colonic malignancy, it is likely that he would have died from this disease. Thus, we might be able to rule out the possibility of colonic comorbidity, particularly colon cancer.
Research reported up to the late 1980s referred to a relationship between mycotic aneurysms and endocarditis or a previously known atherosclerotic aneurysm; however, this tendency has become less common in recent studies. Further research is needed to elucidate this disease.
In similar cases, when the results of multiple blood cultures are negative but the patient has progressive clinical manifestations of infection, we recommend that clinicians consider rare pathogens, such as obligate anaerobes, as causative agents.
The authors state that they have no Conflict of Interest (COI). | 2019-03-31T13:44:03.074Z | 2013-06-01T00:00:00.000 | {
"year": 2016,
"sha1": "87b24689681fbebb979100f0c9b9e1ef1b1ae661",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3620ddb7d982749fcbb9c5610a5373ad2add025c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
80710823 | pes2o/s2orc | v3-fos-license | Effects of Municipal Waste Disposal Methods on Community Health in Ibadan - Nigeria
The generation of waste and its disposal, collection, transport and processing are important for healthy ecosystems and the health of people. The negative health effects of waste management are the subject of a large literature. Two main health outcomes have been found to be statistically associated with waste exposure: cancer and congenital malformations. This research study was designed to examine the relationship of environmental characteristics with population health, and the impacts of waste disposal methods on the public health of Agbowo and Bodija community residents. Primary data were collected through a semi-structured questionnaire that was used to gather information on environmental characteristics, municipal waste disposal methods and their effects on the population health of Agbowo and Bodija community residents. 421 households in Agbowo (210) and Bodija (211) were randomly selected for this study. Data generated from our field survey were analyzed using the t-test and Pearson Product Moment Correlation (PPMC) at the 0.05 alpha level. Results show that there is a significant difference between the two study areas in terms of environmental characteristics. A significant difference was also observed between the waste disposal methods of the Agbowo and Bodija communities. Using PPMC, our results demonstrate a relationship between healthy ecosystems and the health of community residents in Agbowo and Bodija. In Agbowo, 158 (75.24%), 163 (77.62%), 168 (80%), 109 (51.9%), 94 (44.76%) and 129 (61.43%) respondents reported suffering from watery stools, typhoid, skin infections, vomiting, sore throat and abdominal pains, respectively, in the past one year. In Bodija, by comparison, the numbers of respondents who suffered from watery stools, typhoid, skin infections, vomiting, sore throat and abdominal pains in the past one year stood at 132 (62.56%), 124 (58.77%), 54 (25.59%), 73 (34.6%), 69 (32.7%) and 97 (45.97%), respectively. Having established that improper waste generation and management can have adverse effects on human health, the study concludes by recommending that the study findings serve as a basis for improving hospital facilities for the provision of patient safety, and that government at all levels adopt an integrated waste management system with an appropriate policy agenda, public programmes and strategic action plans that will enhance environmental governance and end indiscriminate waste disposal.
1 Introduction
Abul (2010) classified solid waste into different types depending on their source: household waste is generally classified as municipal waste, industrial waste as hazardous waste, and biomedical or hospital waste as infectious waste. The term "solid waste" means any garbage, refuse, or sludge from a waste treatment plant, water supply treatment plant, or air pollution control facility, and other discarded material, including solid, liquid, semisolid, or contained gaseous material resulting from industrial, commercial, mining, and agricultural operations (Salam Abul 2010; US Law-Solid Waste Act 2 1999). This waste is disposed of at the very outskirts of cities. Waste generated from households, shops, supermarkets, and open market places is therefore termed municipal waste. This waste is either properly disposed of in landfills and incinerators or left in open dumpsites. Salam (2010) further adds that solid waste disposal sites are found on the outskirts of urban areas, turning into chief sources of contamination due to the incubation and proliferation of flies, mosquitoes, and rodents that, in turn, are disease transmitters affecting the population's health, which has its organic defenses in a formative and creative state. This situation produces gastrointestinal, dermatological, respiratory, genetic, and several other kinds of infectious diseases. Consequently, dumping sites have a very high economic and social cost in the public health services, which has not yet been estimated by governments, industries, and families (Salam, 2010).
However, increasing population levels, a booming economy, rapid urbanization and the rise in community living standards have greatly accelerated the municipal solid waste generation rate in developing countries (Debnath et al. 2015; Minghua et al. 2009) and in urban cities such as Ibadan and Lagos. Rapid industrialization and population explosion in Lagos have led to the large-scale migration of people from villages, less developed towns and urban areas across Nigeria.
Population growth and the presence of a series of commercial activities in Ibadan have eventually led to the influx of thousands of people into the city. This is evident in the high population density and overcrowding of houses in the Agbowo and Bodija community areas in Ibadan. The population of Ibadan rose from an estimated 100,000 in 1851 to 175,000 in 1911. Between 1911 and 1921 it increased at about 3.1% per annum to 238,075. The rate of increase between 1921 and 1931 was 0.5% per annum, while it was only 0.8% per annum for the period between 1931 and 1952, when the population rose from 387,133 to 459,196. In 1952, the less city was counted at 286,252 (Tomori 2006). From then on, the population of the Ibadan metropolitan area increased at a growth rate of 3.95% per annum between 1952 and 1963, when the population rose to 1,258,625. The population rose to 1,829,300 in 1999 at a growth rate of 1.65% from 1963, and stood at 1,338,659 in 2006 at a growth rate of 2.35%. However, population growth is gradually shifting to the less city, with a growth rate of 4.7% per annum between 1991 and 2006 according to the provisional census figure released by the National Population Commission (2006) (Tomori 2006).
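The per-annum rates quoted above are compound (geometric) growth rates. As a hedged illustration of how such a rate is derived from two census counts, the sketch below applies the standard compound-annual-growth formula; the input figures are hypothetical and are not drawn from the census counts above.

```python
# Compound (geometric) per-annum growth rate between two population counts.
def annual_growth_rate(p1: float, p2: float, years: float) -> float:
    """r such that p1 * (1 + r) ** years == p2."""
    return (p2 / p1) ** (1 / years) - 1

# Hypothetical example: 1,000,000 growing to 1,258,625 over 11 years.
print(f"{annual_growth_rate(1_000_000, 1_258_625, 11):.2%} per annum")  # ~2.11%
```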
In the face of these increasing population levels and rapid urbanization, the major urban environmental concerns-municipal waste management, sanitation and the associated adverse health impacts-can only intensify unless urgent, effective steps are taken to improve sanitation and solid waste management. In the words of Debnath et al. (2015) and Taylor (2003), landfilling is the simplest and normally cheapest method for disposing of waste. However, this claim may not hold for waste management in developed countries; for instance, landfilling is a very expensive method of managing waste in industrialised countries such as China, the Netherlands and Germany (Lam and Chaudry 2005). Improper management of municipal waste has become one of the problems facing developing urban cities across the world. Little attention is given to waste management practices, as it is common to see heaps of waste littering the streets of major cities, dumped indiscriminately in drainages, vacant plots and open spaces, especially in developing cities and in our study areas in particular. This has contributed not only to the spread of communicable diseases in the affected areas; it also has an effect on flooding and other environmental problems (Abd'Razack et al. 2013; Babalola et al. 2010; Wilson et al. 2009). A typical solid waste management system in developing countries displays an array of problems, among which are low collection coverage and irregular collection services (Abd'Razack et al. 2013; Nwaka 2005; Omran et al. 2007).
Current global MSW generation levels are approximately 1.3 billion tonnes per year, and are expected to increase to approximately 2.2 billion tonnes per year by 2025. According to Debnath et al. (2015), this represents a significant increase in per capita waste generation rates, from 1.2 to 1.42 kg per person per day over the coming decade. However, global averages are broad estimates only, as rates vary considerably by region, country, city, and even within cities (Debnath et al. 2015; Hoornweg 2005). Nigeria, with a population exceeding 170 million, is one of the largest producers of solid waste in Africa. Despite a host of policies and regulations, solid waste management in the country is assuming alarming proportions with each passing day. Nigeria generates more than 32 million tons of solid waste annually, out of which only 20-30% is collected. Reckless disposal of MSW has led to the blockage of sewers and drainage networks, and the choking of water bodies. Most of the waste is generated by households and, in some cases, by local industries, artisans and traders, and litters the immediate surroundings. Improper collection and disposal of municipal wastes is leading to an environmental catastrophe, as the country currently lacks adequate budgetary provisions for the implementation of integrated waste management programmes across the states (Bakare 2016). Sada (1984) observed that in 1980, on average, a balance of 100 metric tons of solid waste piled up daily in Benin City: while about 350 metric tons of solid waste were generated daily, the maximum rate of evacuation achievable was only 250 metric tons daily. Uchegbu (1988) remarked that big cities like Port Harcourt, Lagos, Kano, etc. in Nigeria produced, on average, 46 kg of solid waste per person per day. Amuda et al. (2014) state that, as of 2010, the estimated MSW generated in Lagos, Port Harcourt, Ibadan and Warri was 1.23 × 10^5; 762,143; 996,102 and 174,372 t/year respectively. By contrast, Bakare (2016) states that Lagos, with a population estimate of 21 million and a per capita waste generation of 0.5 kg per day, generates more than 10,000 tons of urban waste every day. For Ibadan, with a projected population of 3,154,487, the quantity of waste generated in the Ibadan Metropolis in 2012 was estimated at 634,998.43 t/year and 0.55 kg per capita per day, including provision for street sweeping (Olowe 2018).
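The headline per-capita figures above can be cross-checked with simple arithmetic, as in the sketch below. The small gap between the computed and published Ibadan tonnage is consistent with the source's note that the published figure also includes provision for street sweeping.

```python
# Cross-check of the per-capita waste figures quoted above.
lagos_pop, lagos_rate = 21_000_000, 0.5     # persons, kg per person per day
ibadan_pop, ibadan_rate = 3_154_487, 0.55

print(f"Lagos: {lagos_pop * lagos_rate / 1000:,.0f} t/day")            # 10,500 t/day, matching '>10,000'
print(f"Ibadan: {ibadan_pop * ibadan_rate * 365 / 1000:,.0f} t/year")  # ~633,263 t/year vs 634,998.43 quoted
```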
The practice of indiscriminate and improper dumping of Municipal Solid Waste (MSW) is on the increase in the Agbowo and Bodija community areas in particular and in Nigeria in general, and it is compounded by a cycle of poverty, population explosion, decreasing standards of living, poor governance and a low level of environmental awareness; the end product of it all is the dumping of this waste in any available open space (Rachel et al. 2009). Abd'Razack et al. (2013) observed that poor or improper land use planning in parts of many organically developed cities has resulted in the creation of informal settlements with narrow streets, which makes it difficult for waste collection trucks to access such areas (Nabegu 2010; Swapan 2008). Waste is dumped into drainages, blocking the free flow of runoff water; this practice gives rise to flooding that adversely affects communities. Some people dump their waste by the roadside, thereby reducing the width of the road and the esthetics of cities, especially in Nigeria. This is evident as one walks across the nooks and crannies of Nigeria: heaps of refuse litter the entire landscape, including roadsides, parks, gardens, commercial centres and other land uses (Danbuzu 2011; Imam et al. 2007).
At one of its summit in 2000 (Uwaegbulam 2004) revealed that The World Health Organization-(WHO 2004) and United Nations International Children Education Fund-(UNICEF 2004) joint report in August 2004 that: Babout 2.4 billion people will likely face the risk of needless disease and death by the target of 2015 because of bad sanitation^. The report also noted that bad sanitationdecaying or non-existent sewage system and toilets-fuels the spread of diseases like cholera and basic illness like diarrhea, which kills a child every 21 s. The hardest hit by bad sanitation is rural poor and residents of slum areas in fast-growing cities, mostly in Africa and Asia (Napoleon et al. 2011).
The importance of waste collection, transfer and disposal cannot be overemphasized. Apart from the issue of esthetics, uncollected wastes constitute a health risk, which can be a serious consideration in low income residential areas. Leachate from uncollected and decomposed garbage waste can contaminate groundwater and this could have enormous health implications in low-income communities where the use of well-water for drinking is common (UNCHS 1988). Environment health conditions are hampered through the pollution of ground and surface water by leachates from dump sites. Air pollution is often caused by open burning at dumps leading to foul odors and wind-blown litters. In dump sites, Methane is an important greenhouse gas, which is a by-product of the anaerobic decomposition of organic wastes (Amuda et al. 2014). Numerous research studies has shown that environmental governance is at the lower ebb in Nigeria. This definitely has consequential implications and impacts on the public health of the people. Olukanni and Akinyinka (2012) (2013), Kaoje et al. (2015) all established various perspectives on the poor environmental governance, irritable environmental behaviour and the challenges confronting human health due to improper management of MSW.
In addition, Olukanni and Akinyinka 2012 join other researchers to conclude that there are potential risks to the environment and human health from improper handling of solid wastes. Direct health risks concern mainly the workers in this field, who need to be protected, as far as possible, from contact with wastes. This further reveals other epidemiological studies that shows that a high percentage of workers who handle refuse, and of individuals who live near or on disposal sites, are infected with gastrointestinal parasites, worms and related organisms. Disease transmission by houseflies is greatest where inadequate refuse storage, collection and disposal is accompanied by inadequate sanitation (Olukanni and Akinyinka, 2012). The mountainous heaps of solid wastes that deface Nigerian cities and the continuous discharges of industrial contaminants into streams and rivers without treatment motivated the federal government of Nigeria to promulgate Decree 58 for the establishment of a Federal Environmental Protection Agency (FEPA) in 1988. Nevertheless, research studies since 1988 has generally revealed the bankruptcy of the FEPA establishment and how largely Nigeria communities have suffered from poor environmental governance and the subsequent public health challenges which has constituted great threats to the population health.
This paper therefore emanates from the need to address improper disposal and management of MSW in Bodija and Agbowo communities in particular. The major thrust of this study is to investigate the effects of MSW disposal and management on the population health of Agbowo and Bodija communities' residents. Implications of the waste management methods and environmental characteristics of Agbowo and Bodija communities on health conditions of the residents were investigated.
Study Area
The city of Ibadan, with an heterogeneous population, is located approximately on longitude 3 0 5 1 East of the Greenwich Meridian and latitude 7 0 23 1 North of the Equator at a distance some 145 km worth east of Lagos. Ibadan is directly connected to many towns in Nigeria, as its rural hinterland by a system of roads, railways and air routes. The physical setting of the city consists of ridges of hills that run approximately in northwestsoutheast direction. There are 11 Local government areas across Ibadan-part of which include Ibadan North local government. The two most popular towns in Ibadan North local government include Agbowo and Bodija towns. Ibadan North Local government Area has a household population of 76,740 (Tomori 2006).
Agbowo is located at the heart of the historic city, Ibadan, while Bodija faces it in the Northward direction and Ojo to its West. It is directly facing the University of Ibadan at the South, Nigerian Institute of Social and Economic Research (NISER), to its west and connects to the Trans Amusement Park, along Mokola road to the Eastern direction. Agbowo and Bodija communities are densely populated with students of the University of Ibadan and The Polytechnic, Ibadan. Agbowo has many features of an urban slum: overcrowding, unplanned housing, and lack of basic social amenities such as piped water (Akinremi and Samuel 2014).
Municipal waste management methods was poor as we observed open drainages and sewers poorly constructed were available and totally absent in most households. Residents of Agbowo and Bodija were fond of indiscriminate disposal of both liquid and solid municipal wastes. Evidence to this was dominance of open dumpsites, mountainous heaps of solid waste and refuse packs dumped openly along the streets. Inadequate toilet facilities were observed among the households in Agbowo. This makes the residents to openly dispose solid waste, including feces, into drainages. More dominant, too, is the presence of open dumpsites at backyards or closer to houses.
Research Study Design
In this investigative study, a comparative cross-sectional design was employed which involved the use of semi-structured interviewer-questionnaire to randomly selected household respondents. Socio-demographic characteristics, environmental characteristics and waste disposal and management methods were compared between the two study areas. Community participation in MSW management and its effects on population health were compared between the two areas.
Study Population
Our study population included household residents of Agbowo and Bodija who are matured enough to participate in waste disposal and its management. A representative in each household was randomly chosen to participate in the survey. The total sampling techniques i.e. maximized convenience sample was adopted for this study. There were 421 household respondents from Agbowo (n = 210) and Bodija (n = 211) that participated in this survey.
Survey and Sampling Method
Multi-stage sampling technique was used to collect the data involving a two -stage design procedure.
Stage1: The division of the study areas into four stratum to represent primary selection units which denote the strata from where the data were collected. Stage 2: Simple random selection of household respondents in each of the locations in each stratum. Further details of sampling procedures are summarized in Table 1. This adopted sampling method help us to collect data that will truly represent the household population estimates at both study areas.
A semi-structured interviewer administered questionnaire was used to obtain information on environmental characteristics, waste management practice methods and environmentally related health symptoms and conditions among our
Statistical Analytical Techniques
Primary data collected from our field survey were entered, managed and analyzed using IBM SPSS Statistical package version 21. Data were analyzed using descriptive statistics, One Sample T test-to find any significant difference in environmental characteristics between Agbowo and Bodija, significance difference in waste disposal methods between Agbowo and Bodija; Correlation analysis was done to find out any relationship between environmental characteristics and health conditions of our respondents in both study areas.
Environmental Characteristics and Management Practices
In our survey, the data collected reveal that the most dependable source of potable water in our study areas is sachet water (Agbowo: 83 (39.3%); Bodija: 108 (51.2%)). This is followed by boreholes: 67 (31.8%) in Agbowo and 59 (28%) in Bodija. Very few of our respondents have access to tap water in Bodija (9, or 4.3%); in Agbowo there was no evidence of tap water at all. The Agbowo community, given its overcrowded houses and unplanned housing settlement, is characterized by boreholes and well water: 48 (22.7%) of Agbowo residents said that they depend on well water for domestic activities, while the figure was 27 (12.8%) in Bodija. In addition, 12 (5.7%) respondents in Agbowo stated that their major water source is rainwater, against 8 (3.8%) in Bodija. The dominant type of toilet facility is the pit latrine, used by 114 (54%) in Agbowo and 112 (53.1%) in Bodija. Respondents also said that they make use of the bush to dispose of their feces (Agbowo: 26 (12.3%); Bodija: 34 (16.1%)). 39 (18.5%) of Agbowo respondents reported that they make use of a stream or lake to dispose of waste materials; the number is 17 (8.1%) in Bodija. At both study areas, MSW disposal and management practices vary widely from one household to the other, especially at Agbowo, where MSW and environmental management is poorly organized. Our survey at both study areas clearly reveals the major environmental characteristics of both communities and the level of environmental sanitation. The data show that in Bodija 83 (39%) households responded that they experience flies most, 54 (25.6%) mosquitoes, 24 (11.4%) cockroaches, 31 (14.7%) rats and 19 (9%) bedbugs, while in Agbowo 110 (52.1%) households reported experiencing flies most, 28 (13.3%) mosquitoes, 17 (8.1%) cockroaches, 34 (16.1%) rats and 21 (10%) bedbugs. When asked how frequently waste collectors patrol the community to collect waste, 61 (28.9%) households in Agbowo and 47 (22.3%) in Bodija responded once weekly; 27 (12.8%) in Agbowo and 38 (18%) in Bodija responded more than once weekly; 83 (39.3%) in Agbowo and 23 (10.9%) in Bodija responded once a month; and 14 (6.6%) in Agbowo and 51 (24.2%) in Bodija responded more than once a month. In Bodija, waste collectors never visited 54 (24.6%) households in the past one year, while the number stood at 25 (11.8%) households in Agbowo. However, 49 (23.2%) households in Bodija reported that they are satisfied with the level of sanitation in their community, while in Agbowo only 43 (20.4%) households consented that they are satisfied; 110 (52.1%) households in Agbowo and 105 (49.8%) in Bodija reported being unsatisfied. In addition, 57 (27%) households in each of Agbowo and Bodija remained undecided about the level of environmental sanitation in their communities. Figure 1 below shows the rate of infestation of selected disease-carrying organisms found in Agbowo and Bodija households.
From the above graph, the rate of infestation of rats is medium in Bodija and heavy in Agbowo; mosquitoes appear to infest both the Agbowo and Bodija communities heavily; cockroach infestation appears relatively high in both communities, while bedbug infestation is medium in both areas. Flies infest the Bodija community heavily compared to their rate in Agbowo.
Fig. 1 Rate of infestation of disease-carrying organisms at our study areas. Source: Field Survey (July, 2017)
When asked how often environmental health officers visit each household for environmental inspection, the data collected indicated that 23 (10.9%) households in Agbowo and 39 (18.5%) in Bodija responded that environmental health officers visit their houses once in two months. In Agbowo 58 (27.5%) households responded that they visit once in three months, against 33 (15.6%) in Bodija; 17 (8.1%) respondents in Agbowo said they visit once a month, against 9 (4.3%) in Bodija; and 25 (11.8%) households in Agbowo reported visits more than once a month, against 17 (8.1%) in Bodija. However, 87 (41.2%) households in Agbowo and 113 (53.6%) in Bodija reported that environmental health officers never visited them in the past one year. 185 (87.7%) households in Bodija and 204 (96.7%) in Agbowo consented that they are ready to pay for waste collection. Very few of the respondents, 6 (2.8%) in Agbowo and 26 (12.3%) in Bodija, are not willing to pay for waste collection.
Any Significant Difference in the Environmental Characteristics between Agbowo and Bodija?
To determine whether there is any significant difference in the environmental characteristics between the two study areas, the data collected from our household respondents were further analyzed using the Student t-test tool of the IBM SPSS statistical package version 21. The results of our t-test analysis are presented in Tables 3 and 4 below. The result shown in Table 3 reveals that there is a significant difference between the environmental characteristics of Agbowo and Bodija, with significance taken at P < 0.05.
Bodija has the higher mean score (12.43), while Agbowo has the lower (12.06).
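As a hedged illustration of the comparison reported in Table 3, the sketch below runs an independent-samples t-test on simulated per-household scores whose group means match Table 3. The raw scores are not reproduced in the paper (the analysis was run in SPSS), so the arrays here are placeholders only.

```python
# Illustrative two-group comparison of environmental-characteristics scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
agbowo = rng.normal(loc=12.06, scale=1.5, size=210)  # simulated; mean matches Table 3
bodija = rng.normal(loc=12.43, scale=1.5, size=211)  # simulated; mean matches Table 3

t_stat, p_value = stats.ttest_ind(agbowo, bodija)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}; significant at alpha = 0.05: {p_value < 0.05}")
```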
Any Significant Difference in the Waste Disposal Methods and Management between Agbowo and Bodija?
The result of the analysis shown in Table 4 reveals that there is a significant difference in waste disposal methods and management between Agbowo and Bodija, with significance taken at P < 0.05. Bodija has the higher mean score (23.74), while Agbowo has the lower (22.65).
Health Problems and Conditions Associated with the Environmental Characteristics of the Study Areas
Results from our field survey indicate that the environmental characteristics of Agbowo and Bodija are significantly different and clearly point to possible threats to population health. Public health at both study locations, when investigated, reveals traces of some environmentally related illnesses among the residents. 57 (27%) households in Agbowo practice open burning of waste, as do 59 (28%) respondents in Bodija. In Bodija the most common means of waste management is open dumping: the number of households that practice open dumping is 87 (41.2%) in Bodija and 47 (22.3%) in Agbowo. These waste management practices clearly give disease-carrying organisms room to flourish in households. This is evident in the responses of sampled households on the disease-carrying vectors they experience most. The rates of infestation of the selected disease-carrying organisms (rats, mosquitoes, cockroaches, flies and bedbugs) strongly support the status of health conditions reported by our household respondents. For instance, rats are present in most houses in Agbowo compared to Bodija (see Fig. 1), mosquitoes are heavily present at both locations, while bedbugs and cockroaches are more common in Bodija houses than in Agbowo houses, an indication that most households in Bodija appear dirtier and less well kept than houses in Agbowo. Bodija also has the largest population of flies in its surroundings; one of the confounding factors for this is the open market situated close to our study areas. The state of solid waste management in this open market in Bodija is very poor, as is evident in the heaps of food waste that dot the nooks and crannies of the market and the mountainous waste packed in sacks and polythene bags beside the roadsides. Nevertheless, Agbowo houses have the largest populations of rats and mosquitoes, one of the factors behind the highest number of malaria cases in Agbowo.
Data on the health conditions and symptoms of selected environmentally related illnesses were gathered from household respondents, who were asked to indicate all the environmental illnesses and symptoms that they had experienced. These results are evidence that Agbowo and Bodija community residents are highly exposed to water- and food-borne diseases, hence the prevalence of acute gastrointestinal illness and acute respiratory illness at both study locations. The high rates of skin infection, typhoid and abdominal pains are indications of contaminated drinking water at both locations, stemming from poor environmental behaviour and hygiene coupled with the low level of environmental sanitation. Open dumping of waste and the dumping of waste in streams and lakes around the communities are responsible for the high rate of water- and food-borne illness reported at our study locations.
However, acute respiratory illness is observed among our respondents owing to the prevalence of health conditions like cough (61 (29.05%) in Agbowo and 91 (43.13%) in Bodija) and sore throat (94 (44.76%) in Agbowo and 69 (32.7%) in Bodija). Air pollution via open dumping, indiscriminate waste disposal in streams and lakes, and the presence of poorly managed dumpsites and pit latrines can be held responsible for these acute respiratory illnesses. Data gathered from the questionnaires show that in Agbowo 69 (32.86%) respondents visited a hospital for an environmentally related illness less than a year ago, 108 (51.43%) between 1 and 5 years ago, and 33 (15.71%) more than 5 years ago, while in Bodija 74 (35.07%) respondents reported such a hospital visit less than a year ago, 84 (39.81%) between 1 and 5 years ago, and 53 (25.12%) more than 5 years ago. To determine whether there is any relationship between the environmental characteristics of our study areas, as reported by respondents during the field survey and as indicated on our checklist, and the health conditions reported by our household respondents, a Pearson Product Moment Correlation was carried out on our response data in the IBM SPSS statistical package version 21. Tables 5 and 6 below show the results of this analysis. Table 5 shows that there is a significant relationship between environmental characteristics and health conditions (r = -0.91; p < 0.05), and Table 6 shows the same (r = -0.91; p < 0.05).
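The correlation reported in Tables 5 and 6 can be computed with any standard statistics library. The sketch below runs the calculation on hypothetical paired scores, since the per-household raw data are not reproduced here; only the sign and rough magnitude are meant to mirror the reported result.

```python
# Illustrative Pearson product-moment correlation (PPMC), as used for Tables 5 and 6.
import numpy as np
from scipy import stats

env_scores = np.array([8, 10, 12, 14, 15, 9, 11, 13, 16, 7])     # hypothetical environmental scores
health_scores = np.array([14, 13, 10, 8, 7, 12, 11, 9, 6, 15])   # hypothetical health-condition scores

r, p_value = stats.pearsonr(env_scores, health_scores)
print(f"r = {r:.2f}, p = {p_value:.4f}")  # strongly negative, like the reported r = -0.91
```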
Conclusion
From our study, we have been able to conclude statistically that there is a significant difference in the environmental characteristics of the Agbowo and Bodija locations. Our study also shows that a significant difference exists between waste disposal methods and management in the Agbowo and Bodija communities. With this investigative research, we have demonstrated a statistical relationship between environmental characteristics and the health conditions reported at both study locations. This further affirms the claim of Loredana and Maria (2010) that two main health outcomes are statistically associated with waste exposure: cancer and congenital malformations. This study has shown that improper handling of waste and poor environmental hygiene or sanitation can influence the likelihood of developing illnesses such as acute respiratory illness and acute gastrointestinal illness, including sore throat, nausea, typhoid, watery stool and acute abdominal pains. Exposure to air pollution, too, is associated with health conditions like asthma and sore throat, and with the likelihood of associated lung diseases. There are confounding factors that can cause vomiting; however, our study has clearly shown the association of vomiting with food-borne diseases as experienced among respondents in our study locations.
Lack of access to quality drinking water free of contaminants can be associated with the prevalence of typhoid, skin infections and abdominal pains in our study areas. In Agbowo, for instance, poor environmental sanitation and the absence of potable water for food preparation and drinking can be held responsible for the prevalence of watery stools, vomiting and other related gastrointestinal diseases. A high population of flies is an indication of poor environmental sanitation, and the number of respondents who had malaria is very high at both study areas, an indication that our study areas are heavily infested with malaria-causing mosquitoes.
This study has therefore revealed that poor MSW disposal methods and management constitute a high risk factor for public health. Improper handling of waste, both liquid and solid, especially by waste collectors and scavengers, increases the risk to human health. This study is preliminary research on community participation in waste disposal and its management. We have established scientific proof that environmental characteristics and poor hygiene are statistically associated with environmentally related illness. Further research is needed on the prevalence of environmentally related diseases among the populations inhabiting Agbowo and Bodija.
Recommendation
There is no gainsaying that Nigeria has many environmental laws, environmental agencies and private waste contractors. It is therefore important that all established environmental laws be properly implemented and enforced. Environmental law enforcement agencies and personnel must be well trained to improve the proper handling and management of environmental laws and the treatment of defaulters. The dumpsites located at our study areas are too close to residential houses and open market places. Dumpsites should be properly located and managed to minimize their effects on the environment; they should be well fenced and sited away from human settlements. There should be follow-up on the functioning of dumpsites to avoid pollution of the environment and health hazards. An integrated waste management system must be adopted by governments at all levels. Waste management must go beyond mere collection and dumping at landfills: government must begin the process of formulating and adopting an integrated waste management system to enhance the proper management and handling of municipal solid waste.
Our study revealed that private waste contractors and collectors, including those given concessions by the government, are part of the environmental problems that Nigerian societies face. This is seen in the poor, neglected condition of the vehicles used for waste collection and in the neglect of the safety of their workers, who, owing to improper handling of waste, are exposed to various gastrointestinal diseases and respiratory illnesses. This is why government at all levels must remain responsible for environmental management and governance. Government must provide adequate facilities and infrastructure for waste disposal and management; this ensures environmental sustainability.
Government must begin to adopt a policy agenda and strategic action plans to sensitize community members to their mode of participation in municipal waste disposal and management. In this manner, community members will be enlightened on sorting waste before disposal and on self-monitoring waste disposal methods so that indiscriminate waste disposal is prevented in their communities.
Health educators need to be trained and properly engaged to provide public education about the effects of dumpsites on population health and public health in general. Public orientation and awareness campaigns must be organized for communities and municipalities on the proper handling of municipal waste (both solid and liquid) and on disposal methods, in a bid to stop the open dumping of refuse and the indiscriminate disposal of waste in streams and lakes. The practice of putting heaps of waste in open spaces and by roadsides must be stopped.
The First Instar Larva of Lutzomyia longipalpis (Diptera: Phlebotomidae)
The morphology and chaetotaxy of the first instar larva of Lutzomyia (Lutzomyia) longipalpis are described based on observations made under the scanning electron microscope. Because three-dimensional images were studied, some terminological changes are proposed to give a more realistic description of the positions of the setae. On the larval body, the numbers of pairs of setae are as follows: 9 on the head, 12 on the prothorax, 8 on the meso- and metathorax, 6 on each of the first to eighth abdominal segments, and 8 on the ninth abdominal segment.
Lutzomyia (Lutzomyia) longipalpis (Lutz and Neiva, 1912) is the type species of the genus Lutzomyia França (Diptera: Phlebotomidae). This sand fly is of medical and veterinary interest because it is the most important insect host of Leishmania chagasi Cunha and Chagas, the causative organism of American visceral leishmaniasis. L. longipalpis has proved to be amenable to laboratory colonization, and some closed colonies have now been maintained for almost 20 years.
Although laboratory-reared material has been available for many years, little is known of the morphology of the immature stages of L. longipalpis. Guitton and Sherlock (1969), based on studies by optical microscopy, described the egg, fourth instar larva and pupa. Even accepting the limitations of their study methods, the descriptions of Guitton and Sherlock (1969) are inadequate.
Scanning electron microscopy (SEM) was used by Ward and Ready (1975) to study the chorionic sculpture of sand fly eggs, including those of L. longipalpis. SEM studies have also been the basis of a description of the pupa of L. longipalpis (Leite et al. 1991). Herein, we provide illustrations from SEM studies of the first instar larva of L. longipalpis, like that already provided for the fourth instar larva (Leite & Williams 1996). Larvae were killed by dropping them in hot water (70°C). They were fixed in 70% ethyl alcohol, dehydrated in a sequence of increasing concentrations of ethyl alcohol, submitted to critical point drying in carbon dioxide, and sputter-coated with colloidal gold (Leite & Williams 1996).
Apart from the head and the ninth abdominal segment, descriptions of setae are given in an anterior-posterior sequence, beginning from the dorsal mid-line and working circumferentially in a latero-ventral direction. The nomenclature used here is based on that of Barretto (1941) and previously adopted by Leite and Williams (1996).
RESULTS
The description is based on examination of 12 larvae.
General appearance of the first instar larva - The larva emerges through a dorsal, longitudinal fissure (Fig. 1) and rapidly leaves the egg shell. At eclosion, it is 0.51 mm long and 0.08 mm wide (at the second thoracic segment). The integument is wrinkled, bearing minute tubercles, with or without spicules, and also with paired setae that may be simple (bare) or barbed (brush-like). Each seta arises from a tubercle. When barbed, a seta may or may not have a smooth, unbarbed stem. The barbed setae on the dorsal surface, except those on the ninth abdominal segment, have spatulate tips.
Barbed setae on the head and on the lateral and ventral surfaces of segments have pointed tips.
Ninth abdominal segment - The last abdominal segment is quite different from those lying anteriorly (Figs 20, 21). There is a dorsal pair of prominent caudal lobes, with a single caudal seta arising from each lobe (Figs 19, 20). The following structures are visible (Fig. 23) at the base of each caudal lobe: a mammiliform button (or sensillum), a campaniform sensillum, the base of the caudal seta, and the posterior lobular seta. A caudal depression is rugose anteriorly and spiculate posteriorly. Each caudal seta is 0.70 mm long.
DISCUSSION
The immature stages of phlebotomine sand flies are little known. This is because immatures are rarely encountered in the field and because most species are extremely difficult to rear in laboratory conditions. The available descriptions of larvae are based on light microscopy studies. Most commonly, detailed descriptions have been given only of the fourth instar larva. Descriptions of the earlier instars have been much briefer and often only record the extent to which they differ from the fourth instar.
[Figure: Scanning electron photographs of the first instar larva of Lutzomyia longipalpis.]

The terminologies used by Ward (1972) and Forattini (1973) were based on those introduced by Abonnenc (1956, 1972) to describe the larvae of Old World sand flies. All the aforementioned publications gave descriptions of fourth instar larvae, which are morphologically different from the larva of L. longipalpis. Therefore we reverted to the terminology of Barretto (1941), who included descriptions of first stage larvae of several Brazilian species of phlebotomines.
Some descriptive terms of Barretto (1941) have been modified in view of the three-dimensional images obtained by SEM. Use of Barretto's terminology has an additional advantage: it was used to describe the larva of Bruchomyia argentina (Salchell 1953) and those of Nemapalpus nearcticus (Mahmond & Alexander 1992). Barretto's terminology, thus, is applicable to both subfamilies (Bruchomyiinae and Phlebotominae) that Williams (1993) included in the family Phlebotomidae.
Studies by means of SEM revealed features of a first instar larva that were either overlooked or not seen in light microscope studies. An example is the number of setae on the head. Excluding setae on the mouthparts, Barretto (1941) recorded eight pairs of setae on the first instar larvae of the Brazilian species he studied. In dealing with Old World species, Perfil'ev (1968) recorded seven pairs of setae. In the present study, nine pairs of setae were seen on the head of the first instar larva of L. longipalpis. Perfil'ev (1968) stated that first instar larvae of Old World phlebotomines have five teeth on the mandible but only four mandibular teeth in later instars. Other studies on larvae of both Old and New World sand flies have shown that there are four mandibular teeth in all larval instars. SEM observations on the first instar larva of L. longipalpis revealed the presence of only four teeth. This confirms the light microscope observations of Barretto (1941), Hanson (1968), Guitton and Sherlock (1969), and Abonnenc (1972). Abonnenc (1956, 1972) and Perfil'ev (1968) considered that the antennae of sand fly larvae are composed of three segments. Other authors (Barretto 1941, Hanson 1968, Forattini 1973, Ward 1976b) have suggested that the antenna of larvae has only two segments. SEM observations show that the first instar larva of L. longipalpis has an antenna with two segments: a small proximal segment and a much larger, ovoid distal segment. The third (= basal) segment of Abonnenc and Perfil'ev can be better described as the antennal socket.
The integument of the head of the first instar larva of L. longipalpis is bare anteriorly but, posteriorly, the dorsal, lateral and ventral surfaces bear minute, finely pointed spines. Such spines have been observed in Old World phlebotomines, but their arrangement may differ from that seen in L. longipalpis. Perfil'ev (1968) recorded that such spicules occur over the entire head integument of Phlebotomus perfiliewi and P. chinensis; they lie lateral to the egg breaker in P. major; in P. papatasi and P. caucasicus, they are arranged in small, isolated groups; and they are anterior and lateral to the egg breaker in Sergentomyia minuta.
The arrangement of barbed setae with spatulate tips seen in L. longipalpis has also been recorded in two African species: P. freetownensis sudanicus and S. schwetzi (Abonnenc 1956). Perfil'ev (1968) commented that the size and arrangement of spicules on the dorsal surface of the last abdominal segment seem to be characteristic for certain species of phlebotomines. P. papatasi, for example, has few spicules; they are arranged in a single triangular area in P. sergenti, but in two triangular areas in P. perfiliewi. S. minuta has only a few spicules. In contrast to these Old World species, the first instar larva of L. longipalpis has a more extensive distribution of spicules on the dorsal surface of the ninth abdominal segment.
A small tubercle below seta 8 on the prothorax is considered, herein, to be a rudimentary anterior spiracle or the primordium of this structure. Mangabeira (1942a) figured the anterior spiracle of the first instar larva of Brumptomyia avellari. The certainty about the spiracle in B. avellari, examined by light microscopy, and the doubts remaining after the present SEM study of L. longipalpis could indicate a morphological difference at the generic level.
Differences between the larva of L. longipalpis and the first instar larvae of several Old World species have already been mentioned. The differences between the first instar larvae of three species of Brumptomyia, described by Barretto (1941), Mangabeira (1942a, e) and Hanson (1968), and that of L. longipalpis deserve further study, by SEM if possible. Hanson (1968) briefly described the first instar of Warileya rotundipennis, but a more detailed description is required before a valid comparison can be made with the first instar larva of a species of Lutzomyia.
Within the genus Lutzomyia (which might be an invalid taxonomic concept), the first instar larva of L. longipalpis can be differentiated from all those described by Barretto (1941), Mangabeira (1942b-d) and Hanson (1968).
The foregoing discussion demonstrates that morphological features of first instar larvae can be distinctive at specific and generic levels, and can probably contribute to studies on the systematics of phlebotomines which, hitherto, have been based on the morphology of adults.
Specimens were obtained from a closed laboratory colony that has been maintained in Belo Horizonte since 1983. The colony originated from blood-fed females collected in Abaetetuba, State of Pará, Brazil, by Marisa Cenizio dos Santos in collaboration with members of the Wellcome Parasitology Unit, Belém, in 1983.
Promising Antimicrobial Action of Sustained Released Curcumin-Loaded Silica Nanoparticles against Clinically Isolated Porphyromonas gingivalis
Background. Porphyromonas gingivalis (P. gingivalis) has always been one of the leading causes of periodontal disease, and antibiotics are commonly used to control it. Numerous side effects of synthetic drugs, as well as the spread of drug resistance, have led to a tendency toward using natural antimicrobials, such as curcumin. The present study aimed to prepare and physicochemically characterize curcumin-loaded silica nanoparticles and to detect their antimicrobial effects on P. gingivalis. Methods. Curcumin-loaded silica nanoparticles were prepared using the chemical precipitation method and then were characterized using conventional methods (properties such as the particle size, drug loading percentage, and release pattern). P. gingivalis was isolated from one patient with chronic periodontal diseases. The patient’s gingival crevice fluid was sampled using sterile filter paper and was transferred to the microbiology laboratory in less than 30 min. The disk diffusion method was used to determine the sensitivity of clinically isolated P. gingivalis to curcumin-loaded silica nanoparticles. SPSS software, version 20, was used to compare the data between groups with a p value of <0.05 as the level of significance. Then, one-way ANOVA testing was utilized to compare the groups. Results. The curcumin-loaded silica nanoparticles showed a nanometric size and a drug loading percentage of 68% for curcumin. The nanoparticles had a mesoporous structure and rod-shaped morphology. They showed a relatively rapid release pattern in the first 5 days. The release of the drug from the nanoparticles continued slowly until the 45th day. The results of in vitro antimicrobial tests showed that P. gingivalis was sensitive to the curcumin-loaded silica nanoparticles at concentrations of 50, 25, 12.5, and 6.25 µg/mL. One-way ANOVA showed that there was a significant difference between the mean growth inhibition zone, and the concentration of 50 µg/mL showed the highest inhibition zone (p ≤ 0.05). Conclusion. Based on the obtained results, it can be concluded that the local nanocurcumin application for periodontal disease and implant-related infections can be considered a promising method for the near future in dentistry.
Introduction
The main causes of periodontal diseases are inflammation and infection of the gums and the bone surrounding the teeth; the early stage of periodontal disease is called gingivitis. New designs based on nanotechnology have been discovered to improve the bioavailability of curcumin and reduce its cytotoxicity [22]. Today, nanotechnology has become important in various medical fields, such as drug delivery [23]. Nanoporous silica materials have been extensively studied [24][25][26] since their initial deployment as a drug delivery platform in 2001 [27] or as implant surface coatings [28,29]. Nanoporous silica has a variety of qualities that make it an attractive option for a controlled-release system: it has a large surface area, large pore volumes, and tunable pore sizes with narrow pore size distributions, allowing for significant cargo loading. On the other hand, uncontrolled leaching of antimicrobial chemicals from release mechanisms has disadvantages. Although a burst release can benefit the treatment of acute infections and is much more efficient than protracted delivery, it is essential to have controlled-release systems that can stay quiescent for lengthy periods yet release their cargo when triggered. In such systems, the medicine remains in the pores and can be released when required. Due to the antimicrobial properties of curcumin and the useful characteristics of porous silica nanoparticles as a sustained-release carrier, the present study was conducted with the aim of preparing and physicochemically characterizing curcumin-loaded silica nanoparticles and evaluating their antimicrobial effect on P. gingivalis.
Preparation of Mesoporous Silica Nanoparticles Containing Curcumin
Fifteen milligrams of silica nanoparticle powder (Nano Sadra Company, Isfahan, Iran) and 0.75 mg of curcumin powder (Sigma Aldrich, Burlington, MA, USA) were added to 10 mL of cyclohexane. The prepared suspension was sonicated, stirred overnight, and washed with cyclohexane, and the silica particles containing curcumin were vacuum dried [30]. The nanoparticles were stored at −18 °C for further investigations.
Sampling of P. gingivalis
To obtain clinically isolated P. gingivalis, one patient with chronic periodontal disease was selected from the patients referred to the Department of Periodontics, Faculty of Dentistry, Tabriz University of Medical Sciences, Tabriz, Iran. The surface of the tooth was cleaned with sterile gauze, and the gingival crevice fluid was then sampled using sterile filter paper and placed in a thioglycollate broth medium. The samples were moved to the microbiology laboratory in less than 30 minutes and stored at −20 °C until assayed.
Cultivation of P. gingivalis
The isolated sample from the mentioned patient was vortexed for 30 s. A selective medium for P. gingivalis, containing Columbia agar base supplemented with vitamin K1, 5% defibrinated sheep blood, hemin, colistin sulfate, bacitracin, and nalidixic acid, was used [31]. Then, the plates were incubated under 80% N2, 10% CO2, 10% H2, and 0% O2 in anaerobic conditions provided by the Anoxomat system (MART Microbiology B.V., Drachten, The Netherlands). The growth of bacterial colonies was examined at 48, 72, and 96 h. The trypsin reagent test was used to confirm the presence of P. gingivalis on the plates; gingipain, which is produced by P. gingivalis, is a trypsin-like enzyme. The aerotolerance test and biochemical and microbiological assays (such as colony morphology, special potency disks, pigment production, fluorescence under UV light, the catalase test, indole, and a trypsin-like peptidase activity assay) were used to identify P. gingivalis isolates [31].

The prepared nanoparticles were characterized using a dynamic light scattering (DLS) device (DLS, Malvern, Cambridge, UK) for size determination. A suspension of the nanoparticles was prepared in distilled water and poured into the device. An argon laser beam at 633 nm and a scattering angle of 90° at 25 °C were used for the DLS device settings. DLS is an instrument for measuring the hydrodynamic size of molecules and of submicron and nanoparticles. This test was performed three times.
Morphology and the Cytotoxicity Investigation
Transmission electron microscopy (TEM) is a powerful tool to investigate the interaction of nanoparticles, their structure, and their morphology. A transmission electron microscope (TEM-2100F; JEOL, Tokyo, Japan) was used to investigate the mesoporous structure of the silica nanoparticles. For this analysis, the samples were prepared by dropping a solution of nanoparticles in deionized water on a carbon-coated copper TEM grid, followed by imaging. Size histograms for free silica nanoparticles and curcumin-loaded silica, based on TEM analysis, were also reported.
Cell viability examination was used to define the cytotoxicity of the prepared nanoparticles against dental pulp stem cells. The cells were obtained from the cell bank of Shahid Beheshti University (Tehran, Iran). The nanoparticles, as disks, were placed on the bottoms of the wells. The cells were cultured in a single layer in DMEM including serum and antibiotics. After 72 h, the cells were washed, MTT solution (2 mg/mL in PBS) was added, and the plates were incubated for 4 h at 37 °C. As a next step, the above solution was removed, and 200 µL of DMSO and 25 µL of Sorenson glycine buffer were added to each well. The absorbance was read at 540 nm, and the percentage of living cells was evaluated. Cells grown without any material were considered the control group.
Determination of Curcumin Loading Inside the Nanoparticles
One of the key parameters for drug-loaded nanoparticles is the drug loading percentage, which is defined as the mass ratio of drug to drug-loaded nanoparticles. To determine the amount of curcumin loaded on the silica nanoparticles, 10 mg of the prepared nanoparticles was dissolved in 20 mL of dimethyl sulfoxide. One milliliter of the dissolved nanoparticle solution was poured into a cuvette of an ultraviolet spectrophotometer, and the wavelength was set to 350 nm, the λmax for curcumin. This test was performed three times.
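A minimal Python sketch of this calculation is shown below; the calibration slope, intercept, and absorbance are assumed illustrative values, not the study's data.

def loading_percent(absorbance, slope, intercept, volume_ml, carrier_mass_mg):
    # Beer-Lambert calibration line: absorbance -> concentration (mg/mL).
    conc_mg_per_ml = (absorbance - intercept) / slope
    drug_mass_mg = conc_mg_per_ml * volume_ml
    # Loading = mass of drug relative to mass of drug-loaded nanoparticles.
    return 100.0 * drug_mass_mg / carrier_mass_mg

# 10 mg of nanoparticles dissolved in 20 mL of DMSO, as described above.
print(f"{loading_percent(0.69, slope=2.0, intercept=0.01, volume_ml=20, carrier_mass_mg=10):.0f}%")  # ~68%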
Evaluation of Release Pattern
Drug release denotes the procedure in which drug solutes migrate from the initial position in the carrier system to the carrier's outer surface and then to the release medium.
To determine the pattern of drug release from the curcumin-loaded silica nanoparticles, phosphate buffer (300 mL) was poured into 3 beakers, and 5 mg of the prepared nanoparticles was poured into each beaker. The pH of the liquid was adjusted to 7.4, the temperature was set to 37 °C, and the stirrer was set to 100 rpm; these parameters were established to mimic the body's conditions in a drug dissolution test (pH of 7.4, temperature of 37 °C, and stirring rate of 100 rpm). Samples (1 mL) were taken from each beaker every day, and the absorbance was recorded using a UV spectrophotometer at 350 nm for curcumin. Each sample taken from the beakers was replaced with 1 mL of fresh buffer medium to keep the total volume constant. The measured UV absorbances were then converted to concentrations, and the cumulative release percentage was plotted against time (days) for the release study. The cumulative release percentage (%) was calculated according to the following equation:

Cumulative percentage release (%) = (Volume of sample withdrawn (mL) / Volume of release medium (V)) × P(t − 1) + P(t),

where P(t) is the percentage release at time t and P(t − 1) is the cumulative value at the previous sampling time.
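A minimal Python sketch of this bookkeeping is given below, assuming each day's absorbance has already been converted to a release percentage via a curcumin calibration curve and reading P(t − 1) as the previous cumulative value.

def cumulative_release(measured_percent, v_sample_ml=1.0, v_medium_ml=300.0):
    # Correct each day's reading for the 1 mL withdrawn and replaced with fresh buffer.
    cumulative, previous = [], 0.0
    for p_t in measured_percent:
        previous = (v_sample_ml / v_medium_ml) * previous + p_t
        cumulative.append(round(previous, 2))
    return cumulative

# Hypothetical percentages measured over the first five (burst-release) days.
print(cumulative_release([12.0, 22.0, 30.0, 36.0, 40.0]))  # [12.0, 22.04, 30.07, 36.1, 40.12]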
The Antimicrobial Action of Nanoparticles
The original method for determining susceptibility to antimicrobials was based on broth dilution methods. In this study, the disk diffusion method, a routine laboratory test, was utilized to investigate the antibacterial effects of the silica nanoparticles loaded with curcumin. This method identifies the action of an antimicrobial material on bacteria by creating a concentration gradient around a disk. The bacterial isolate used in this study was isolated from a patient with chronic periodontal disease. First, a bacterial suspension of 0.5 McFarland was prepared, and then, using a sterile cotton swab, a uniform lawn culture was grown on the surface of Brucella agar enriched with dried sheep blood (5%), vitamin K1 (1 µg/mL), and hemin (5 µg/mL). To prepare disks containing nanoparticles, sterile blank disks were immersed in nanoparticle suspensions at concentrations of 3.12, 6.25, 12.5, 25, and 50 µg/mL, and then the disks were placed on the agar surface. A blank disk was used as a negative control, and metronidazole antibiotic disks (5 µg/mL) were used as a positive control. After incubating the plates at 37 °C for 42 h, the growth inhibition zones were measured: the halos of non-growth around the disks were measured from the back of the plate with a ruler, in millimeters.
In the next step, Brucella broth supplemented with hemin (5 µg/mL), vitamin K1 (1 µg/mL), and lysed horse blood (5%), in the presence of serial concentrations of the nanoparticles (50, 25, 12.5, and 6.25 µg/mL), was applied to obtain the MICs of the nanoparticles against P. gingivalis. The wells were incubated for 48 h at 35 °C and then observed for microbial growth turbidity. The positive control was the antibiotic metronidazole, and water was considered the negative control.
Statistical Analysis
The results are stated as descriptive indices. The Shapiro-Wilk test was applied to test the normality of the units. Then, we used SPSS software, version 20 (IBM Company, Armonk, NY, USA), to compare the data between groups, with a p value of <0.05 as the significance level. One-way ANOVA and Tukey's post hoc test were utilized to compare the groups. The flow chart of the study process is shown in Figure 1.
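For readers without SPSS, an equivalent analysis can be sketched in Python as below; the zone diameters are placeholders, not the study's data.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate inhibition-zone diameters (mm) per nanoparticle concentration.
zones = {
    "6.25 ug/mL": [9.0, 9.5, 8.8],
    "12.5 ug/mL": [11.2, 11.0, 11.5],
    "25 ug/mL": [13.1, 13.4, 12.9],
    "50 ug/mL": [15.8, 16.1, 15.5],
}

f_stat, p_value = f_oneway(*zones.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:  # same significance level as in the study
    endog = np.concatenate(list(zones.values()))
    groups = np.repeat(list(zones.keys()), [len(v) for v in zones.values()])
    print(pairwise_tukeyhsd(endog, groups, alpha=0.05))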
Results and Discussion
The low bioavailability of curcumin is the most important concern for its clinical use. Additionally, little information is available about its safety at higher doses. Today, to reduce its toxicity and improve the bioavailability of curcumin, new designs based on its nanoformulation have been discovered [17,18]. Evaluating the physicochemical properties of nanoparticles is necessary to ensure their suitability for various uses. The interactions of nanoparticles in vitro and in vivo are related to their physicochemical properties [32]. Reducing the size of nanoparticles increases their surface area, the interaction of these nanoparticles with the environment increases, and their ways of crossing body barriers and entering cells will be different [33,34].
The average particle size of the drug-free silica nanoparticles is shown in Figure 2a, and that of the curcumin-loaded silica nanoparticles is shown in Figure 2b. The results showed that both types of nanoparticles had nanometric sizes: the drug-free silica nanoparticles had a mean particle size of 90 ± 1.02 nm, while the curcumin-loaded silica nanoparticles had a mean particle size of 110 ± 1.23 nm. Figure 3a shows the morphology of the drug-free silica nanoparticles, and the morphology of the curcumin-loaded silica nanoparticles is shown in Figure 3b. The size histograms for the free silica nanoparticles and the curcumin-loaded silica, based on TEM analysis, are also shown in Figure 3c,d, respectively. Our outcomes showed that the nanoparticle sizes differed in the DLS analysis compared to the TEM analysis. This difference may be owing to the hydration of the outer layer of the nanoparticles in the DLS technique. In addition, the aggregation of nanoparticles and their non-spherical shape could be causes of this difference [35].
Nanoparticles exert their antimicrobial effects on bacteria by several mechanisms that depend on the size of the nanoparticles and the type of bacteria. The dose of nanoparticles and their physicochemical properties (shape, size, and surface properties) are very important to their antimicrobial effects [36]. The size of nanoparticles is important to their antibacterial effect: smaller nanoparticles, by binding to the surface of bacteria with high affinity, can disrupt the function of the bacterial cell membrane more than larger nanoparticles can [37]. The interaction of nanoparticles with the bacterial membrane causes local pores in the membrane. Additionally, the entry of nanoparticles into bacterial cells causes damage to DNA and proteins (especially sulfur-rich proteins). In this way, nanoparticles can disrupt the function of bacteria. Nanocarriers containing antibacterial agents can also fuse with the bacterial cell wall and introduce their medicinal substances into the cytoplasm [38].
TEM images confirmed the mesoporous structure and the rod-shaped morphology of the prepared nanoparticles. The filled pores of mesoporous silica can also be detected by TEM imaging of drug-loaded mesoporous silica nanoparticles, showing the loading of curcumin into the silica nanoparticles. Rod-shaped nanoparticles may display a longer circulation time and a slighter uptake by the reticuloendothelial system (RES) in the body compared with spherical particles [39,40]. A recent in vivo study also showed that rod-type nanoparticles exhibit a high capacity to escape uptake through the RES and show a longer presence in the blood compared with spherical nanoparticles [41].
The percentage of cytotoxicity (cell viability) of the prepared nanoparticles on dental pulp stem cells is shown in Figure 4. There was no significant reduction in the viability of the cells exposed to the nanoparticles compared to the control group (cells grown without any material). Therefore, the prepared nanoparticles were non-cytotoxic against dental pulp stem cells (Figure 4).

The loading results showed that the loading percentage of curcumin in the silica nanoparticles was 68 ± 1.02%. Currently, most nanoparticle systems have relatively low drug loading, and increasing the drug-loading capacity remains a challenge. The reason for the high drug-loading percentage of our nanoparticles was their mesoporous structure.
The prepared nanoparticles displayed a relatively fast release pattern in the first 5 days (Figure 5). The release of curcumin from the silica nanoparticles then continued slowly until day 45. The burst release of curcumin from the prepared nanoparticles could eradicate acute infections, and the controlled sustained release could provide the drug content for long periods; as a result, the drug remained in the pores and could be released when required [42]. It seems that the pattern of rapid drug release from the nanoparticles in the first days is related to drug adsorbed to the surface of the nanoparticles, which is not inside the cavities and has only a weak interaction with the outer surface of the cavities. The curcumin molecules inside the cavities, which had electrostatic interactions with the cavity walls, caused the slow and continuous release on days 6 to 45. A slow-release pattern is very critical in the clinical application of drugs [43]. Memar et al. achieved similar results for meropenem-loaded silica nanoparticles [44]: they showed that, in the first two days, about 40 percent of the meropenem was released from the silica nanoparticles, and then a slow release was sustained until the 30th day.
With a conventional drug-delivery method, the drug concentration in the blood varies within a relatively large range over a short period of time, which can fall short of the lowest effective dose or exceed the maximum tolerated dose. As a result, frequent doses are necessary, which are associated with side effects. Using an appropriate nanocarrier, the blood concentration of the drug at the site of infection can be maintained at the required effective concentration for a long time, which reduces the frequency of administration, provides good stability, reduces patient discomfort, and improves patient compliance. A drug loaded in a nanocarrier, with its long-term release, has a much more prominent inhibitory effect on cell growth than the free drug at the same concentration [45].
Antimicrobial Action
The results of the microbial tests showed that P. gingivalis is sensitive to the silica nanoparticles loaded with curcumin at concentrations of 50, 25, 12.5, and 6.25 µg/mL. The mean growth inhibition zones for the curcumin-loaded silica nanoparticle concentrations and the control antibiotic (metronidazole) are shown in Table 1 and Figure 6.
Based on the MIC test, the nanoparticles showed inhibitory effects against P. gingivalis at 6.25 µg/mL. In addition, based on our previous study, free silica nanoparticles did not have any significant antibacterial effects [46].
One-way ANOVA (between the curcumin groups) revealed that there is a significant relationship between the concentration of curcumin-loaded silica nanoparticles and the size of the growth inhibition zone, and the highest inhibition zone was displayed at the concentration of 50 µg/mL (p ≤ 0.05). Tukey's post hoc test showed that there was a significant difference between the antimicrobial effects of all concentrations of curcumin-loaded silica nanoparticles (p ≤ 0.05). Thus, the nanoparticles had dose-dependent antimicrobial effects.
Other studies used P. gingivalis (ATCC33277). In one study, Shahzad et al. reported that growth inhibition of P. gingivalis (ATCC33277) was achieved by curcumin at a concentration of 7.8 µg/mL [47]. Additionally, Mandroli and Bhat showed that the MIC of curcumin against P. gingivalis (ATCC33277) was 125 µg/mL [48], while Izui et al. showed that the prevention of bacterial growth occurred with curcumin at a concentration of 20 µg/mL [49]. In another recent study, the sensitivity of P. gingivalis (ATCC33277) to curcumin was shown at a concentration of 100 µg/mL [50]. The main reason for the difference between our results and those of other studies may be that they investigated the effects of free curcumin on laboratory strains, while in our study, the effects of sustained-release nanoparticles containing curcumin on clinically isolated P. gingivalis were investigated.
In our previous study, the prevalence of P. gingivalis isolated from the gingival crevicular fluid (GCF) of 15 Iranian patients with implant failure was investigated. The results showed that, out of 15 patients, eight (53.33%) were positive for the presence of P. gingivalis. The antimicrobial action of curcumin nanocrystals was also investigated against P. gingivalis isolated from patients with implant failure, and the results showed that the curcumin nanocrystals had an MBC of 12.5 µg/mL and an MIC of 6.25 µg/mL. Additionally, the curcumin nanocrystals showed the highest inhibition zone at the concentration of 50 µg/mL (p = 0.0003) [51].
A study showed that curcumin inhibited bacterial strains by damaging the bacterial membrane [52]. Curcumin can also inhibit the proliferation of bacteria by perturbing FtsZ assembly. Some studies have shown that curcumin inactivates bacteria by stimulating the generation of reactive oxygen species (ROS) [53,54].
Kumbar and coworkers examined the effects of curcumin on biofilm formation and virulence factor gene expression of P. gingivalis using gene expression studies. They showed that the MBC and MIC of curcumin for both the clinical and ATCC strains of P. gingivalis were 125 and 62.5 µg/mL, respectively. Curcumin inhibited the attachment and biofilm formation of the bacteria in a dose-dependent manner. Additionally, curcumin decreased the virulence of P. gingivalis by decreasing the expression of proteinases (rgpA, rgpB, and kgp) and adhesins (fimA, hagA, and hagB), the main virulence factor genes. Curcumin thus presented anti-biofilm and antibacterial effects against P. gingivalis. Furthermore, due to its pleiotropic actions, curcumin can be an inexpensive and readily available therapeutic agent in the treatment of periodontal disease [55].
Chen and coworkers investigated the anti-inflammatory effects and the mechanism of action of curcumin in macrophages stimulated by P. gingivalis lipopolysaccharide (LPS). They reported that curcumin prevented the expression of IL-1β and TNF-α genes and protein synthesis in RAW264.7 cells that were stimulated with LPS of P. gingivalis. In RAW264.7 cells, LPS of P. gingivalis stimulated NF-κB-dependent transcription, which was downregulated by pretreatment with curcumin [56].
The Strengths and Limitations
The results of this investigation showed that curcumin-loaded silica nanoparticles have suitable antibacterial actions against P. gingivalis. This finding could be very useful in overcoming bacterial resistance. In addition, the effective concentrations obtained in this study were lower than those obtained in previous research works, advancing the hope of preparing optimal formulations based on these nanoparticles.
The main limitation of this study was its use of a single isolate of P. gingivalis. A single isolate is not enough to draw conclusions regarding MIC values and accurately compare them to other studies. In addition, the possibility of human error in the sampling of the bacteria, nanoparticle aggregation, and microbial contamination with other bacterial strains can be considered further limitations.
There are also other types of bacteria that act as periodontal pathogens, such as Fusobacterium nucleatum, Prevotella intermedia, and Aggregatibacter actinomycetemcomitans. Curcumin-loaded silica nanoparticles should also be examined against these bacteria in future studies.
This report was an in vitro study. Any possible toxicity of these nanoparticles should be tested in future studies before any animal or clinical trials. Moreover, their antimicrobial and antibiofilm mechanisms should be investigated to confirm their exact function.
Suggestions and Future Perspective
It is suggested to investigate the effects of curcumin-loaded silica nanoparticles on P. gingivalis-related infections in vivo and then clinically. Additionally, silica nanoparticles co-loaded with curcumin and other antibacterial agents can be prepared, and their antibacterial effects can be investigated in vitro, in vivo, and clinically. Only a limited number of clinical isolates of P. gingivalis were analyzed in this study; future studies should investigate the effects of curcumin-loaded silica nanoparticles on a greater number of bacterial isolates.
Nanoformulations of plant substances or phytochemicals can replace chemical antibacterial drugs in the future. This replacement can be a solution to reduce the use of antibiotics, which will reduce not only microbial resistance but also the toxicity and side effects caused by antibiotics.
Conclusions
This study showed that P. gingivalis clinically isolated from the gingival crevice fluid of a patient with chronic periodontal disease is highly sensitive to curcumin-loaded silica nanoparticles at low concentrations. In addition, the two-stage release profile of the prepared nanoparticles can provide both a burst release and a controlled sustained release of curcumin, which can be used to eradicate acute infections at first and then provide the drug content for a long time. It can be concluded that local nanocurcumin application for periodontal disease and implant-related infections can be considered a promising method for the near future in dentistry.

Informed Consent Statement: Informed consent was obtained from the patient involved in the study and for publishing this paper.
Data Availability Statement:
The raw data from the reported study are available upon request from the corresponding author.
From Symplectic to Poisson. A Study of Reduction and a Proposal Towards Implosion
The imploded cross-section of a symplectic manifold is a stratified space allowing for an abelianization of its symplectic reduction. After recalling symplectic and Poisson reduction and reviewing the basics of symplectic implosion, we prove a cross-section theorem for Poisson manifolds, generalizing the Guillemin-Sternberg theorem for symplectic manifolds, which constitutes a first step towards Poisson implosion. On our way, we find and fix a mistake in the proof of Guillemin-Sternberg's theorem, and we identify Poisson transversals as the right analogue to symplectic submanifolds in this context.
Moment maps
We begin by reviewing the language we will be using in the rest of the dissertation: that of symplectic and Poisson manifolds, smooth Lie group actions and moment maps. We offer explicit constructions of moment maps in some special cases and several examples.
Symplectic Geometry
Let M be a smooth manifold and ω ∈ Ω²(M). We define the kernel of ω as the set ker ω := {v ∈ TM : i_v ω = 0} ⊆ TM and say that ω is nondegenerate if ker ω = 0. This is equivalent to the condition that the bundle morphism ω : TM → T*M given by ω(v) := i_v ω is in fact a bundle isomorphism. We use the same symbol for the 2-form and the associated bundle morphism because it will be clear from the context which of the two we are referring to each time. Recall that a symplectic manifold is a pair (M, ω), where ω ∈ Ω²(M) is closed and nondegenerate; such an ω is called a symplectic form.

Nondegeneracy of ω amounts to the matrix (ω_ij) of its local components being nonsingular at each point. If (M, ω) is a symplectic manifold, then necessarily the dimension of M is even, since every odd-dimensional skew-symmetric matrix is singular.
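For completeness, the linear-algebra fact just used admits a one-line verification: if A is a skew-symmetric n × n matrix with n odd, then det A = det Aᵀ = det(−A) = (−1)ⁿ det A = −det A, so det A = 0.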
Let p ∈ M and let W ⊆ T_pM be a vector subspace. We define its orthogonal space with respect to ω as

W^⊥ := {v ∈ T_pM : ω(v, w) = 0 for all w ∈ W}.
Depending on the relation between W and its orthogonal, we can consider different types of submanifolds on a symplectic manifold. Recall that a subspace W is called symplectic if W ∩ W^⊥ = 0, isotropic if W ⊆ W^⊥, coisotropic if W^⊥ ⊆ W, and Lagrangian if W = W^⊥. If N ⊆ M is a submanifold, we say that N is symplectic (resp. isotropic, coisotropic, Lagrangian) if T_pN is a symplectic (resp. isotropic, coisotropic, Lagrangian) subspace of T_pM for every p ∈ N.
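As a standard low-dimensional illustration, consider (R⁴, ω₀) with ω₀ = dq_1 ∧ dp_1 + dq_2 ∧ dp_2: the subspace span{∂/∂q_1} is isotropic, span{∂/∂q_1, ∂/∂q_2} is Lagrangian, span{∂/∂q_1, ∂/∂p_1} is symplectic, and span{∂/∂q_1, ∂/∂q_2, ∂/∂p_1} is coisotropic.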
To study the dynamics on symplectic manifolds, we define two special kinds of vector fields on M.

Definition 1.3. Let (M, ω) be a symplectic manifold. We say that X ∈ X(M) is a symplectic vector field if L_X ω = 0. For any f ∈ C^∞(M), we define the associated Hamiltonian vector field as the unique vector field X_f such that i_{X_f} ω = df, i.e., X_f = ω⁻¹(df). We say that X ∈ X(M) is Hamiltonian if there is some f ∈ C^∞(M) such that X = X_f, in which case we say that f is an associated Hamiltonian or energy function (it is not unique, although any two differ by a locally constant function).
By Cartan's magic formula, X ∈ X(M) is a symplectic vector field if and only if i_X ω is closed, and it is Hamiltonian if in addition i_X ω is exact. We say that a flow on M is Hamiltonian if the induced vector field is Hamiltonian.
Example 1.4. Consider R^{2n} with Cartesian coordinates (q_1, ..., q_n, p_1, ..., p_n). Then the 2-form ω₀ = Σ_i dq_i ∧ dp_i is a symplectic form on R^{2n}, called the standard symplectic form. A vector field X ∈ X(R^{2n}) is Hamiltonian with energy H if and only if

X = Σ_i (∂H/∂p_i · ∂/∂q_i − ∂H/∂q_i · ∂/∂p_i).

Its integral curves, given by q̇_i = ∂H/∂p_i and ṗ_i = −∂H/∂q_i, are precisely the solutions to Hamilton's equations as generally written in classical mechanics, see [LL76, §40].
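A standard worked instance of this (not part of the original text): for n = 1 and the harmonic-oscillator energy H = (q² + p²)/2, the formula above gives X_H = p ∂/∂q − q ∂/∂p, so Hamilton's equations read q̇ = p, ṗ = −q, and the integral curves are the circles t ↦ (q₀ cos t + p₀ sin t, −q₀ sin t + p₀ cos t), along which H is constant.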
In fact, the Darboux theorem ensures that every symplectic form can be locally written in the standard form just described. For a proof see, for instance, [MS98, Sect. 3.2] or [Lee12, Thm. 22.13].

Theorem 1.5 (Darboux). Let (M, ω) be a symplectic manifold of dimension 2n. For every p ∈ M there is a chart (U, (q_i, p_i)_{i=1,...,n}), with p ∈ U, such that ω = Σ_i dq_i ∧ dp_i on U.
Another important example is the following.
Example 1.6. Let M be any smooth manifold. Its cotangent bundle T*M has a natural symplectic structure. Let π : T*M → M be the projection and define the tautological 1-form θ ∈ Ω¹(T*M) as

θ_α(v) := α(π_* v), for α ∈ T*M and v ∈ T_α(T*M).
Then ω = −dθ is a symplectic form on T*M, called the canonical symplectic form on the cotangent bundle. To see that it is symplectic, let (U, (x_i)) be a chart on M and let (π⁻¹(U), (q_i, p_i)) be the associated natural chart on T*M, given by q_i = x_i ∘ π and p_i(α) = α(∂/∂x_i). Then it is easy to see that θ = Σ_i p_i dq_i, and hence ω = Σ_i dq_i ∧ dp_i.
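Spelling out the asserted computation: for v ∈ T_α(T*M) over U one has θ_α(v) = α(π_* v) = Σ_i α(∂/∂x_i) dx_i(π_* v) = Σ_i p_i(α) d(x_i ∘ π)(v) = Σ_i p_i(α) dq_i(v), so θ = Σ_i p_i dq_i and ω = −dθ = Σ_i dq_i ∧ dp_i, which is nondegenerate by Example 1.4.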
This example allows us to extend the notion of classical mechanics on R^{2n}, as in Example 1.4, to classical mechanics on an arbitrary manifold M. If M is the model of the space of degrees of freedom of a mechanical system, one can define a smooth function H on the phase space T*M, called the Hamiltonian or energy function, governing the dynamics of the system through its associated Hamiltonian vector field X_H. This makes sense because T*M is canonically endowed with a symplectic structure. The motions of the system are precisely the integral curves of X_H. This notion can be generalized even further, without asking for the phase space to be the cotangent bundle of the space of degrees of freedom: we call the triple (M, ω, H) a Hamiltonian system if (M, ω) is a symplectic manifold and H ∈ C^∞(M). Further information on symplectic geometry can be found in [MS98; Can08].
Group Actions
Let M be a smooth manifold and G a Lie group acting smoothly on M. Let g be the Lie algebra of G and Φ the action of G on M. We write Φ_g for the left translation by g ∈ G on M and Φ^p for the orbit map of p ∈ M. For any ξ ∈ g we define the associated infinitesimal generator or fundamental vector field as the vector field on M defined by

ξ_M(p) := d/dt|_{t=0} Φ_{exp(tξ)}(p).    (1.1)

We then define g_M(p) := {ξ_M(p) : ξ ∈ g}, the set of infinitesimal generators at p. There are two natural actions of G on its Lie algebra g and its dual g*: the adjoint and coadjoint actions, respectively. To define them, let C_g stand for conjugation by g ∈ G on G, i.e., C_g(h) := ghg⁻¹. Then the adjoint action is given by Ad(g)ξ := (C_g)_* ξ, for ξ ∈ g, and the coadjoint action by Ad*(g)α := Ad(g⁻¹)* α, for α ∈ g*. It is easy to check that these indeed define smooth actions of G on g and g*. It is a well-known fact that the pushforward of the adjoint action Ad : G → GL(g) at the identity is ad : g → gl(g), given by ad(ξ)η = [ξ, η]. The following proposition gives some properties concerning infinitesimal generators and these actions, which will be useful in computations.
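For matrix groups these objects take a familiar concrete form (a standard fact, recorded here for illustration): if G ⊆ GL(n, R), then C_g(h) = ghg⁻¹ gives Ad(g)ξ = gξg⁻¹ for ξ ∈ g, and differentiating along g = exp(sη) at s = 0 yields ad(η)ξ = ηξ − ξη = [η, ξ], the matrix commutator.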
Proposition 1.7. For any ξ, η ∈ g and g ∈ G we have that

1. (Ad(g)ξ)_M(p) = Φ_{g*}(ξ_M(Φ_{g⁻¹}(p))) for every p ∈ M;
2. [ξ_M, η_M] = −[ξ, η]_M.

Proof. To see item 1, we compute for any p ∈ M:

(Ad(g)ξ)_M(p) = d/dt|_{t=0} Φ_{exp(t Ad(g)ξ)}(p) = d/dt|_{t=0} Φ_{g exp(tξ) g⁻¹}(p) = Φ_{g*}(ξ_M(Φ_{g⁻¹}(p))).

To see item 2, using that Ad_* = ad and item 1, we obtain

[ξ, η]_M(p) = d/ds|_{s=0} (Ad(exp(sξ))η)_M(p) = d/ds|_{s=0} Φ_{exp(sξ)*}(η_M(Φ_{exp(−sξ)}(p))) = −(L_{ξ_M} η_M)(p) = −[ξ_M, η_M](p).

The following proposition gives information about the structure of the orbits of the G-action. Notice that for any p ∈ M, the isotropy group G_p is a closed subgroup of G because G_p = (Φ^p)⁻¹(p). Hence the quotient G/G_p is a smooth manifold such that the projection π : G → G/G_p is a smooth submersion (see [AM78, Cor. 4]).

Proposition 1.8. For every p ∈ M, the map Θ_p : G/G_p → M given by Θ_p(gG_p) := g · p is an injective immersion. In particular, Θ_p(G/G_p) = G · p is an immersed submanifold of M such that Θ_p is a diffeomorphism onto its image.
Proof. We have that Θ_p ∘ π = Φ_p. This implies that Θ_p is smooth because Φ_p is smooth. Indeed, as an easy consequence of the local normal form for submersions, π has smooth local sections about each point in G/G_p, and this immediately gives the smoothness of Θ_p. Injectivity is clear: Θ_p(gG_p) = Θ_p(hG_p) means Φ_g(p) = Φ_h(p), that is, h^{-1}g ∈ G_p, so gG_p = hG_p. To see that Θ_p is an immersion, let v ∈ T_{gG_p}(G/G_p) and take w ∈ T_gG with π_*w = v; since Θ_{p*}π_* = Φ_{p*}, we have that Θ_{p*}v = 0
if and only if w ∈ ker Φ_{p*}(g). First consider the case g = e. Then w ∈ ker Φ_{p*}(e) means

\[ \frac{d}{dt}\Big|_{t=0} \Phi_{\exp(tw)}(p) = 0, \]

and this is equivalent to Φ_{exp(tw)}(p) = p for all t ∈ R, since for any s ∈ R we have

\[ \frac{d}{dt}\Big|_{t=s} \Phi_{\exp(tw)}(p) = \Phi_{\exp(sw)*}\,\frac{d}{dt}\Big|_{t=0} \Phi_{\exp(tw)}(p) = 0. \]

Hence w ∈ ker Φ_{p*}(e) if and only if exp(tw) ∈ G_p for all t ∈ R, which implies v = π_*w = 0, and this proves the assertion for g = e.
If g ≠ e, then w ∈ ker Φ_{p*}(g) if and only if Φ_{g*}(Φ_{p*}(e) L_{g^{-1}*}w) = 0, which is equivalent to L_{g^{-1}*}w ∈ ker Φ_{p*}(e), since Φ_g is a diffeomorphism. Therefore, if Ψ denotes the smooth action of G on G/G_p by left translations,

\[ v = \pi_* w = \frac{d}{dt}\Big|_{t=0} \pi\big(g\exp(tL_{g^{-1}*}w)\big) = \Psi_{g*}\big(\pi_* L_{g^{-1}*}w\big) = 0. \]
From here we can deduce some useful characterizations of the isotropy Lie algebra g_p := T_eG_p of a point p ∈ M and the tangent space of a G-orbit.

Corollary 1.9. If p ∈ M then g_p = {ξ ∈ g : ξ_M(p) = 0} and T_p(G · p) = g_M(p).
Proof. By the proof of Proposition 1.8, if ξ ∈ g is such that ξ_M(p) = 0, then exp(tξ) ∈ G_p for all t ∈ R, and therefore ξ ∈ g_p. Conversely, if ξ ∈ g_p then π_*ξ = 0, so that ξ_M(p) = Φ_{p*}(e)ξ = Θ_{p*}(π_*ξ) = 0. On the other hand, by the definition of infinitesimal generator, eq. (1.1), one has that Φ_{p*}(e)(g) = g_M(p). Since Φ_{p*} = Θ_{p*} ∘ π_* is surjective onto T_p(G · p), because both Θ_{p*} and π_* are surjective (when thinking of Θ_p as a diffeomorphism between G/G_p and G · p), the second claim follows.
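For matrix Lie groups these objects become very concrete; the following standard facts are included only as an illustration and are not part of the source text at this point. If G ⊆ GL(n, R) is a matrix group, then Ad(g)ξ = gξg^{-1} and ad(ξ)η = ξη − ηξ; and if G acts on M = R^n by matrix multiplication, Φ_g(x) = gx, then

\[ \xi_M(x) = \frac{d}{dt}\Big|_{t=0} \exp(t\xi)\,x = \xi x, \]

so the infinitesimal generator of ξ is the linear vector field x ↦ ξx.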
Moment Maps
The concept of moment map is a generalization of the concept of momentum in physics, which appeared with the study of conserved quantities in mechanical systems. Let (M, ω) be a symplectic manifold and G a Lie group acting symplectically on M through Φ (that is, Φ g is a symplectomorphism of M on itself for each g ∈ G), and let g be the Lie algebra of G.
Definition 1.10. We say that a smooth map µ : M → g* is a moment map for the action Φ if for every ξ ∈ g we have that

\[ i_{\xi_M}\omega = d\hat\mu(\xi), \]

where μ̂ : g → C∞(M) is the comoment map, given by μ̂(ξ)(p) := µ(p)(ξ). In other words, every infinitesimal generator ξ_M is a Hamiltonian vector field with energy μ̂(ξ). If a moment map exists for the action Φ we say that the action is Hamiltonian. We say that a moment map µ is Ad*-equivariant, or simply equivariant, if it is equivariant with respect to the coadjoint action on g*, that is, if µ ∘ Φ_g = Ad*(g) ∘ µ for every g ∈ G.

Notice that µ : M → g* is smooth if and only if μ̂(ξ) ∈ C∞(M) for every ξ ∈ g. Indeed, if we fix {ξ_i}_i a basis for g and {λ^i}_i its dual basis for g*, we can write µ = µ_i λ^i, for some µ_i : M → R; then µ is smooth if and only if each µ_i is so, and it is immediate to see that µ_i = μ̂(ξ_i). Thus, the only condition for a moment map to exist is that every infinitesimal generator be a Hamiltonian vector field: if for every ξ ∈ g there is some f_ξ ∈ C∞(M) such that i_{ξ_M}ω = df_ξ, since the Hamiltonian functions are only defined up to additive constants, the correspondence ξ ↦ f_ξ can easily be made linear in ξ, so that for each p ∈ M, the map µ(p) : g → R defined by µ(p)(ξ) := f_ξ(p) lives in g*, defining, hence, a moment map for the action.
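As a minimal illustration of Definition 1.10 (this example is not in the source at this point; it anticipates the harmonic oscillator discussed later), let S¹ act on (R², dq ∧ dp) through the rotations θ_t(q, p) = (q cos t + p sin t, p cos t − q sin t). The infinitesimal generator of 1 ∈ Lie(S¹) ≅ R is X(q, p) = (p, −q), and

\[ i_X(dq \wedge dp) = p\,dp + q\,dq = d\Big(\tfrac{1}{2}(q^2 + p^2)\Big), \]

so μ̂(1) = ½(q² + p²) defines a moment map for the action (the opposite orientation of the circle gives the opposite sign).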
As we said, the original interest in moment maps comes from their relationship with conserved quantities. This is expressed in the translation of Noether's classical theorem on symmetries and conserved quantities to the language of moment maps. We say that a Lie group G acts by symmetries on a Hamiltonian system (M, ω, H) if it preserves H: if Φ * g H = H for every g ∈ G.
Theorem 1.11 (Noether). Let (M, ω, H) be a Hamiltonian system with a moment map µ arising from a Hamiltonian G-action Φ by symmetries. Then µ is a conserved quantity, that is, µ • θ t = µ, where θ t is the flow of X H .
Proof. By the definition of comoment map, µ is a conserved quantity if and only if μ̂(ξ) is a conserved quantity for every ξ ∈ g. On the one hand, we have that μ̂(ξ) ∘ θ_0 = μ̂(ξ), and on the other, that, at every point p ∈ M,

\[ \frac{d}{dt}\,\hat\mu(\xi)(\theta_t(p)) = d\hat\mu(\xi)\big(X_H(\theta_t(p))\big) = \omega(\xi_M, X_H)(\theta_t(p)) = -dH\big(\xi_M(\theta_t(p))\big) = 0, \]

where the last equality follows from the invariance of H under the action. Hence μ̂(ξ) ∘ θ_t = μ̂(ξ) and that ends the proof.
Before giving some concrete examples, we give a general strategy for constructing an equivariant moment map on exact symplectic manifolds. We say that θ is a potential for the symplectic structure ω if ω = −dθ.
Proposition 1.12. Let (M, ω) be an exact symplectic manifold with potential θ. Let G be a Lie group acting smoothly on M such that θ is invariant under the action: Φ_g^*θ = θ for every g ∈ G. Then the action is Hamiltonian with equivariant moment map given by the comoment map μ̂(ξ) := i_{ξ_M}θ.

Proof. The action is clearly symplectic because the differential commutes with any pullback. The map µ is obviously smooth since so are μ̂(ξ) for any ξ ∈ g. Because θ is invariant under the G-action, and by Cartan's magic formula,

\[ 0 = L_{\xi_M}\theta = i_{\xi_M}d\theta + d\,i_{\xi_M}\theta = -i_{\xi_M}\omega + d\hat\mu(\xi). \]

Therefore, µ is a moment map for the action. To see that it is equivariant, we compute, using again the invariance of θ and also item 1 of Proposition 1.7:

\[ \mu(\Phi_g(p))(\xi) = \theta_{\Phi_g(p)}\big(\xi_M(\Phi_g(p))\big) = \theta_{\Phi_g(p)}\big(\Phi_{g*}(\mathrm{Ad}(g^{-1})\xi)_M(p)\big) = \theta_p\big((\mathrm{Ad}(g^{-1})\xi)_M(p)\big) = \big(\mathrm{Ad}^*(g)\mu(p)\big)(\xi). \]

This construction can be particularized to the cotangent bundle.
Corollary 1.13. Let M be a smooth manifold with a smooth G-action Φ. Consider the lifted G-action on T*M given by g · α := α ∘ Φ_{g^{-1}*}, for α ∈ T*M.
Then the lifted action acts by symplectomorphisms on the canonical symplectic structure on T * M and gives rise to an equivariant moment map defined by µ(α)(ξ) = α(ξ M (π(α))), for α ∈ T * M , where π : T * M → M is the projection.
Proof. First of all, for every α ∈ T_p*M and g ∈ G, π(g · α) = Φ_g(π(α)), which also implies that π_*ξ_{T*M}(α) = ξ_M(π(α)). With this in mind, we can see that the tautological 1-form is invariant under the lifted action: for any v ∈ T_α(T*M),

\[ \big((g\cdot)^*\theta\big)_\alpha(v) = \theta_{g\cdot\alpha}\big((g\cdot)_*v\big) = (g\cdot\alpha)\big(\pi_*(g\cdot)_*v\big) = \alpha\big(\Phi_{g^{-1}*}\Phi_{g*}\,\pi_*v\big) = \alpha(\pi_*v) = \theta_\alpha(v). \]

This shows that the action indeed acts by symplectomorphisms and that, by Proposition 1.12, the action is Hamiltonian with equivariant moment map µ(α)(ξ) = θ_α(ξ_{T*M}(α)) = α(ξ_M(π(α))).

The following properties of equivariant maps µ : M → g* will be very useful throughout the text.
Proposition 1.14. Let µ : M → g* be an equivariant moment map. Then, for any p ∈ M and ξ, η ∈ g:

1. µ_*(p) ξ_M(p) = ξ_{g*}(µ(p));
2. dμ̂(ξ)(η_M) = μ̂([ξ, η]).

Proof. Item 1 follows from the equivariance of µ, and then, by Corollary 1.9, µ_*(p) maps T_p(G · p) = g_M(p) into T_{µ(p)}(G · µ(p)). Item 2 is a straightforward computation:

\[ d\hat\mu(\xi)\big(\eta_M(p)\big) = \frac{d}{dt}\Big|_{t=0} \mu\big(\Phi_{\exp(t\eta)}(p)\big)(\xi) = \frac{d}{dt}\Big|_{t=0} \mu(p)\big(\mathrm{Ad}(\exp(-t\eta))\xi\big) = \mu(p)\big(-\mathrm{ad}(\eta)\xi\big) = \hat\mu([\xi, \eta])(p). \]

We now turn to the examples.
Example 1.15. Consider R³ × R³ ≅ T*R³ with its canonical symplectic structure, acted upon by R³ by translations: a · (q, p) := (q + a, p). Obviously the action preserves the symplectic potential p_i dq^i. Proposition 1.12 then asserts that µ(q, p)(a) = (i_{a_M}(p_i dq^i))(q, p) = ⟨p, a⟩, with ⟨·, ·⟩ the usual inner product on R³, is an equivariant moment map for the action. Identifying R³ with its dual through this inner product, we obtain that the moment map is just µ(q, p) = p. This is the linear momentum in classical mechanics.
Example 1.16. Consider now R³ × R³ ≅ T*R³ acted upon by SO(3) through R · (q, p) := (Rq, Rp). This action also preserves the potential, since for any R ∈ SO(3), ⟨Rp, d(Rq)⟩ = ⟨p, dq⟩, and the moment map is, then, µ(q, p)(X) = (i_{(Xq, Xp)}(p_i dq^i))(q, p) = ⟨p, Xq⟩.
The Lie algebra so(3) can be identified with R³ via

\[ \xi = (\xi_1, \xi_2, \xi_3) \longmapsto X_\xi = \begin{pmatrix} 0 & -\xi_3 & \xi_2 \\ \xi_3 & 0 & -\xi_1 \\ -\xi_2 & \xi_1 & 0 \end{pmatrix}, \tag{1.2} \]

so that X_ξ q = ξ × q, where × is the usual cross product on R³. With this identification, µ(q, p)(ξ) = ⟨p, ξ × q⟩ = ⟨q × p, ξ⟩, and with the further identification of R³ with its dual, we obtain µ(q, p) = q × p. This is the angular momentum in classical mechanics.
Example 1.17. Let G be any Lie group acted upon by itself by left translations. For any ξ ∈ g, the infinitesimal generator at g ∈ G is

\[ \xi_G(g) = \frac{d}{dt}\Big|_{t=0} \exp(t\xi)\,g = R_{g*}\,\xi, \]

where R_g is right translation by g. By Corollary 1.13, the moment map of the lifted action to T*G is µ(λ) = λ ∘ R_{g*} for any λ ∈ T_g*G.
The cotangent bundle T*G can be trivialized through

\[ G \times \mathfrak g^* \longrightarrow T^*G, \qquad (g, \alpha) \longmapsto \alpha \circ R_{g^{-1}*} \in T_g^*G, \]

with inverse λ ↦ (g, λ ∘ R_{g*}) for λ ∈ T_g*G. With this identification, the moment map is just projection onto the second factor: µ(g, α) = α.

We also present a couple of examples on non-exact symplectic manifolds.
Example 1.18. Consider (S², ω), with ω the area form on S², acted upon by SO(3) by matrix multiplication. The form ω can be expressed in simple form as ω_x(u, v) = ⟨x, u × v⟩, for x ∈ S² and u, v ∈ T_xS². Then, using the identification of so(3) with R³ (1.2), for any u ∈ T_xS² and ξ ∈ R³,

\[ (i_{\xi_{S^2}}\omega)_x(u) = \omega_x(\xi \times x, u) = \langle x, (\xi \times x) \times u \rangle. \]

Using the identity a × (b × c) = ⟨a, c⟩b − ⟨a, b⟩c, we can rewrite this as

\[ (i_{\xi_{S^2}}\omega)_x(u) = \langle u, \xi \rangle \langle x, x \rangle - \langle u, x \rangle \langle x, \xi \rangle = \langle u, \xi \rangle, \]

so that µ(x)(ξ) = ⟨x, ξ⟩ defines a moment map for the action. Identifying again R³ with its dual we obtain that the moment map is just the inclusion S² → R³.
Example 1.19. Let (M, ω) be a symplectic manifold and X_H a complete Hamiltonian vector field. The flow of X_H gives rise to a Hamiltonian R-action on M with H as moment map. Indeed, for any a ∈ R ≅ Lie(R) and p ∈ M, the infinitesimal generator is a_M(p) = aX_H(p), so that i_{a_M}ω = a dH = d(aH) = dμ̂(a), with μ̂(a) := aH.

Moment maps are not always submersions, as the next example shows.
Example 1.20. Let SO(3) act on S² × S² by the diagonal action: R · (x, y) := (Rx, Ry). By Example 1.18, the moment map for such an action (identifying so(3)* with R³) is just µ(x, y) = x + y, with image µ(S² × S²) = B(0, 2), the closed ball of radius 2 and center 0. Hence, for any x ∈ S², since we can identify T_xS² with span(x)^⊥, using the usual Euclidean metric on R³, we have that µ_*(x, −x)(T_{(x,−x)}(S² × S²)) = T_xS² + T_{−x}S² = span(x)^⊥ ≠ R³, so that µ fails to be a submersion at the points of µ^{-1}(0).
Poisson Geometry
The notion of a Poisson manifold is a generalization of the notion of a symplectic manifold. Its origins lie in the study of analytical mechanics, as did those of symplectic geometry. Poisson structures were introduced by A. Lichnerowicz in [Lic77] and have been extensively studied since then.
Recall the basic definitions: a Poisson manifold is a pair (M, {·, ·}), where {·, ·} : C∞(M) × C∞(M) → C∞(M) is a bilinear, skew-symmetric bracket satisfying the Jacobi identity and the Leibniz rule {f, gh} = {f, g}h + g{f, h}. A diffeomorphism F between Poisson manifolds (M, {·, ·}) and (M′, {·, ·}′) is called canonical if F*{f, g}′ = {F*f, F*g} for every f, g ∈ C∞(M′). If such a canonical diffeomorphism exists we say that M and M′ are canonically diffeomorphic. The bracket {·, ·} is called the Poisson bracket.
The Leibniz rule asserts that {f, ·} is a derivation of C ∞ (M ), and so there is a vector field X f ∈ X(M ) such that X f = {f, ·}. Such a vector field is called the Hamiltonian vector field associated to f . As in the symplectic case, we say that X ∈ X(M ) is Hamiltonian if X = X f for some f ∈ C ∞ (M ), and say that f is a Hamiltonian or energy function for X.
Since {·, ·} is bilinear, it is a derivation in both arguments, and so its value on f, g ∈ C∞(M) depends only on the differentials df and dg. That is, there is some bivector field Π ∈ X²(M) such that

\[ \{f, g\} = \Pi(df, dg). \tag{1.3} \]

Conversely, given a bivector field Π ∈ X²(M), we can define a skew-symmetric bilinear bracket fulfilling the Leibniz rule by eq. (1.3). We say that a bivector field Π ∈ X²(M) is Poisson if the associated bracket is Poisson, i.e., if it satisfies the Jacobi identity. It is apparent that it is enough to verify the Jacobi identity on functions whose differentials span T_p*M at each point p ∈ M. As we did in the symplectic case, consider the bundle morphism Π : T*M → TM given by Π(λ) := i_λΠ. Since there are no nondegeneracy conditions on Π, this morphism may fail to be an isomorphism. The Hamiltonian vector field X_f may hence be written as X_f = Π(df).
We give now some basic examples of Poisson manifolds.
Example 1.23. Consider R^{2n} with Cartesian coordinates (q¹, ..., qⁿ, p_1, ..., p_n). Then the bracket defined by

\[ \{f, g\} := \frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q^i} - \frac{\partial f}{\partial q^i}\frac{\partial g}{\partial p_i} \]

is Poisson, and is called the standard Poisson structure on R^{2n}. It is the Poisson bracket one usually encounters in classical mechanics on R^{2n} (see [LL76, §42]). The corresponding Poisson bivector field is obviously Π = ∂/∂p_i ∧ ∂/∂q^i.

Example 1.24. Let M be a smooth manifold and ω ∈ Ω²(M) nondegenerate. For f ∈ C∞(M) let X_f be the unique vector field with i_{X_f}ω = df, and define the bracket of f, g ∈ C∞(M) as {f, g} := −ω(X_f, X_g) = X_f g. We claim that it is Poisson if and only if dω = 0. In particular, if (M, ω) is a symplectic manifold, then it is also a nondegenerate Poisson manifold (meaning that its Poisson bivector field is nondegenerate). The bracket is evidently bilinear and alternating. If f, g, h ∈ C∞(M), by the Leibniz rule for differentials, X_{gh} = gX_h + hX_g, and hence {f, gh} = g{f, h} + h{f, g}. Lastly, by the formula for the exterior differential of a 2-form, we have that

\[ d\omega(X_f, X_g, X_h) = X_f\,\omega(X_g, X_h) - X_g\,\omega(X_f, X_h) + X_h\,\omega(X_f, X_g) - \omega([X_f, X_g], X_h) + \omega([X_f, X_h], X_g) - \omega([X_g, X_h], X_f). \]

By Cartan's magic formulas, and taking into account that d i_{X_g}ω = d(dg) = 0,

\[ i_{[X_g, X_h]}\omega = L_{X_g} i_{X_h}\omega - i_{X_h} L_{X_g}\omega = d(X_g h) - i_{X_h} i_{X_g}\,d\omega, \]

and similarly for ω([X_f, X_g], ·) and ω([X_f, X_h], ·). Therefore, substituting and using the skew-symmetry of dω, one obtains

\[ d\omega(X_f, X_g, X_h) = \{f, \{g, h\}\} + \{g, \{h, f\}\} + \{h, \{f, g\}\}. \]

Thus, it is clear that {·, ·} fulfills the Jacobi identity if and only if dω = 0, since by nondegeneracy the Hamiltonian vector fields span the tangent space at each point. We conclude that a symplectic manifold can be equivalently considered as a pair (M, ω), with ω a nondegenerate closed 2-form, or as a pair (M, Π), with Π a nondegenerate Poisson bivector field. The corresponding bundle isomorphisms are related by Π = ω^{-1}. In addition, a diffeomorphism F ∈ Diff(M) is canonical if and only if it is a symplectomorphism, since if F is a symplectomorphism or canonical, it is easily seen that X_{F*f} = F^{-1}_* X_f, which immediately implies the other property. Note also that the assignment f ↦ X_f satisfies

\[ X_{\{f, g\}} = [X_f, X_g]. \tag{1.4} \]

If (M, ω) is a symplectic manifold carrying a Hamiltonian action of some Lie group G with equivariant moment map µ, then, because of item 2 of Proposition 1.7, eq. (1.4) implies that d{μ̂(ξ), μ̂(η)} = −dμ̂([ξ, η]), i.e., {μ̂(ξ), μ̂(η)} + μ̂([ξ, η]) is a constant function. By Proposition 1.14, this constant is, for any p ∈ M,

\[ \{\hat\mu(\xi), \hat\mu(\eta)\}(p) + \hat\mu([\xi, \eta])(p) = d\hat\mu(\eta)\big(\xi_M(p)\big) + \hat\mu([\xi, \eta])(p) = \hat\mu([\eta, \xi])(p) + \hat\mu([\xi, \eta])(p) = 0. \]

Hence, in fact we have that {μ̂(ξ), μ̂(η)} = −μ̂([ξ, η]), that is, the comoment map is a Lie algebra antihomomorphism between (g, [·, ·]) and (C∞(M), {·, ·}). Conversely, it can be seen that if the comoment map is a Lie algebra antihomomorphism, then the moment map is equivariant with respect to the elements of the identity component of G (see [MS98, Lem. 5.16]). It can also be seen that the obstruction for a moment map to be equivariant is of cohomological type, concerning the vanishing of a particular cocycle class in the Lie algebra cohomology of g (see [MS98, Lem. 5.15] and comments afterwards).
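As a quick sanity check of the conventions in Example 1.23 (a worked computation, not part of the source), with {f, g} = ∂f/∂p_i ∂g/∂q^i − ∂f/∂q^i ∂g/∂p_i one finds

\[ \{p_j, q^i\} = \delta_j^i, \qquad \{q^i, q^j\} = \{p_i, p_j\} = 0, \qquad \{H, q^i\} = \frac{\partial H}{\partial p_i}, \qquad \{H, p_i\} = -\frac{\partial H}{\partial q^i}, \]

so the evolution of any observable f along the flow of X_H is given by \(\dot f = \{H, f\}\), consistent with X_f = {f, ·}.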
Let G be a Lie group acting smoothly through Φ on a Poisson manifold (M, {·, ·}). We say that the action is canonical if Φ g is a canonical diffeomorphism for every g ∈ G. Furthermore, as in the symplectic case, we say that the action is Hamiltonian if there is a moment map for the action, meaning a smooth map µ : M → g * such that the comoment map satisfies Π(dμ(ξ)) = ξ M , for every ξ ∈ g.
The underlying idea is the same as in the symplectic case: for each ξ ∈ g, the vector field ξ M is Hamiltonian withμ(ξ) as energy function.
Reduction
It is a common practice in physics to "divide out symmetries" when trying to solve the dynamics of a mechanical system. For instance, if (R³ × R³, ω_0, H) is a Hamiltonian system such that H only depends on ‖q‖ and ‖p‖, where (q, p) ∈ R³ × R³, then it is invariant under the action of SO(3) described in Example 1.16, and the angular momentum L(q, p) := q × p gives rise to three conserved quantities by Noether's theorem. Hence, we can limit our study of the dynamics to L^{-1}(ξ), for any ξ ∈ R³ such that L^{-1}(ξ) defines a smooth manifold. Moreover, there is still some symmetry left on the level set L^{-1}(ξ), meaning that there is a subgroup of SO(3) still acting on L^{-1}(ξ). Therefore, we can quotient L^{-1}(ξ) by this subgroup, identifying all the points in the same orbit, and obtain a "reduced" space, which will have fewer degrees of freedom than the original one, and which will presumably be easier to solve. We can then "lift" the solved motion to the unreduced space and obtain a motion therein.
In this chapter we study, under some regularity conditions, the reduction of manifolds by symmetry groups in both the symplectic and the Poisson cases.
Marsden-Weinstein Reduction
Symplectic reduction was introduced by Marsden and Weinstein in 1974 in a short paper [MW74].
First of all, we recall that a G-action on a manifold M, where G is some Lie group, is said to be free if the isotropy group of any element is trivial. Hence, by Proposition 1.8, each orbit is an immersed submanifold diffeomorphic to G. The action is said to be proper if the map (g, p) ↦ (g, Φ_g(p)), for g ∈ G and p ∈ M, is proper, in the sense that preimages of compact sets are compact. It is well known (see for instance [Lee12, Thm. 21.10]) that if the action is both free and proper, then M/G has a unique smooth structure such that the projection π : M → M/G is a submersion. In this case, ker π_*(p) = T_p(G · p) for every p ∈ M.

Suppose now that (M, ω) is a symplectic manifold, and that the action is Hamiltonian with associated equivariant moment map µ : M → g*. We say that α ∈ g* is a clean value of µ if µ^{-1}(α) is a submanifold such that T_pµ^{-1}(α) = ker µ_*(p). For instance, if α is a regular value, then it is evidently a clean value.
We first prove a useful technical lemma. By the superscript ° we denote the annihilator of a vector subspace.
Lemma 2.1. Let µ : M → g* be an equivariant moment map for a Hamiltonian G-action on (M, ω). Then:

1. For every p ∈ M, ker µ_*(p) = (g_M(p))^ω, the symplectic orthogonal of g_M(p).

2. If α ∈ g* is a clean value for µ and p ∈ µ^{-1}(α), then T_pµ^{-1}(α) ∩ T_p(G · p) = T_p(G_α · p).

Proof. Item 1 follows from

\[ \big(\mu_*(p)v\big)(\xi) = d\hat\mu(\xi)_p(v) = \omega_p\big(\xi_M(p), v\big), \]

for v ∈ T_pM and ξ ∈ g, and the nondegeneracy of ω.
To see item 2, since α is a clean value, ξ_M(p) ∈ T_pµ^{-1}(α) if and only if µ_*ξ_M(p) = 0. By Proposition 1.14, this is precisely ξ_{g*}(α) = 0, which in turn is equivalent, by Corollary 1.9, to ξ ∈ g_α.

To motivate the construction of the symplectic reduction, consider α a clean value of µ. For any p ∈ µ^{-1}(α), Lemma 2.1 shows that the vectors in T_p(G_α · p) lie in the kernel of the 2-form i*ω, where i : µ^{-1}(α) → M is the inclusion. One would think, then, that by dividing by G_α we will obtain a symplectic form on µ^{-1}(α), and this is precisely what happens. This makes sense since µ^{-1}(α) is G_α-invariant by the equivariance of µ.
Theorem 2.2 (Marsden-Weinstein). Let (M, ω) be a symplectic manifold with an equivariant moment map µ : M → g* arising from a Hamiltonian G-action. Let α ∈ g* be a clean value of µ such that G_α acts freely and properly on µ^{-1}(α). Then there is a unique symplectic form ω_α on M//_αG := µ^{-1}(α)/G_α such that i*ω = π*ω_α, where i : µ^{-1}(α) → M is the inclusion and π : µ^{-1}(α) → M//_αG the projection.

Proof. If ω_α exists, then its value is wholly determined by the relation i*ω = π*ω_α, so uniqueness is clear. To see existence, for any π_*u, π_*u′ ∈ T_{π(p)}(M//_αG), with u, u′ ∈ T_pµ^{-1}(α), define

\[ (\omega_\alpha)_{\pi(p)}(\pi_*u, \pi_*u') := \omega_p(u, u'). \]

To see that it is well defined, assume that ũ ∈ T_pµ^{-1}(α) is such that π_*ũ = π_*u. Then u − ũ ∈ ker π_* = T_p(G_α · p), and by the comments preceding this theorem, u − ũ ∈ ker i*ω and, hence, ω_p(ũ, u′) = ω_p(u, u′). On the other hand, suppose that q ∈ µ^{-1}(α) is such that π(q) = π(p), i.e., there is some g ∈ G_α such that p = Φ_g(q), and that there are v, v′ ∈ T_qµ^{-1}(α) with π_*v = π_*u and π_*v′ = π_*u′. Since the action is symplectic and Φ_g commutes with i, we finally obtain that ω_q(v, v′) = ω_p(Φ_{g*}v, Φ_{g*}v′), and since π_*Φ_{g*}v = π_*v = π_*u (and similarly for v′), by the first part of the argument this equals ω_p(u, u′); hence ω_α is well defined. By the previous comments, it is clear that ω_α is nondegenerate. Because the differential commutes with pullbacks, we have that π*dω_α = i*dω = 0. Because π is a submersion, this is equivalent to dω_α = 0, and so ω_α indeed defines a symplectic structure on M//_αG.
It is useful to keep in mind the commutative diagram formed by the inclusion i : µ^{-1}(α) → M and the projection π : µ^{-1}(α) → M//_αG, along which i*ω = π*ω_α. The manifold M//_αG is called the symplectic reduced space. Of course, everything works fine because we are in the most friendly scenario for symplectic reduction. As shown in [LT97], if the action is not taken to be free, then M//_αG is not a manifold, but an orbifold (see this same paper for the concept of orbifold, or look at [Sat57]). If α is a singular value, then M//_αG was shown in [SL91] to be a symplectic stratified space, which, roughly speaking, is a collection of symplectic manifolds that fit together nicely (the theorem stating that the reduced space decomposes as a disjoint union of symplectic manifolds is presented as Theorem 3.7 in Section 3.2). The book [OR04] is a comprehensive exposition of reduction theory in general symplectic and Poisson manifolds, with no regularity conditions whatsoever. For our purposes, though, the regular Marsden-Weinstein theorem is enough.
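A useful consistency check, stated here only for orientation and under the additional assumption that α is a regular value of µ, is the dimension count

\[ \dim M/\!/_\alpha G = \dim M - \dim G - \dim G_\alpha, \]

since dim µ^{-1}(α) = dim M − dim g* = dim M − dim G, and the quotient by the free G_α-action removes dim G_α further dimensions.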
One of the appeals of symplectic reduction is that by taking a simple symplectic manifold M and reducing it with respect to some action we can obtain symplectic structures on relatively complicated objects for which a direct presentation of the symplectic structure would seem rather magical or special. We illustrate this fact with two examples. Take (M, ω) = (R^{2n}, ω_0), the standard symplectic structure, and take the harmonic oscillator Hamiltonian H(q, p) := ½(‖q‖² + ‖p‖²). Then by Example 1.4 we have that X_H(q, p) = (p, −q), whose flow is θ_t(q, p) = (q cos t + p sin t, p cos t − q sin t).
Since θ_t is 2π-periodic in t, it defines a symplectic action of S¹ on M, which is free away from the origin, and in particular on H^{-1}(1/2). Since S¹ is compact, the action is proper, and since 1/2 is a regular value of H, the reduced space H^{-1}(1/2)/S¹ is a 2(n − 1)-dimensional symplectic manifold.
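In fact, by a standard identification not carried out in the text: H^{-1}(1/2) is the unit sphere S^{2n−1} ⊆ R^{2n} ≅ C^n, the S¹-action corresponds to scalar multiplication by unit complex numbers, and the quotient S^{2n−1}/S¹ is the complex projective space CP^{n−1}; for n = 2 this is the Hopf fibration S³ → S². The dimension count of the previous remark gives 2n − 1 − 1 = 2(n − 1), as it should.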
Before giving the second example, we need the expression for the canonical symplectic form on the cotangent bundle of a Lie group.
Example 2.5. We retake Example 1.17. The moment map was given by µ(λ) = λ ∘ R_{g*} for λ ∈ T_g*G. Through the identification of T*G with G × g*, the moment map can be written as µ(g, α) = α, which shows that any value α ∈ g* is regular. The lifted action to the cotangent bundle can be written as h · (g, α) = (hg, Ad*(h)α), so that µ^{-1}(α) = G × {α} and T*G//_αG = G/G_α, where G/G_α here is the orbit space by left translations, which equals the set of right cosets. Right cosets are in bijective correspondence with left cosets by the map G_αg ↦ g^{-1}G_α, so that, by Proposition 1.8, in fact T*G//_αG ≅ G · α. To see that the action is proper, by [Lee12, Prop. 21.5], it suffices to see that if (g_i) and (h_i) are sequences in G such that (g_i) and (g_ih_i) converge, then a subsequence of (h_i) converges. But this is obvious, since if g := lim_i g_i and k := lim_i g_ih_i, then, by continuity of the product and inverse maps on G, h_i = g_i^{-1}(g_ih_i) → g^{-1}k. On the other hand, the action is obviously free. By Theorem 2.2, G · α has a symplectic structure.
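As a concrete instance (a standard fact, added only for illustration): for G = SO(3), identifying so(3)* with R³ as in (1.2), the coadjoint orbits G · α are the spheres centered at the origin (and the origin itself), so the construction above equips each such sphere with a symplectic structure, recovering, up to normalization, the area form of Example 1.18.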
Reduced Symplectic Dynamics
As has already been said, the interest in reduction is to simplify a given system. So far we have reduced the symplectic manifold (the phase space), but from the dynamical point of view we are also interested in how the dynamics on the manifold behave under the reduction process. The answer is that we obtain reduced dynamics in the reduced system, related in a natural way to the dynamics on the original system.

Theorem 2.6 (Reduced symplectic dynamics). Let (M, ω, H) be a Hamiltonian system with moment map µ arising from a Hamiltonian G-action Φ by symmetries. Let α ∈ g* be a clean value of µ such that G_α acts freely and properly on µ^{-1}(α). Then the flow of X_H leaves µ^{-1}(α) invariant and induces a Hamiltonian flow on M//_αG with Hamiltonian h given by i*H = π*h, where i : µ^{-1}(α) → M is the inclusion and π : µ^{-1}(α) → M//_αG the projection. In particular, X_H and X_h are π-related. The function h is called the reduced Hamiltonian or reduced energy.
Proof. By Noether's theorem (Theorem 1.11), µ is a conserved quantity, so that the flow of X_H, say Θ_t, preserves µ^{-1}(α). Hence, it gives rise to a well-defined flow θ_t on M//_αG such that π ∘ Θ_t = θ_t ∘ π. Let X be the vector field associated to θ_t. Then X_H and X are π-related: for any p ∈ µ^{-1}(α),

\[ \pi_* X_H(p) = \pi_*\,\frac{d}{dt}\Big|_{t=0}\Theta_t(p) = \frac{d}{dt}\Big|_{t=0}\theta_t(\pi(p)) = X(\pi(p)). \]

The invariance of H under the G-action ensures the existence of a smooth function h on M//_αG uniquely determined by i*H = π*h. For any p ∈ µ^{-1}(α) and v ∈ T_pµ^{-1}(α), we have that

\[ (\omega_\alpha)_{\pi(p)}\big(X(\pi(p)), \pi_*v\big) = \omega_p\big(X_H(p), v\big) = dH_p(v) = d(\pi^*h)_p(v) = dh_{\pi(p)}(\pi_*v). \]

Thus, X = X_h, and we are done.
The ultimate interest in the dynamics is, of course, to recover the motion of the original system (recall that a motion of the Hamiltonian system (M, ω, H) is an integral curve of X H ) from the motion of the reduced system. We follow [AM78] in this exposition.
Lemma 2.7. Let M be a manifold with a smooth G-action Φ, and let p ∈ M and g ∈ G.
For v ∈ T_pM and ξ ∈ g, and identifying T_{(g,p)}(G × M) ≅ T_gG ⊕ T_pM in the natural way, we have that

\[ \Phi_*(g, p)\big(L_{g*}\xi \oplus v\big) = \Phi_{g*}\big(v + \xi_M(p)\big). \]

Proof. By the Leibniz rule for pushforwards,

\[ \Phi_*(g, p)\big(L_{g*}\xi \oplus v\big) = \frac{d}{dt}\Big|_{t=0}\Phi_{g\exp(t\xi)}(p) + \Phi_{g*}v = \Phi_{g*}\,\xi_M(p) + \Phi_{g*}v. \]

Theorem 2.8 (Lifted motion). Let (M, ω, H) be a Hamiltonian system with moment map µ arising from a Hamiltonian G-action Φ by symmetries. Let α ∈ g* be a clean value of µ such that G_α acts freely and properly on µ^{-1}(α), and let (M//_αG, ω_α, h) be the reduced system. Let γ : I → M//_αG be a motion of the reduced system and β : I → M any smooth curve such that π ∘ β = γ. Let ξ : I → g_α be a smooth curve such that

\[ \xi(t)_M(\beta(t)) = X_H(\beta(t)) - \dot\beta(t), \tag{2.2} \]

and g : I → G a smooth curve such that

\[ \dot g(t) = L_{g(t)*}\,\xi(t). \tag{2.3} \]

Then the curve defined by Γ(t) := Φ_{g(t)}(β(t)) is a motion of the original system.

Proof. The fact that γ is a motion for the reduced system ensures the existence and uniqueness of the solution to eq. (2.2). Granting the smoothness of ξ for the moment, Lemma 2.7 gives

\[ \dot\Gamma(t) = \Phi_*\big(g(t), \beta(t)\big)\big(\dot g(t) \oplus \dot\beta(t)\big) = \Phi_{g(t)*}\big(\dot\beta(t) + \xi(t)_M(\beta(t))\big) = \Phi_{g(t)*} X_H(\beta(t)) = X_H(\Gamma(t)), \]

where the last equality uses that X_H is G-invariant, H being so.
To see the smoothness, write ξ(t) = ξ^i(t) ξ_i, for {ξ_i} a basis of g_α and some functions ξ^i : I → R. It is enough to see the smoothness of each ξ^i. We have that ξ(t)_M(β(t)) = ξ^i(t)(ξ_i)_M(β(t)). Since for each t the restriction of Φ_{β(t)*} to g_α is injective, as has been shown in the previous paragraph, the vectors {(ξ_i)_M(β(t))} are linearly independent. In addition, (ξ_i)_M(β(t)) is the composition of the smooth maps β : I → M and (ξ_i)_M : M → TM, so that (ξ_i)_M(β(t)) is smooth in t. Since ξ^i(t)(ξ_i)_M(β(t)) is also smooth in t, necessarily the functions ξ^i are smooth, and this ends the proof.
The problem of finding the motion on the original system, therefore, reduces to solving the algebraic equation (2.2) and then the differential equation (2.3).
Recall that by Example 1.24 any symplectic manifold is a Poisson manifold with the Poisson bracket {f, g} = −ω(X f , X g ). One could naturally ask how the Poisson brackets on M and on M/ / α G are related. Let f, h ∈ C ∞ (M/ / α G) and let F, H ∈ C ∞ (M ) be G-invariant extensions of π * f and π * h, respectively.
Using Theorem 2.6, we have that X F is π-related to X f and X H is π-related to X h . Hence, it is immediate to see that if we write {·, ·} α for the Poisson bracket on M/ / α G, π * {f, h} α = i * {F, H}.
Marsden-Ratiu Reduction
Poisson reduction as will be presented here was introduced by Marsden and Ratiu in 1986 in a short paper [MR86]. The setting is the following: (M, {·, ·}) is a Poisson manifold, N ⊆ M is a submanifold, and E ⊆ TM|_N is a subbundle such that E ∩ TN is an integrable distribution on N; we write F for the induced foliation and assume that the leaf space N/F is a smooth manifold with the projection π : N → N/F a submersion.

Definition 2.9. The triple (M, N, E) is called Poisson reducible if there is a Poisson structure {·, ·}_{N/F} on the leaf space N/F such that for any f, g ∈ C∞(N/F) and F, G ∈ C∞(M) local extensions of π*f and π*g, respectively, with dF(E) = dG(E) = 0, we have that

\[ \pi^*\{f, g\}_{N/F} = i^*\{F, G\}, \tag{2.4} \]

where i : N → M is the inclusion.

When we say that F and G are local extensions of π*f and π*g with dF(E) = dG(E) = 0 we mean that this last condition must be satisfied locally but need not be satisfied throughout all of N. That is to say, when evaluating {f, g}_{N/F} at π(p) ∈ N/F, the extensions F and G must satisfy dF(E) = dG(E) = 0 only in a neighborhood of p. Note that such local extensions always exist. Indeed, taking a slice chart for N about p ∈ N (that is, a chart (U, φ) such that p ∈ U and φ(U ∩ N) = φ(U) ∩ (R^n × {0}), where dim N = n), it suffices to see the existence of local extensions in the case where M = R^m for some m, N = R^n × {0} and E is a distribution of k-planes on N (k is the rank of E). If we denote by π_1 : R^n × R^{m−n} → R^n the projection onto the first component and we are given a smooth function f on N constant on the leaves of the foliation induced by E ∩ TN, then F = π_1^*f is an extension of f with dF(E) = 0.
The reduction theorem states a necessary and sufficient condition for a triple (M, N, E), where E is a Poisson distribution over N, to be Poisson reducible. We denote by E° the annihilator of E, defined pointwise as (E_p)° ⊆ T_p*M.

Theorem 2.10. The triple (M, N, E) is Poisson reducible if and only if Π(E°) ⊆ TN + E.

Proof. If (M, N, E) is Poisson reducible, let p ∈ N, λ ∈ E_p° and F ∈ C∞(M) such that dF_p = λ and dF(E) = 0. Let β ∈ T_pN° ∩ E_p° = (T_pN + E_p)° and let G ∈ C∞(M) be an extension of the zero function on N such that dG_p = β and dG(E) = 0. Since dF(E) = 0, we have that F is constant on the leaves of F, and, hence, it descends to a smooth function f ∈ C∞(N/F) such that π*f = F on a neighborhood of p. Since G extends the zero function on N, eq. (2.4) gives

\[ 0 = \{f, 0\}_{N/F}(\pi(p)) = \{F, G\}(p) = \Pi_p(dF_p, dG_p) = \beta(\Pi(\lambda)), \]

and since β ∈ (T_pN + E_p)° was arbitrary, Π(λ) ∈ ((T_pN + E_p)°)° = T_pN + E_p.

Conversely, suppose Π(E°) ⊆ TN + E, and let f, g ∈ C∞(N/F) and F, G ∈ C∞(M) be local extensions of π*f and π*g, respectively, such that dF(E) = dG(E) = 0 about p ∈ N. Since E is a Poisson distribution, by (PD3) we know that {F, G} is constant along the leaves of F and so gives rise to a well-defined function on N/F, {f, g}_{N/F}, such that π*{f, g}_{N/F} = i*{F, G}. Bilinearity, skew-symmetry and the Leibniz rule for {·, ·}_{N/F} are inherited directly from {·, ·}. For the Jacobi identity, we have already seen that {F, G} is a suitable local extension for π*{f, g}_{N/F}, so that for any h ∈ C∞(N/F) and its extension H ∈ C∞(M) with dH(E) = 0,

\[ \pi^*\{\{f, g\}_{N/F}, h\}_{N/F} = i^*\{\{F, G\}, H\}. \]

Hence, the Jacobi identity is also inherited from {·, ·}.
Also in this Poisson context, the book [OR04] explores what can be retained when the regularity conditions are dropped.
We present now some examples of Poisson reduction that are of interest. As with symplectic reduction, Poisson reduction also allows us to justify otherwise special Poisson structures on some objects. To see this, though, we first need the expression of the Poisson bracket on the cotangent bundle of a Lie group.
Proposition 2.12. Let G be a Lie group and g its Lie algebra. Identify T*G with G × g* via right translations. Let (g, α) ∈ G × g* and define i_α : G → G × g* by i_α(h) := (h, α) and i_g : g* → G × g* by i_g(β) := (g, β); for F ∈ C∞(G × g*) write F_α := F ∘ i_α and F_g := F ∘ i_g, and similarly for H. The canonical Poisson structure on T*G can then be expressed in terms of these partial functions, where the differentials of F_g and H_g, which are linear functionals on g*, are to be regarded as elements of g.
Example 2.13. Take Example 2.11 with M = T * G ∼ = G × g * and right translation lifted to T * G as a G-action, which acts through (g, α) · h = (gh, α).
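For orientation (the derivation is standard and is not reproduced here), the resulting reduced bracket on g* takes, up to a sign that depends on whether left or right translations are used in the identification T*G ≅ G × g*, the form

\[ \{f, h\}(\alpha) = \pm\Big\langle \alpha, \Big[\frac{\delta f}{\delta \alpha}, \frac{\delta h}{\delta \alpha}\Big] \Big\rangle, \]

where δf/δα ∈ g denotes the differential of f at α ∈ g*, regarded as an element of g via g** ≅ g.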
This Poisson structure on g * is known as the Lie-Poisson structure on g * . It was first found by S. Lie [Lie93] and then rediscovered by Berezin [Ber67], both writing its expression in local coordinates. For a further account on the Lie-Poisson structure, see [Wei83].
Reduced Poisson Dynamics
As in the symplectic case, dynamics can also be reduced in the Poisson context. If N ⊆ M is a submanifold, we say that a function H ∈ C∞(M) preserves N if its Hamiltonian vector field X_H is tangent to N.

Theorem (Reduced Poisson dynamics). Let (M, N, E) be Poisson reducible and let H ∈ C∞(M) preserve N, be constant on the leaves of F and satisfy dH(E) = 0 locally. Then the flow of X_H induces a flow on N/F which is Hamiltonian with Hamiltonian h determined by i*H = π*h.

Proof. Since N is preserved by X_H, clearly its flow, say Θ_t, leaves N invariant. Hence, it gives rise to a well-defined flow θ_t on N/F such that π ∘ Θ_t = θ_t ∘ π. Let X be the vector field associated to θ_t. Then X_H and X are π-related: for any p ∈ N,

\[ \pi_* X_H(p) = \pi_*\,\frac{d}{dt}\Big|_{t=0}\Theta_t(p) = \frac{d}{dt}\Big|_{t=0}\theta_t(\pi(p)) = X(\pi(p)). \]

The fact that H is constant on the leaves ensures the existence of a smooth function h on N/F uniquely determined by i*H = π*h. For any f ∈ C∞(N/F) and F ∈ C∞(M) its local extension with dF(E) = 0, we have that df(X)(π(p)) = X(π(p))f = (π_*X_H(p))f = df(π_*X_H(p)) = d(π*f)(X_H(p)) = dF(X_H(p)) = {H, F}(p) = {h, f}_{N/F}(π(p)).
Thus, X = X h , and we are done.
The reduced Poisson dynamics allows us to see that Marsden-Ratiu and Marsden-Weinstein reduction coincide in the symplectic case, in the following sense. Let (M, {·, ·}) be a Poisson manifold and µ : M → g* an equivariant moment map for some canonical action of a Lie group G on M. Let α ∈ g* be a clean value for µ and suppose that G_α acts freely and properly on µ^{-1}(α). Then there is a unique Poisson structure {·, ·}_α on M//_αG := µ^{-1}(α)/G_α such that if f, h ∈ C∞(M//_αG) and F, H ∈ C∞(M) are local extensions of π*f and π*h, respectively, constant on the G-orbits on µ^{-1}(α), meaning that dF(T_p(G · p)) = dH(T_p(G · p)) = 0 for every p ∈ µ^{-1}(α), then π*{f, h}_α = i*{F, H}.

3 Implosion

Symplectic implosion was introduced by Guillemin, Jeffrey and Sjamaar in 2002, [GJS02]. Loosely speaking, it is a way of "abelianizing" the action of a compact Lie group on a symplectic manifold at the cost of introducing singularities in the manifold. That is, if G is a compact Lie group acting in a Hamiltonian fashion on a symplectic manifold (M, ω), then the so-called imploded cross-section M_impl inherits a Hamiltonian action of a maximal torus T ⊆ G such that M//_αG = M_impl//_αT for certain values of α. The price to pay is that M_impl will not be a symplectic manifold in general, but a stratified symplectic space.
Symplectic Implosion
We fix once and for all a compact connected Lie group G and a maximal torus T ⊆ G. Let g denote the Lie algebra of G and t ⊆ g the Lie algebra of T .
Root Decomposition and Weyl Chambers
We first review the root decomposition of g and the concept of Weyl chambers. Here we mainly follow [Sep07]. Let z = {ξ ∈ g : [ξ, η] = 0 for all η ∈ g} be the center of g and let [g, g] be the ideal generated by {[ξ, η] : ξ, η ∈ g}. Then [Sep07, Thm. 5.18] tells us that g = z ⊕ [g, g] and that [g, g] is semisimple. On the other hand, if we write C for complexification, g_C always admits an Ad-invariant (Hermitian) inner product (see [Sep07, Lem. 5.6]), so that the ad action is skew-Hermitian. Hence, ad t_C is simultaneously diagonalizable and there is a finite set ∆ ⊆ t*_C \ {0} such that

\[ \mathfrak g_{\mathbb C} = \mathfrak t_{\mathbb C} \oplus \bigoplus_{\alpha \in \Delta} \mathfrak g_\alpha, \tag{3.1} \]

where g_α := {ξ ∈ g_C : ad η(ξ) = α(η)ξ for all η ∈ t_C} ≠ {0}. Equation (3.1) is called the root decomposition of g_C. The set ∆ is called the set of roots, and, if we set t′ := t ∩ [g, g], it spans (t′)*_C ([Sep07, Thm. 6.11]).
Because of the skew-Hermiticity of ad g, we have that α is imaginary-valued on t. Hence, by Clinearity, any root α ∈ ∆ is completely determined by its restriction to t, and so we will interchangeably think of the roots as living in t * or t * C . Using the root decomposition it is very easy to see that ξ ∈ t lies in z if and only if α(ξ) = 0 for all α ∈ ∆.
The Killing form is defined as the bilinear symmetric form on g given by B(ξ, η) := tr(ad ξ ∘ ad η). Restricted to [g, g], it is negative definite (see [Sep07, Thm. 6.16]), and by C-linearity it can be extended to a nondegenerate bilinear symmetric form on [g_C, g_C]. By taking any nondegenerate bilinear symmetric form on z_C (notice that any bilinear form on z_C is Ad-invariant), we can extend B further to a nondegenerate and Ad-invariant bilinear symmetric form ⟨·, ·⟩ on all of g_C such that z_C and [g_C, g_C] are orthogonal. For any λ ∈ g*_C, let ζ_λ ∈ g_C be the unique element such that λ(ξ) = ⟨ζ_λ, ξ⟩ for all ξ ∈ g_C; we can then define a nondegenerate and Ad*-invariant bilinear symmetric form on g*_C as ⟨λ, β⟩ := ⟨ζ_λ, ζ_β⟩. Observe that if λ ∈ t*_C, then ζ_λ ∈ t_C. Indeed, first of all note that in the root decomposition (3.1), the two terms in the direct sum are mutually orthogonal, since if ξ ∈ t_C and η_α ∈ g_α for some α ∈ ∆, then for any ζ ∈ t,

\[ 0 = \langle \mathrm{ad}\,\zeta(\xi), \eta_\alpha \rangle + \langle \xi, \mathrm{ad}\,\zeta(\eta_\alpha) \rangle = \alpha(\zeta)\,\langle \xi, \eta_\alpha \rangle. \]
Hence, if we write ⊥ for the orthogonal space, λ ∈ t*_C = ⋂_{α∈∆} g_α° if and only if ζ_λ ∈ ⋂_{α∈∆} g_α^⊥ = t_C. We say that a subset Σ ⊆ ∆ is a system of simple roots if it spans t* (since α(z) = 0, it makes sense to think of the roots as elements of t*) and any root in ∆ can be written as a linear combination of the elements of Σ with either nonnegative or nonpositive coefficients. The set t* \ ⋃_{α∈∆} α^⊥ decomposes into some finite number of connected components, each of which is called an open Weyl chamber.
We fix once and for all a system of simple roots Σ and the corresponding fundamental closed Weyl chamber t * + .
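A standard example to keep in mind (not discussed in the source at this point): for G = SU(n) and T the diagonal maximal torus, t consists of the traceless diagonal matrices with imaginary entries, the roots are e_i − e_j for i ≠ j, a system of simple roots is {e_i − e_{i+1}}_{i=1}^{n−1}, and the fundamental closed Weyl chamber can be identified with

\[ \{(\lambda_1, \dots, \lambda_n) : \lambda_1 \ge \dots \ge \lambda_n, \ \textstyle\sum_i \lambda_i = 0\}. \]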
Symplectic Implosion
Recall that for a group H, its commutator subgroup is defined as [H, H] := ⟨ghg^{-1}h^{-1} : g, h ∈ H⟩. We give now the basic construction of symplectic implosion.
Definition 3.1. Let (M, ω) be a symplectic manifold with an equivariant moment map µ : M → g* arising from a Hamiltonian G-action Φ. We define the following equivalence relation: p ∼ q if and only if there is some g ∈ [G_{µ(p)}, G_{µ(p)}] such that Φ_g(p) = q. We define the imploded cross-section of M as the quotient space M_impl := µ^{-1}(t*_+)/∼, equipped with the quotient topology. Indeed, ∼ is an equivalence relation because of the equivariance of µ: if p ∼ q and g ∈ [G_{µ(p)}, G_{µ(p)}] is such that Φ_g(p) = q, then µ(q) = µ(Φ_g(p)) = Ad*(g)µ(p) = µ(p), so that the relation is symmetric.
We define the following partial order on the set of faces of t * + : if σ and τ are faces of t * + we say that σ ≤ τ if σ ⊆ τ .
To give a first description of M impl , we need the fact that the isotropy group with respect to the coadjoint action is constant on the faces of the fundamental Weyl chamber.
Proposition 3.2. The isotropy group by the coadjoint action is the same for all the elements of a given face in t * + . Moreover, T is contained in any of these isotropy groups and it is the isotropy group of any element in the interior face of t * + .
Proof. Since G is connected, the isotropy groups for the coadjoint action are also connected (see the first remark after [GLS96, Lem. 2.3.2]). Hence, the equality of isotropy groups is equivalent to the equality of isotropy algebras. Fix an (n − k)-face σ ⊆ t*_+ defined by a subset Σ_0 ⊆ Σ. Let λ ∈ σ. By Proposition 1.14 and Corollary 1.9, the complexification of its isotropy algebra is

\[ (\mathfrak g_\lambda)_{\mathbb C} = \{\xi \in \mathfrak g_{\mathbb C} : \xi_{\mathfrak g^*}(\lambda) = 0\}, \]

where here we are thinking of λ as an element in the annihilator of z_C ⊕ ⊕_{α∈∆} g_α. If we define ∆_0 := {α ∈ ∆ : B(λ, α) = 0}, we will show that

\[ (\mathfrak g_\lambda)_{\mathbb C} = \mathfrak t_{\mathbb C} \oplus \bigoplus_{\alpha \in \Delta_0} \mathfrak g_\alpha, \]

and this will give the results, since ∆_0 is totally determined by Σ_0, which is independent of λ.
Hence, for a face σ, we may write without confusion G_σ to refer to the isotropy group of any of its elements. If we call F the set of faces of t*_+, then t*_+ = ⨆_{σ∈F} σ. Thus, we may rewrite the imploded cross-section (set-theoretically) as

\[ M_{\mathrm{impl}} = \bigsqcup_{\sigma \in \mathcal F} \mu^{-1}(\sigma)/[G_\sigma, G_\sigma]. \tag{3.2} \]
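An illustrative example, taken as known from [GJS02] rather than derived here: for G = SU(2), the fundamental Weyl chamber is a half-line, its faces are the open ray and the vertex {0}, and the imploded cotangent bundle (T*SU(2))_impl can be identified with C² with its standard symplectic structure, the singular point of the implosion being the origin.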
Cross-section Theorems and Symplectic Decomposition
We will now see that each of the disjoint sets of eq. (3.2) is actually a (possibly singular) symplectic quotient. The basic tool to see this is Guillemin and Sternberg's cross-section theorem, [GS90, Thm. 26.7], or rather an improved version due to Lerman, Meinrenken, Tolman and Woodward, [LMTW98, Thm. 3.8]. It is a way of obtaining symplectic submanifolds through the moment map. Let (M, ω) be a symplectic manifold with a Hamiltonian G-action, where G is a Lie group with Lie algebra g. Let µ : M → g* be the associated equivariant moment map. For λ ∈ g* we write G · λ for the coadjoint orbit.
Theorem 3.3 (Guillemin-Sternberg). Let λ ∈ g * and Z ⊆ g * a submanifold perfectly transverse to G · λ at λ, that is, such that T λ Z ⊕ T λ (G · λ) = g * . Then, if p ∈ M with µ(p) = λ, there is a neighborhood U of p such that µ −1 (Z) ∩ U is a symplectic submanifold of M .
Proof. We follow the proof in [GS90], correcting a mistake at the end of the proof. We repeatedly use Proposition 1.14, Corollary 1.9 and Lemma 2.1.
First of all, since T_λ(G · λ) = µ_*(T_p(G · p)), µ is transverse to Z. Hence in a neighborhood of p, we have that µ^{-1}(Z) is a submanifold and T_pµ^{-1}(Z) = µ_*^{-1}(T_λZ). To see that in fact it is a symplectic submanifold, one must check that the restriction of ω to T_pµ^{-1}(Z) is nondegenerate.
If G is compact and connected, which from now on we suppose, for any λ ∈ g* there is always a submanifold S ⊆ g* perfectly transverse to λ, called the natural slice. What the refinement of the cross-section theorem provided by Lerman, Meinrenken, Tolman and Woodward affirms is that µ^{-1}(S) is a symplectic submanifold not only locally, but also globally.
Before giving the definition of a slice to an action, recall that if π : P → M is a principal G-bundle and N is a manifold with a left G-action, then the associated bundle with fiber N is P ×_G N := (P × N)/G, where the G-action on P × N is (p, n) · g := (p · g, g^{-1} · n) and the projection is given by π̄([p, n]) := π(p).

Definition 3.4. Let M be a manifold with a G-action. If p ∈ M, we say that a submanifold S ⊆ M is a slice for the G-action at p if

(S1) S is G_p-invariant,
(S2) G ×_{G_p} S → M given by [g, s] ↦ Φ_g(s) is a diffeomorphism onto its image, and
(S3) G · S is an open neighborhood of G · p.
Notice that (S2) makes sense because the action of G p on G by right translations defines a principal bundle G → G/G p and (S1) ensures the existence of a G p -action on S. Note also that by (S2), for any s ∈ S, if Φ g (s) ∈ S for some g ∈ G, then g ∈ G p . Therefore (G · s) ∩ S = G p · s and G s ⊆ G p .
The way one must think of a slice at p is that it is somehow transverse to the orbits near p, and so it can be used to "parameterize" them in a neighborhood of G · p via the diffeomorphism G × Gp S → G · S, being this parameterization "degenerate" by G p , in the sense that two points in S represent the same orbit if and only if there is some element of G p taking one to the other.
The slice is perfectly transverse to G · p at p: since G · S is open and G ×_{G_p} S → G · S is a diffeomorphism,

\[ T_pM = T_{[e,p]}(G \times_{G_p} S) \cong (T_eG \oplus T_pS)/T_{(e,p)}(G_p \cdot (e, p)), \]

and this quotient is naturally identified with T_p(G · p) ⊕ T_pS. It is the case that if S is a G_p-invariant submanifold perfectly transverse to G · p at p fulfilling that if s ∈ S and Φ_g(s) ∈ S for some g ∈ G, then g ∈ G_p, then this is sufficient for S to be a slice at p (see [GLS96, Sect. 2.3.2]). This allows us to construct a slice for the coadjoint action at every point λ ∈ t*_+, called the natural slice at λ.
By construction, it is a slice for the coadjoint action at λ. We are now ready to state a second version of the cross-section theorem.
Theorem 3.6 ([LMTW98, Thm. 3.8]). Let σ be a face of t * + , and let S σ be the natural slice for the coadjoint action at any point in σ. Then the symplectic cross-section M σ := µ −1 (S σ ) is a G σ -invariant symplectic submanifold of M and the restriction of µ to M σ is a moment map for the G σ -action on it.
Let σ be a face in t*_+. Since G_σ is compact, g_σ = z_σ ⊕ [g_σ, g_σ], where z_σ is the center of g_σ. Hence g*_σ = z*_σ ⊕ [g_σ, g_σ]*. If we let µ_σ := µ|_{M_σ}, then the composition of µ_σ with the projection g*_σ → [g_σ, g_σ]* is a moment map for the action of [G_σ, G_σ] on M_σ, whose zero locus is µ_σ^{-1}(z*_σ).

Poisson submanifolds are somehow the most natural submanifolds to consider in a Poisson manifold. More natural in the sense that they obviously inherit a Poisson structure from the ambient space. Observe that a submanifold N is Poisson if and only if X_f is tangent to N for every f ∈ C∞(M), because Π(T_p*M) = span{X_f(p) : f ∈ C∞(M)} for any p ∈ N. The most naive approach to a generalization of the Guillemin-Sternberg cross-section theorem to the Poisson case would say that if we have λ ∈ g* and Z ⊆ g* a submanifold perfectly transverse to G · λ at λ, then µ^{-1}(Z) is a Poisson submanifold locally around some p with µ(p) = λ. But this is certainly not true, because if ξ ∈ g, then X_{μ̂(ξ)} = ξ_M cannot in general be tangent to µ^{-1}(Z) at p: we have T_pµ^{-1}(Z) = µ_*^{-1}(T_λZ), while µ_*ξ_M(p) = ξ_{g*}(λ) ∈ T_λ(G · λ), and T_λ(G · λ) is precisely the direct sum complement of T_λZ! Thus, the generalization must be done with greater care. As we will see, the correct notion is that of a Poisson transversal (sometimes called a cosymplectic submanifold).
There are some equivalent ways to characterize this type of submanifold. Recall the definition: a submanifold N ⊆ M of a Poisson manifold (M, Π) is a Poisson transversal if T_pM = T_pN + Π(T_pN°) for every p ∈ N. This is equivalent to the sum being direct:

\[ T_pM = T_pN \oplus \Pi(T_pN°), \qquad p \in N. \]

Proof. If N is a Poisson transversal and Π(λ) ∈ T_pN ∩ Π(T_pN°) for some p ∈ N and λ ∈ T_pN°, then for every w ∈ T_pM there are v ∈ T_pN and α ∈ T_pN° such that w = v + Π(α), and, hence, λ(w) = λ(Π(α)) = −α(Π(λ)) = 0, so that λ = 0 and the intersection is trivial; the converse implication is immediate.
Though less obviously than in the case of a Poisson submanifold, Poisson transversals also carry a natural Poisson structure inherited from the ambient manifold. Because of the fact that the sum of T N and Π(T N • ) in a Poisson transversal is actually direct, we can identify T * N with Π −1 (T N ) and Π(T N • ) * with T N • , and therefore split Π| N in a suitable sense.
This splitting yields a bivector field Π_N ∈ X²(N), the component of Π|_N tangent to N. We will now show that in fact Π_N is a Poisson bivector field, defining the sought Poisson structure on N. Before proceeding to the proof, we need a technical lemma, whose proof is immediate because the Schouten-Nijenhuis bracket is defined pointwise.
Lemma 3.14. Let N ⊆ M be a submanifold and let X ∈ X^n(M) and Y ∈ X^m(M) be multivector fields tangent to N, in the sense that X|_N ∈ X^n(N) and Y|_N ∈ X^m(N). Then [X, Y] is tangent to N and [X, Y]|_N = [X|_N, Y|_N].

Proof (that Π_N is Poisson). It only remains to show that [Π_N, Π_N] = 0. We follow the proof in [Xu03], taking V as in Lemma 3.13 and applying Lemma 3.14.

In terms of smooth functions on N, the Poisson structure can be described in the following way: let f, g ∈ C∞(N) and let F, G ∈ C∞(M) be extensions of f and g, respectively, such that dF or dG annihilates Π(TN°) locally; then {f, g}_N = {F, G}|_N.
In the case where (M, Π) is symplectic, with symplectic form ω, since TM|_N = TN ⊕ Π(TN°) = TN ⊕ ω^{-1}(TN°) = TN ⊕ TN^⊥, where TN^⊥ denotes the symplectic orthogonal, we have that N is a symplectic submanifold. We now see that Poisson transversals are the correct candidate for a Poisson cross-section theorem.
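A minimal example, not taken from the source: let M = R³ with the rank-2 Poisson bivector Π = ∂_x ∧ ∂_y. If N is the z-axis, then T_pN° = span(dx, dy), and Π(dx) = ∂_y, Π(dy) = −∂_x, so T_pM = T_pN ⊕ Π(T_pN°) and N is a Poisson transversal (with induced Poisson structure Π_N = 0). By contrast, for the plane N′ = {z = 0} one has T_pN′° = span(dz) and Π(dz) = 0, so N′ is not a Poisson transversal; it is instead a Poisson submanifold.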
Poisson Cross-section Theorem
We are now ready to prove the Poisson cross-section theorem. Let (M, Π) be a Poisson manifold with a Hamiltonian G-action, where G is a Lie group with Lie algebra g. Let µ : M → g * be the associated equivariant moment map.
Theorem 3.17 (Poisson Cross-section). Let λ ∈ g * and Z ⊆ g * a submanifold perfectly transverse to G · λ at λ, that is, such that T λ Z ⊕ T λ (G · λ) = g * . Then, if p ∈ M with µ(p) = λ, there is a neighborhood U of p such that µ −1 (Z) ∩ U is a Poisson transversal of M .
As noted before, this is just the beginning towards a Poisson implosion construction. There is still a long way to go. Indeed, let us recall the successive steps we have taken until reaching the conclusion that M_impl partitions into symplectic manifolds such that its reduction by the T-action equals the reduction of M by the G-action:

1. the Lerman-Meinrenken-Tolman-Woodward cross-section theorem, Theorem 3.6, which generalizes the Guillemin-Sternberg cross-section theorem, Theorem 3.3, allowed us to write M_impl as the disjoint union of possibly singular symplectic quotients;

2. the Sjamaar-Lerman reduction theorem, Theorem 3.7, ensured that these symplectic quotients can be further decomposed into regular symplectic quotients, by using the decomposition of M into orbit type submanifolds;

3. symplectic reduction by stages let us see that M//_λG and M_impl//_λT are symplectomorphic for λ ∈ t*_+.
The path to follow seems rather natural: Marsden-Ratiu reduction by Poisson distributions gives a natural framework for reducing a Poisson manifold by a Hamiltonian action; hence, it seems plausible that the results just enunciated can be properly generalized to the Poisson setting, obtaining a decomposition of the Poisson imploded cross-section into Poisson manifolds such that item 3 applies (substituting "symplectomorphic" by "canonically diffeomorphic"). To see item 3 in the Poisson setting, we would somehow need Poisson reduction by stages, and, happily, this has already been considered; see [Mar+07, Sect. 5.3].
Our contribution is the first part of the generalization of step 1. We have shown that the Poisson manifolds into which the Poisson imploded cross-section will partition will be reduced spaces of Poisson transversals of the original manifold. This makes sense because, as has been shown, Poisson transversals inherit a natural Poisson structure from the ambient manifold.
Remember that a graded algebra is an algebra (V, ∧) decomposed as a direct sum of vector spaces V = ⊕_{k=0}^∞ V^k satisfying V^k ∧ V^l ⊆ V^{k+l}. A linear map D : V → V is said to be of degree a if D(V^k) ⊆ V^{k+a}. If, in addition, it fulfills

\[ D(\alpha \wedge \beta) = D\alpha \wedge \beta + \varepsilon^k\,\alpha \wedge D\beta, \]

for α ∈ V^k, β ∈ V^l and ε = ±1, then it is called a derivation for ε = 1 or an antiderivation for ε = −1.
There are three natural derivations on (Ω•(M), ∧): the exterior differential d, which is a degree 1 antiderivation with d² = 0 such that for any f ∈ C∞(M), df is the differential of f; the inner differential or inner product i_X with X ∈ X(M), a degree −1 antiderivation with i_X² = 0, defined by i_Xα(X_2, ..., X_k) := α(X, X_2, ..., X_k), for α ∈ Ω^k(M) and X_i ∈ X(M); and the Lie derivative L_X, a degree 0 derivation commuting with d, defined by L_Xα := d/dt|_{t=0} θ_t^*α, for any α ∈ Ω•(M) and where θ_t is the flow of X. They are related by Cartan's magic formulas

\[ L_X = d \circ i_X + i_X \circ d, \qquad i_{[X,Y]} = L_X \circ i_Y - i_Y \circ L_X. \]

The Lie derivative can be extended in the obvious way to act on any covariant tensor field A ∈ Γ((T*M)^{⊗k}).
For a 1-form α ∈ Ω¹(M) and a 2-form ω ∈ Ω²(M), and vector fields X, Y, Z ∈ X(M), the following formulas for the exterior differential are useful:

\[ d\alpha(X, Y) = X\,\alpha(Y) - Y\,\alpha(X) - \alpha([X, Y]), \]

\[ d\omega(X, Y, Z) = X\,\omega(Y, Z) - Y\,\omega(X, Z) + Z\,\omega(X, Y) - \omega([X, Y], Z) + \omega([X, Z], Y) - \omega([Y, Z], X). \] | 2022-02-14T06:48:10.908Z | 2022-02-11T00:00:00.000 | {
"year": 2022,
"sha1": "ed50b6714db4b2f4ca754e3a5c4cdb1b1755b825",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ed50b6714db4b2f4ca754e3a5c4cdb1b1755b825",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
22165562 | pes2o/s2orc | v3-fos-license | The relationship of blood lead to systolic blood pressure in a longitudinal study of policemen.
We examined the relationship of blood lead level to systolic and diastolic blood pressure in a longitudinal study of 89 Boston, MA, policemen. At the second examination blood lead level and blood pressure were measured in triplicate. Blood pressure measurements were taken in a similar fashion in years 3, 4, and 5. Multivariate analysis using a first-order autoregressive model revealed that after adjusting for previous systolic blood pressure, body mass index, age, and cigarette smoking, an elevated blood lead level was a significant predictor of subsequent systolic blood pressure. Bootstrap simulations of these models provided supporting evidence for the observed association. These data suggest that blood lead level can influence systolic blood pressure even within the normal range.
Introduction
A variety of epidemiologic studies have suggested a small but statistically significant effect of blood lead level on systolic blood pressure and suggest a weaker association with diastolic blood pressure as well (1)(2)(3)(4). These studies were cross-sectional in design and involved large numbers of subjects. We examined the relationship between blood lead level and longitudinal change in blood pressure in a small group of policemen under observation for health outcomes related to environmental work exposures. Using computer-intensive statistical techniques, we demonstrated an effect of blood lead on systolic blood pressure similar to that seen in large cross-sectional surveys.
Materials and Methods
The populations of subjects studied have been characterized in previous reports (5)(6)(7). The men of two

Pulmonary function testing was measured on a water-filled spirometer and is the subject of a separate report (7). A blood specimen was obtained to determine hematocrit concentration and to make peripheral smears to assess basophilic stippling. Blood lead concentration was determined on a small sample of subjects to validate their traffic exposure histories. Blood lead concentration was determined using the technique of atomic absorption spectrophotometry (9). Blood pressure was measured with a random-zero blood pressure machine to minimize digit preference. With the subject seated, systolic pressure and fifth-phase diastolic pressure were measured in the left arm to the nearest 2 mm Hg.
The study was a longitudinal investigation with initial screening beginning in 1969-1970 and with observations completed in 1974-1975. Blood pressure (in mm Hg) was recorded in years 2 through 5. The mean of triplicate measures of systolic pressure and diastolic pressure at each visit was used for analysis. Age in years, body mass index (in kg/m²), and current smoking status (recorded as 1 = current, 0 = never or ex-smoker) were available for years 1 through 5. Current cigarette smokers were defined as subjects who smoked as many as one cigarette a day in the study year. Blood lead values (in µg/100 mL) were collected only in year 2. Based on the distribution of blood lead values in our sample and in that of the United States population (10), blood lead values were divided into high (≥ 30 µg/100 mL) and low (≥ 20 and < 30 µg/100 mL) groups for purposes of the regression analysis. This gave comparable numbers of subjects in each category. These two groups were compared with our reference group, in which values were < 20 µg/100 mL.
To examine the relationship of blood lead concentration to change in blood pressure over the 4 years of the study, a Markov-type autoregressive model was used (11). In this model, blood pressure (systolic or diastolic) at time t (Y_t) is related to blood pressure at t−1 (Y_{t−1}) and the levels of other covariates at time t or at previous time points. Specifically, the model takes the form

Y_t = A + C·Y_{t−1} + B_1·X_{1t} + B_2·X_{2t} + ... + e_t

This model has several advantages for the analysis of longitudinal data sets. Specifically, the model uses the data efficiently, since any individual for whom complete data are available for any two consecutive years (t and t−1) will contribute data to the model. Secondly, the model does not impose a particular shape (i.e., linearity) on the relationship between the dependent and independent variables. Finally, this model can be fitted using ordinary statistical software packages. The model assumes that the residual e's are independent with constant variance (σ²) both within and between individuals. An additional assumption is that the relationship between the outcome variable and the independent variables is the same for all individuals (fixed effects model).
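As an illustration only, the following sketch shows how such a first-order autoregressive model could be fitted by ordinary least squares in Python; it is not the authors' code, and the file name and column names (subject, year, sbp, bmi, age, smoker, lead_hi, lead_low) are hypothetical.

# Hypothetical sketch of the first-order autoregressive (Markov-type) model
# Y_t = A + C*Y_{t-1} + B1*X1_t + ... + e_t, fitted by ordinary least squares
# on person-year pairs with complete data for two consecutive years.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("policemen_long.csv")            # one row per subject-year (hypothetical file)
df = df.sort_values(["subject", "year"])
# previous-year systolic pressure; a pair is valid only when years are consecutive
df["sbp_prev"] = df.groupby("subject")["sbp"].shift(1)
df["year_prev"] = df.groupby("subject")["year"].shift(1)
pairs = df[(df["year"] - df["year_prev"]) == 1].dropna(
    subset=["sbp", "sbp_prev", "bmi", "age", "smoker", "lead_hi", "lead_low"])

model = smf.ols(
    "sbp ~ sbp_prev + bmi + age + smoker + lead_hi + lead_low",
    data=pairs).fit()
print(model.summary())   # the lead_hi coefficient is the quantity of interest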
A cross-validation was undertaken to determine if the magnitude and significance of the regression coefficients obtained for the blood lead variables were the result of the disproportionately large contribution of values from a few individuals (12). To determine if an individual was an outlier, we used a modification of Tukey's fences applied to the cross-validation exercise (13). After the full cross-validation, we determined the subject whose exclusion has the largest influence on any regression coefficient (measured in units of interquartile range away from either the first or the third quartiles of the distribution of the values of all the regression coefficients obtained from the crossvalidation). This individual was then removed and the cross-validation repeated. This process was continued until all regression coefficients resulting from a cross-validation exercise were within five interquartile range units away from either the first or the third quartiles.
To determine the variability of the regression coefficients, without the assumption that the residuals are normally distributed, a bootstrap analysis was performed (14). In this analysis, we generated 1000 (bootstrap) samples equal in size to the original sample by randomly sampling with replacement from the original pool of individuals. The distribution of the coefficients for the bootstrap samples can be considered as though they were coming from real samples, and thus they provide a measure of the statistical precision of the original estimates of the regression coefficients.
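Continuing the previous sketch (again hypothetical, not the authors' code), such a subject-level bootstrap could be implemented as follows: individuals, rather than person-year pairs, are resampled with replacement, so that within-person correlation is preserved.

# Hypothetical sketch of the bootstrap: resample individuals with replacement,
# refit the autoregressive model on each resample, and summarize the sampling
# distribution of the high-blood-lead coefficient. Assumes the `pairs`
# DataFrame and model formula from the previous sketch.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bootstrap_lead_coef(pairs: pd.DataFrame, n_boot: int = 1000, seed: int = 0):
    rng = np.random.default_rng(seed)
    subjects = pairs["subject"].unique()
    coefs = []
    for _ in range(n_boot):
        drawn = rng.choice(subjects, size=len(subjects), replace=True)
        # a subject drawn k times contributes its rows k times
        boot = pd.concat([pairs[pairs["subject"] == s] for s in drawn],
                         ignore_index=True)
        fit = smf.ols("sbp ~ sbp_prev + bmi + age + smoker + lead_hi + lead_low",
                      data=boot).fit()
        coefs.append(fit.params["lead_hi"])
    lo, hi = np.percentile(coefs, [5, 95])        # 90% percentile interval
    return float(np.mean(coefs)), (float(lo), float(hi))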
Results

Table 1 presents the cross-sectional data for the variables used in the longitudinal regression analysis. In the second year of the study, when blood lead was initially measured, the average subject was a normotensive, middle-aged man who was overweight. Roughly half of all subjects were cigarette smokers, and one-quarter of all subjects had a concentration of blood lead > 30 µg/100 mL.

A total of 95 men had blood lead determinations out of a total of 314 (30%). Information for six men was missing for all covariates. The other 89 men were entirely comparable to men without blood lead measurements for all covariates (17).
We modeled the level of current systolic blood pressure at time t as a function of previous systolic blood pressure at t−1, other independent variables known to influence blood pressure, and blood lead (Table 2). Seventy individuals provided 162 pairs of data (consecutive examinations) for this regression. There was a statistically significant association (p = 0.036) between a high (> 30 µg/100 mL) level of blood lead and subsequent systolic blood pressure. Similar modeling was performed for diastolic blood pressure, but no significant association was found (15).
To investigate whether the observations noted above were the product of a few influential points, an iterative cross-validation analysis was undertaken. For systolic pressure, three individuals were excluded so that all regression coefficients resulting from a cross-validation were within five interquartile range units away from either the first or the third quartile. The regression was repeated without the data of these subjects, and the results are presented in Table 3. Although the association between high systolic pressure and high blood lead level noted above is only of borderline significance (p = 0.097), the magnitude and direction of the observed relationships are essentially unchanged. On the other hand, the effect of the other covariates (age, body mass index, and smoking) is more consistent with known effects of these variables on blood pressure. In summary, the exclusion of influential points improved the relationship between systolic pressure and the independent variables (prior systolic pressure, body mass index, age, smoking status) and did not dramatically change the relationship between systolic pressure and high blood lead. This provides further support for the observed association of these variables.
In an attempt to estimate the variability (without the normality assumption) of the parameter estimates for blood lead, bootstrap simulations of the model were performed for systolic pressure (Fig. 1). For this purpose, we generated 1000 separate random samples by sampling with replacement from the 70 subjects who provided data on systolic pressure. Figure 1 suggests that the coefficient for high blood lead is greater than zero, with a mean value of 5.8 (90% CI, 1.5-11.5 mm Hg). This bootstrapping simulation confirms the association between high blood lead and high systolic pressure without the need to assume normality of the residuals.
Discussion
This longitudinal analysis demonstrates that blood lead levels at the upper range of normal are associated with mild elevations in systolic blood pressure in normotensive working men. The powerful statistical techniques used in this analysis have allowed us to estimate an effect of blood lead on blood pressure quite similar to that observed in large cross-sectional surveys (2)(3)(4). It is worth noting that our modeling approach would have allowed for repeated measurements of blood lead. Greater precision in the measure of exposure should have enhanced the statistical power of the analysis.
Selection bias is unlikely to account for the observed relationships, as subjects with blood lead measurements were essentially similar to subjects without blood lead measurements (17). In addition, no appreciable selective loss to follow-up could be observed in this cohort (Table 1).
[Figure 1 legend: coefficient for blood lead, Table 2; coefficient for blood lead excluding influential subjects, Table 3.]
In addition to possible bias, the small number of subjects could influence the precision of the regression coefficients. The cross-validation and subsequent study of influential data points (Table 3) provide an estimate of the smoking effect more consistent with published data than that observed with all the data ( Table 2). In addition, the dose-response relationship for low and high blood lead is more internally consistent when the influential data points are excluded ( Table 3).
The influential points were excluded in a blinded fashion, i.e., without regard to the magnitude or directionality of their effect on the parameter estimates. Although the exclusions do influence statistical significance, the parameter estimates for the effect of high blood lead on systolic pressure are similar in both analyses (Tables 2 and 3). This suggests that our results are not driven by data from a few individuals, an important consideration more likely in a small data set.
The bootstrap analysis assesses the statistical precision for the effect of blood lead on blood pressure and indicates that the 90% confidence interval for the parameter estimate for the effect of high blood lead (i.e., ≥ 30 µg/100 mL) on systolic pressure ranges from 1.5 to 11 mm Hg. What remains unclear is the reason for the elevation in blood lead in these men.
These 90% confidence limits encompass all of the point estimates from larger cross-sectional surveys. Indeed, the estimate of a 5 mm Hg increase in systolic blood pressure with high blood lead is almost identical to that observed in NHANES with 8000 subjects (3,4).
Previous investigations by Pocock et al. (15) and Shaper et al. (16) have suggested a role for both alcohol consumption and cigarette smoking as environmental sources of lead exposure other than drinking water. We tested alcohol in our model and could find no independent effect of alcohol consumption on blood pressure either cross-sectionally or longitudinally. The fact that blood lead contributed independently to our model (Tables 2 and 3) when cigarette smoking was included suggests that blood lead level itself, rather than cigarette smoking-induced blood lead elevation, has an influence on systolic pressure.
Recently, calcium intake has been shown to influence blood pressure and blood lead (18). We have no information on calcium for this cohort and thus could not examine this relationship.
Clearly, further epidemiologic and physiologic work is necessary to elucidate the mechanisms for the blood lead-blood pressure relationship. However, there seems to be remarkable consistency in the epidemiologic data, suggesting a small but consistent increase in systolic blood pressure with elevated blood lead levels.
"year": 1988,
"sha1": "bdc9ecb9bdfdeb2b6c3b040909d4edb61283eba4",
"oa_license": "pd",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1474617/pdf/envhper00429-0055.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bdc9ecb9bdfdeb2b6c3b040909d4edb61283eba4",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |